Dataset schema: id (int64, 39 to 79M) · url (string, lengths 32 to 168) · text (string, lengths 7 to 145k) · source (string, lengths 2 to 105) · categories (list, lengths 1 to 6) · token_count (int64, 3 to 32.2k) · subcategories (list, lengths 0 to 27)
63,718,197
https://en.wikipedia.org/wiki/Intrinsic%20motivation%20%28artificial%20intelligence%29
Intrinsic motivation in the study of artificial intelligence and robotics is a mechanism for enabling artificial agents (including robots) to exhibit inherently rewarding behaviours such as exploration and curiosity, grouped under the same term in the study of psychology. Psychologists consider intrinsic motivation in humans to be the drive to perform an activity for inherent satisfaction – just for the fun or challenge of it. Definition An intelligent agent is intrinsically motivated to act if the information content alone, or the experience resulting from the action, is the motivating factor. Information content in this context is measured in the information-theoretic sense of quantifying uncertainty. A typical intrinsic motivation is to search for unusual, surprising situations (exploration), in contrast to a typical extrinsic motivation such as the search for food (homeostasis). Extrinsic motivations are typically described in artificial intelligence as task-dependent or goal-directed. Origins in psychology The study of intrinsic motivation in psychology and neuroscience began in the 1950s, with some psychologists explaining exploration through drives to manipulate and explore; this homeostatic view, however, was criticised by White. An alternative explanation, from Berlyne in 1960, was the pursuit of an optimal balance between novelty and familiarity. Festinger described the difference between the internal and external view of the world as dissonance that organisms are motivated to reduce. A similar view was expressed in the 1970s by Kagan as the desire to reduce the incompatibility between cognitive structure and experience. In contrast to the idea of optimal incongruity, Deci and Ryan identified in the mid-1980s an intrinsic motivation based on competence and self-determination. Computational models An influential early computational approach to implementing artificial curiosity, introduced by Schmidhuber in the early 1990s, has since been developed into a "formal theory of creativity, fun, and intrinsic motivation". Intrinsic motivation is often studied in the framework of computational reinforcement learning (introduced by Sutton and Barto), where the rewards that drive agent behaviour are intrinsically derived rather than externally imposed and must be learnt from the environment. Reinforcement learning is agnostic to how the reward is generated: an agent will learn a policy (action strategy) from the distribution of rewards afforded by actions and the environment. Each approach to intrinsic motivation in this scheme is essentially a different way of generating the reward function for the agent. Curiosity vs. exploration Intrinsically motivated artificial agents exhibit behaviour that resembles curiosity or exploration. Exploration in artificial intelligence and robotics has been extensively studied in reinforcement learning models, usually by encouraging the agent to explore as much of the environment as possible, to reduce uncertainty about the dynamics of the environment (learning the transition function) and about how best to achieve its goals (learning the reward function). Intrinsic motivation, in contrast, encourages the agent to first explore aspects of the environment that confer more information, to seek out novelty. Recent work unifying state-visit-count exploration and intrinsic motivation has shown faster learning in a video game setting. Types of models Oudeyer and Kaplan have made a substantial contribution to the study of intrinsic motivation.
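Before the typology that follows, one concrete instance of "generating the reward function" described above is a count-based novelty bonus. This is a minimal sketch in Python (the class and parameter names are illustrative, not from any cited system): the agent is paid more for rarely visited states, one of the simplest ways to turn "seek out unusual, surprising situations" into a reward function.

```python
from collections import defaultdict

class CountBasedCuriosity:
    """Toy intrinsic-reward generator: rewards rarely visited states.

    The bonus beta / sqrt(visits) is one common choice; any decreasing
    function of the visit count gives a comparable novelty signal.
    """

    def __init__(self, beta=1.0):
        self.beta = beta
        self.visits = defaultdict(int)   # state -> visit count

    def reward(self, state):
        self.visits[state] += 1
        return self.beta / (self.visits[state] ** 0.5)

# Usage: add the intrinsic bonus to whatever extrinsic reward the
# environment provides before the agent's usual update rule runs.
curiosity = CountBasedCuriosity(beta=0.5)
total_reward = 0.0 + curiosity.reward(state=(2, 3))  # extrinsic + intrinsic
```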
Oudeyer and Kaplan define intrinsic motivation based on Berlyne's theory, and divide approaches to the implementation of intrinsic motivation into three categories that broadly follow its roots in psychology: "knowledge-based models", "competence-based models" and "morphological models". Knowledge-based models are further subdivided into "information-theoretic" and "predictive". Baldassarre and Mirolli present a similar typology, differentiating knowledge-based models into prediction-based and novelty-based. Information-theoretic intrinsic motivation The quantification of prediction and novelty to drive behaviour is generally enabled through the application of information-theoretic models, where agent state and strategy (policy) over time are represented by probability distributions describing a Markov decision process and the cycle of perception and action is treated as an information channel. These approaches claim biological feasibility as part of a family of Bayesian approaches to brain function. The main criticism and difficulty of these models is the intractability of computing probability distributions over large discrete or continuous state spaces. Nonetheless, a considerable body of work has built up modelling the flow of information around the sensorimotor cycle, leading to de facto reward functions derived from the reduction of uncertainty, including most notably active inference, but also infotaxis, predictive information, and empowerment. Competence-based models Steels' autotelic principle is an attempt to formalise the psychological concept of flow. Achievement, affiliation and power models Other intrinsic motives that have been modelled computationally include achievement, affiliation and power motivation. These motives can be implemented as functions of the probability of success or of incentive. Populations of agents can include individuals with different profiles of achievement, affiliation and power motivation, modelling population diversity and explaining why different individuals take different actions when faced with the same situation. Beyond achievement, affiliation and power A more recent computational theory of intrinsic motivation attempts to explain a large variety of psychological findings based on such motives. Notably, this model of intrinsic motivation goes beyond just achievement, affiliation and power by taking into consideration other important human motives. Empirical data from psychology were computationally simulated and accounted for using this model. Intrinsically Motivated Learning Intrinsically motivated (or curiosity-driven) learning is an emerging research topic in artificial intelligence and developmental robotics that aims to develop agents that can learn general skills or behaviours that can be deployed to improve performance in extrinsic tasks, such as acquiring resources. Intrinsically motivated learning has been studied as an approach to autonomous lifelong learning in machines and open-ended learning in computer game characters. In particular, when the agent learns a meaningful abstract representation, a notion of distance between two representations can be used to gauge novelty, hence allowing for efficient exploration of its environment. Despite the impressive success of deep learning in specific domains (e.g. AlphaGo), many in the field (e.g. Gary Marcus) have pointed out that the ability to generalise remains a fundamental challenge in artificial intelligence.
Intrinsically motivated learning, although promising in terms of being able to generate goals from the structure of the environment without externally imposed tasks, faces the same challenge of generalisation – how to reuse policies or action sequences, how to compress and represent continuous or complex state spaces and retain and reuse the salient features that have been learnt. See also Reinforcement Learning Markov decision process Motivation Predictive coding Perceptual control theory References Artificial intelligence Cognitive science Robotics engineering
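The information-theoretic reading of novelty described under "Information-theoretic intrinsic motivation" above can also be sketched minimally in Python (an illustrative toy, not one of the cited models such as active inference or empowerment): each observation is scored by its Shannon surprise, -log2 p(obs), under a running empirical model, so rare observations yield large intrinsic rewards and familiar ones small rewards.

```python
import math
from collections import Counter

class SurpriseReward:
    """Toy information-theoretic intrinsic reward: score each observation
    by its Shannon surprise, -log2 p(obs), under a crude empirical model
    of the observations seen so far."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def reward(self, obs):
        # Crude smoothing so unseen observations get a small but
        # nonzero probability (estimated before updating the model).
        p = (self.counts[obs] + 0.5) / (self.total + 1.0)
        self.counts[obs] += 1
        self.total += 1
        return -math.log2(p)

r = SurpriseReward()
print(r.reward("a"), r.reward("a"), r.reward("b"))  # ~1.0, ~0.42, ~2.58 bits
```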
Intrinsic motivation (artificial intelligence)
[ "Technology", "Engineering" ]
1,305
[ "Computer engineering", "Robotics engineering" ]
76,832,553
https://en.wikipedia.org/wiki/Lantheus%20Holdings
Lantheus, headquartered in Billerica, Massachusetts, is a company in the radiopharmaceuticals business. It has strategic partnerships with Bayer, Novartis and Regeneron, as well as GE Healthcare and Siemens Healthineers. Lantheus Holdings, which became a NASDAQ-listed company in 2015, is the parent company of Lantheus Medical Imaging, Inc. (formerly BMS Medical Imaging), Progenics Pharmaceuticals, Inc. (acquired 2020) and EXINI Diagnostics AB (est. 1999, acquired 2020). Lantheus has offices in Massachusetts, New Jersey, Canada and Sweden. References External links Radiopharmaceuticals Companies listed on the Nasdaq Companies based in Billerica, Massachusetts
Lantheus Holdings
[ "Chemistry" ]
155
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
76,847,003
https://en.wikipedia.org/wiki/Apple%20M4
Apple M4 is a series of ARM-based system on a chip (SoC) designed by Apple Inc., part of the Apple silicon series, including a central processing unit (CPU), a graphics processing unit (GPU), a neural processing unit (NPU), and a digital signal processor (DSP). The M4 chip was introduced in May 2024 for the iPad Pro (7th generation), and is the fourth generation of the M series Apple silicon architecture, succeeding the Apple M3. It was followed by the professional-focused M4 Pro and M4 Max in October 2024. The M4 series is built on TSMC's second-generation 3-nanometer process and contains 28 billion transistors. Design The base M4 features a 10-core design made up of four performance cores and six efficiency cores (with one performance core disabled on binned models). The SoC also includes a 10-core GPU (with the hardware-accelerated ray tracing, dynamic caching, and mesh shading introduced with the M3), as well as a 16-core NPU. The M4 Neural Engine has been significantly improved compared to its predecessor, with the advertised capability to perform up to 38 trillion operations per second, claimed to be more than double the advertised performance of the M3. The M4 NPU performs over 60× faster than the A11 Bionic, and is approximately 3× faster than the original M1. The M4 is packaged with LPDDR5X unified memory, supporting 120GB/sec of memory bandwidth. The SoC is currently offered in 8GB, 16GB, 24GB, and 32GB configurations. It is also Apple's first SoC to use the ARMv9 CPU architecture (specifically ARMv9.2-A). M4 Pro The M4 Pro features an up to 14-core CPU, with 10 performance cores and 4 efficiency cores, along with up to a 20-core GPU that Apple claims is twice as powerful as that of the M4 when used in the corresponding MacBook Pro. The M4 Pro is available with up to 64GB unified memory (Mac Mini) with a theoretical maximum bandwidth of 273GB/sec. M4 Max The M4 Max chip comes with up to 16 CPU cores, 40 GPU cores, and 16 Neural Engine cores, addressing up to 128GB unified memory with over half a terabyte per second (546GB/sec) of memory bandwidth. Performance Apple claims up to 50% more CPU performance and 4× more GPU performance on the M4 compared to the M2. The M4 is among the highest-scoring consumer SoCs in single-core benchmarks, according to sources such as the Geekbench benchmarking suite and Passmark Software's CPU benchmarks. The M4 does not outperform the M3 Pro in multi-core performance, but it does in single-core performance, where it competes with AMD's Ryzen 7 9700X and Intel's Core i9-14900K. In multithreaded performance, the M4 performs similarly to the 12-core M3 Pro. Additional features The M4 is the first iPad SoC to support hardware-accelerated AV1 decoding, as well as the hardware-accelerated mesh shading and ray tracing introduced to MacBooks with the M3. A new display controller has also been implemented to support the iPad Pro (7th generation)'s Tandem OLED display. Products that use the Apple M4 series M4 iPad Pro (7th generation) iMac (2024) Mac Mini (2024) MacBook Pro (14-inch, 2024) M4 Pro Mac Mini (2024) MacBook Pro (14-inch and 16-inch, 2024) M4 Max MacBook Pro (14-inch and 16-inch, 2024) Comparison with other SoCs The table below shows comparable SoCs. Notes References Apple silicon Computer-related introductions in 2024
Apple M4
[ "Technology", "Engineering" ]
817
[]
58,519,634
https://en.wikipedia.org/wiki/Schwarzschild%27s%20equation%20for%20radiative%20transfer
In the study of heat transfer, Schwarzschild's equation is used to calculate radiative transfer (energy transfer via electromagnetic radiation) through a medium in local thermodynamic equilibrium that both absorbs and emits radiation. The incremental change in spectral intensity (dI_λ, [W/sr/m²/μm]) at a given wavelength as radiation travels an incremental distance (ds) through a non-scattering medium is given by: dI_λ = n σ_λ [B_λ(T) − I_λ] ds, where n is the number density of absorbing/emitting molecules (units: molecules/volume), σ_λ is their absorption cross-section at wavelength λ (units: area), B_λ(T) is the Planck function for temperature T and wavelength λ (units: power/area/solid angle/wavelength – e.g. watts/cm²/sr/cm), and I_λ is the spectral intensity of the radiation entering the increment ds, with the same units as B_λ(T). This equation and various equivalent expressions are known as Schwarzschild's equation. The second term describes absorption of radiation by the molecules in a short segment of the radiation's path (−n σ_λ I_λ ds) and the first term describes emission by those same molecules (n σ_λ B_λ(T) ds). In a non-homogeneous medium, these parameters can vary with altitude and location along the path, formally making these terms n(s), σ_λ(s), B_λ(T(s)), and I_λ(s). Additional terms are added when scattering is important. Integrating the change in spectral intensity dI_λ [W/sr/m²/μm] over all relevant wavelengths gives the change in intensity dI [W/sr/m²]. Integrating over a hemisphere then affords the flux perpendicular to a plane (dF, [W/m²]). Schwarzschild's equation is the formula with which one can calculate the intensity of any flux of electromagnetic energy after passage through a non-scattering medium, provided the temperature, pressure, and composition of the medium are known. History The Schwarzschild equation first appeared in Karl Schwarzschild's 1906 paper "Ueber das Gleichgewicht der Sonnenatmosphäre" (On the equilibrium of the solar atmosphere). Background Radiative transfer refers to energy transfer through an atmosphere or other medium by means of electromagnetic waves or (equivalently) photons. The simplest form of radiative transfer involves a collinear beam of radiation traveling through a sample to a detector. That flux can be reduced by absorption, scattering or reflection, resulting in energy transmission over a path of less than 100%. The concept of radiative transfer extends beyond simple laboratory phenomena to include thermal emission of radiation by the medium – which can result in more photons arriving at the end of a path than entering it. It also deals with radiation arriving at a detector from a large source – such as the surface of the Earth or the sky. Since emission can occur in all directions, atmospheric radiative transfer (like Planck's law) requires units involving a solid angle, such as W/sr/m². At the most fundamental level, the absorption and emission of radiation are controlled by the Einstein coefficients for absorption, spontaneous emission and stimulated emission of a photon (B₁₂, A₂₁ and B₂₁) and the density of molecules in the ground and excited states (n₁ and n₂). However, in the simplest physical situation – blackbody radiation – radiation and the medium through which it is passing are in thermodynamic equilibrium, and the rates of absorption and emission are equal. The spectral intensity [W/sr/m²/μm] and intensity [W/sr/m²] of blackbody radiation are given by the Planck function and the Stefan–Boltzmann law. These expressions are independent of Einstein coefficients.
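The incremental equation at the head of this entry lends itself to direct numerical marching. Below is a minimal sketch in Python (the function names and the grey-body stand-in for the Planck function are illustrative assumptions, not part of the article): it steps dI_λ = n σ_λ (B_λ(T) − I_λ) ds along a path whose properties may vary with position, showing how the beam relaxes toward the local blackbody intensity.

```python
import math

def planck_like(T):
    """Stand-in for the Planck function B(T) at one wavelength; any
    increasing function of temperature serves for this illustration."""
    return 5.67e-8 * T**4 / math.pi   # grey-body shortcut [W/sr/m^2]

def integrate_schwarzschild(I0, path, ds):
    """March dI = n*sigma*(B(T) - I)*ds along a path.

    `path` is a sequence of (n, sigma, T) tuples, one per step of length
    ds, so the medium may vary along the path. Keep n*sigma*ds << 1 for
    this simple explicit scheme to stay accurate."""
    I = I0
    for n, sigma, T in path:
        I += n * sigma * (planck_like(T) - I) * ds
    return I

# Toy run: a uniform 250 K medium; the incoming beam (I0 = 100) relaxes
# toward the local blackbody intensity B(250 K), about 70.5 W/sr/m^2.
layer = (2.5e25, 4.0e-27, 250.0)   # n [1/m^3], sigma [m^2], T [K]
print(integrate_schwarzschild(I0=100.0, path=[layer] * 2000, ds=1.0))
```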
Absorption and emission often reach equilibrium inside dense, non-transparent materials, so such materials emit thermal infrared radiation of nearly blackbody intensity. Some of that radiation is internally reflected or scattered at a surface, producing an emissivity of less than 1. The same phenomenon makes the absorptivity of incoming radiation less than 1 and equal to the emissivity (Kirchhoff's law). When radiation has not passed far enough through a homogeneous medium for emission and absorption to reach thermodynamic equilibrium, or when the medium changes with distance, Planck's law and the Stefan–Boltzmann equation do not apply. This is often the case when dealing with atmospheres. If a medium is in local thermodynamic equilibrium (LTE), then Schwarzschild's equation can be used to calculate how radiation changes as it travels through the medium. A medium is in LTE when the fraction of molecules in an excited state is determined by the Boltzmann distribution. LTE exists when collisional excitation and collisional relaxation of any excited state occur much faster than absorption and emission. (LTE does not require the rates of absorption and emission to be equal.) The vibrational and rotational excited states of greenhouse gases that emit thermal infrared radiation are in LTE up to about 60 km. Radiative transfer calculations show negligible change (0.2%) due to absorption and emission above about 50 km. Schwarzschild's equation therefore is appropriate for most problems involving thermal infrared in the Earth's atmosphere. The absorption cross-sections (σ_λ) used in Schwarzschild's equation arise from the Einstein coefficients and the processes that broaden absorption lines. In practice, these quantities are measured in the laboratory, not derived from theory. When radiation is scattered (the phenomenon that makes the sky appear blue), or when the fraction of molecules in an excited state is not determined by the Boltzmann distribution (and LTE doesn't exist), more complicated equations are required. For example, scattering from clear skies reflects about 32 W/m² (about 13%) of incoming solar radiation back to space. Visible light is also reflected and scattered by aerosol particles and water droplets (clouds). Neither of these phenomena has a significant impact on the flux of thermal infrared through clear skies. Schwarzschild's equation cannot be used without first specifying the temperature, pressure, and composition of the medium through which radiation is traveling. When these parameters are first measured with a radiosonde, the observed spectrum of the downward flux of thermal infrared (DLR) agrees closely with calculations, and it varies dramatically with location. Where dI is negative, absorption is greater than emission, and the net effect is to warm the atmosphere locally. Where dI is positive, the net effect is "radiative cooling". By repeated approximation, Schwarzschild's equation can be used to calculate the equilibrium temperature change caused by an increase in GHGs, but only in the upper atmosphere, where heat transport by convection is unimportant. Derivation Schwarzschild's equation can be derived from Kirchhoff's law of thermal radiation, which states that absorptivity must equal emissivity at a given wavelength. (Like Schwarzschild's equation, Kirchhoff's law only applies to media in LTE.) Given a thin slab of atmosphere of incremental thickness ds, by definition its absorptivity is dI_a/I_λ, where I_λ is the incident radiation and dI_a is the radiation absorbed by the slab.
According to Beer's law, dI_a = −n σ_λ I_λ ds, so the absorptivity of the slab is n σ_λ ds. Also by definition, the emissivity is equal to dI_e/B_λ(T), where dI_e is the radiation emitted by the slab and B_λ(T) is the maximum radiation any object in LTE can emit. Setting absorptivity equal to emissivity affords dI_e = n σ_λ B_λ(T) ds. The total change in radiation, dI_λ, passing through the slab is then given by dI_λ = dI_e + dI_a = n σ_λ [B_λ(T) − I_λ] ds. Schwarzschild's equation has also been derived from the Einstein coefficients by assuming a Maxwell–Boltzmann distribution of energy between a ground and an excited state (LTE). The oscillator strength for any transition between ground and excited state depends on these coefficients. The absorption cross-section (σ_λ) is empirically determined from this oscillator strength and the broadening of the absorption/emission line by collisions, the Doppler effect and the uncertainty principle. Equivalent equations Schwarzschild's equation has been expressed in different forms and symbols by different authors. The quantity n σ_λ is known as the absorption coefficient (α_λ), a measure of attenuation with units of [cm⁻¹]. The absorption coefficient is fundamentally the product of a quantity of absorbers per unit volume, n [cm⁻³], times an efficiency of absorption (area/absorber), σ_λ [cm²]. Several sources replace n σ_λ with κ_λ ρ, where κ_λ is the absorption coefficient per unit density and ρ is the density of the gas. The absorption coefficient for spectral flux (a beam of radiation with a single wavelength, [W/m²/μm]) differs from the absorption coefficient for spectral intensity [W/sr/m²/μm] used in Schwarzschild's equation. Integration of an absorption coefficient over a path between two points affords the optical thickness (τ_λ) of that path, a dimensionless quantity that is used in some variants of the Schwarzschild equation. When emission is ignored, incoming radiation is reduced by a factor of e (≈2.72) when transmitted over a path with an optical thickness of 1. When expressed in terms of optical thickness, Schwarzschild's equation becomes dI_λ = [B_λ(T) − I_λ] dτ_λ. After integrating between a sensor located at τ = 0 and an arbitrary starting point in the medium, τ′, the spectral intensity of the radiation reaching the sensor, I_λ(0), is: I_λ(0) = I_λ(τ′) e^(−τ′) + ∫₀^τ′ B_λ(T) e^(−τ) dτ, where I_λ(τ′) is the spectral intensity of the radiation at the beginning of the path, e^(−τ′) is the transmittance along the path, and the final term is the sum of all of the emission along the path, each contribution attenuated by absorption along the path yet to be traveled. Relationship to Planck's and Beer's laws Both Beer's law and Planck's law can be derived from Schwarzschild's equation. In a sense, they are corollaries of Schwarzschild's equation. When the spectral intensity of radiation is not changing as it passes through a medium, dI_λ = 0. In that situation, Schwarzschild's equation simplifies to Planck's law: I_λ = B_λ(T). When I_λ > B_λ(T), dI_λ is negative, and when I_λ < B_λ(T), dI_λ is positive. As a consequence, the intensity of radiation traveling through any medium is always approaching the blackbody intensity given by Planck's law and the local temperature. The rate of approach depends on the density of absorbing/emitting molecules (n) and their absorption cross-section (σ_λ). When the intensity of the incoming radiation, I_λ, is much greater than the intensity of blackbody radiation, B_λ(T), the emission term can be neglected. This is usually the case when working with a laboratory spectrophotometer, where the sample is near 300 K and the light source is a filament at several thousand K. If the medium is homogeneous, n σ_λ does not vary with location; the derivation and this limiting case are summarized below.
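A compact restatement of the slab derivation and of the no-emission (Beer) limit, in LaTeX, using the symbols already defined. This is the standard textbook argument, sketched here for readability rather than quoted from the article's sources:

```latex
% Kirchhoff derivation for a thin slab of thickness ds
\begin{align*}
  \text{absorptivity (Beer's law):} \quad a &= \frac{-\,dI_a}{I_\lambda} = n\,\sigma_\lambda\,ds\\
  \text{emissivity (definition):}   \quad e &= \frac{dI_e}{B_\lambda(T)}\\
  \text{Kirchhoff's law }(a=e)\ \Rightarrow \quad dI_e &= n\,\sigma_\lambda\,B_\lambda(T)\,ds\\
  \text{net change:}\quad dI_\lambda = dI_e + dI_a &= n\,\sigma_\lambda\,\bigl[B_\lambda(T)-I_\lambda\bigr]\,ds\\
  \text{no-emission, homogeneous limit }(B_\lambda \approx 0):\quad
      I_\lambda(\ell) &= I_\lambda(0)\,e^{-n\,\sigma_\lambda\,\ell}
\end{align*}
```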
Integration of that limit over a path of length ℓ affords the form of Beer's law used most often in laboratory experiments: I_λ(ℓ) = I_λ(0) e^(−n σ_λ ℓ). Application to Climate Science If no other fluxes change, the law of conservation of energy demands that the Earth warm (from one steady state to another) until balance is restored between inward and outward fluxes. Schwarzschild's equation alone says nothing about how much warming would be required to restore balance. When meteorologists and climate scientists refer to "radiative transfer calculations" or "radiative transfer equations" (RTE), the phenomena of emission and absorption are handled by numerical integration of Schwarzschild's equation over a path through the atmosphere. Weather forecasting models and climate models use versions of Schwarzschild's equation optimized to minimize computation time. Online programs are available that perform computations using Schwarzschild's equation. Schwarzschild's equation is used to calculate the outward radiative flux from the Earth (measured in W/m² perpendicular to the surface) at any altitude, especially the "top of the atmosphere" or TOA. This flux originates at the surface for clear skies, or at cloud tops. Increments dI are calculated for layers thin enough to be effectively homogeneous in composition and flux. These increments are numerically integrated from the surface to the TOA to give the flux of thermal infrared to space, commonly referred to as outgoing long-wavelength radiation (OLR). OLR is the only mechanism by which the Earth gets rid of the heat delivered continuously by the sun. The net downward radiative flux of thermal IR (DLR) produced by emission from GHGs in the atmosphere is obtained by integrating dI from the TOA (where I₀ is zero) to the surface. DLR adds to the energy from the sun. Emission from each layer adds equally to the upward and downward fluxes. In contrast, different amounts of radiation are absorbed, because the upward flux entering any layer is usually greater than the downward flux. In "line-by-line" methods, the change in spectral intensity (dI_λ, W/sr/m²/μm) is numerically integrated using a wavelength increment small enough (less than 1 nm) to accurately describe the shape of each absorption line. The HITRAN database contains the parameters needed to describe 7.4 million absorption lines for 47 GHGs and 120 isotopologues. A variety of programs or radiative transfer codes can be used to process this data, including an online facility, SpectralCalc. To reduce the computational demand, weather forecast and climate models use broad-band methods that handle many lines as a single "band". MODTRAN is a broad-band method available online with a simple interface that anyone can use. To convert intensity [W/sr/m²] to flux [W/m²], calculations usually invoke the "two-stream" and "plane parallel" approximations. The radiative flux is decomposed into three components: upward (+z), downward (−z), and parallel to the surface. The third component contributes nothing to heating or cooling the planet. The vertical component of intensity is I cos(θ), where θ is the zenith angle (measured away from the vertical). The upward and downward intensities are then integrated over a forward hemisphere, a process that can be simplified by using a "diffusivity factor" or "average effective zenith angle" of 53°. Alternatively, one can integrate over all possible paths from the entire surface to a sensor positioned at a specified height above the surface for OLR, or over all possible paths from the TOA to a sensor on the surface for DLR.
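The layer-by-layer integration just described can be sketched in a few lines of Python. This is a toy single-band model with made-up layer values; the function name, the layer list, and the grey-body stand-in for B are illustrative assumptions, not any production radiative transfer code:

```python
import math

def olr_two_stream(I_surface, layers, diffusivity_deg=53.0):
    """Crude layered upward-flux estimate using the diffusivity-factor trick.

    Each layer is (optical_thickness, B) for a single band; the path
    through each layer is stretched by 1/cos(53 deg) to stand in for the
    hemispheric integral, and intensity is converted to flux with pi."""
    mu = math.cos(math.radians(diffusivity_deg))
    I = I_surface
    for tau, B in layers:
        t = math.exp(-tau / mu)          # layer transmittance on slant path
        I = I * t + B * (1.0 - t)        # attenuated beam + layer emission
    return math.pi * I                   # intensity [W/sr/m^2] -> flux [W/m^2]

# Toy atmosphere: surface intensity for ~288 K, three cooler layers above.
B = lambda T: 5.67e-8 * T**4 / math.pi
print(olr_two_stream(B(288), [(0.5, B(270)), (0.5, B(250)), (0.3, B(230))]))
# Prints roughly 228 W/m^2, an OLR-like number for this toy column.
```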
Greenhouse Effect Schwarzschild's equation provides a simple explanation for the existence of the greenhouse effect and demonstrates that it requires a non-zero lapse rate. Rising air in the atmosphere expands and cools as the pressure on it falls, producing a negative temperature gradient in the Earth's troposphere. When radiation travels upward through falling temperature, the incoming radiation I (emitted by the warmer surface or by GHGs at lower altitudes) is more intense than the local emission B_λ(T). dI is generally less than zero throughout the troposphere, and the intensity of outward radiation decreases as it travels upward. According to Schwarzschild's equation, the rate of fall in outward intensity is proportional to the density of GHGs (n) in the atmosphere and their absorption cross-sections (σ_λ). Any anthropogenic increase in GHGs will slow down the rate of radiative cooling to space, i.e. produce a radiative forcing, until a saturation point is reached. At steady state, incoming and outgoing radiation at the top of the atmosphere (TOA) must be equal. When the presence of GHGs in the atmosphere causes outward radiation to decrease with altitude, the surface must be warmer than it would be without GHGs – assuming nothing else changed. Some scientists quantify the greenhouse effect as the 150 W/m² difference between the average outward flux of thermal IR from the surface (390 W/m²) and the average outward flux at the TOA. If the Earth had an isothermal atmosphere, Schwarzschild's equation predicts that there would be no greenhouse effect and no enhancement of the greenhouse effect by rising GHGs. In fact, the troposphere over the Antarctic plateau is nearly isothermal. Both observations and calculations show a slight "negative greenhouse effect" – more radiation emitted from the TOA than from the surface. Although records are limited, the central Antarctic Plateau has seen little or no warming. Saturation In the absence of thermal emission, wavelengths that are strongly absorbed by GHGs can be significantly attenuated within 10 m in the lower atmosphere. Those same wavelengths, however, are the ones where emission is also strongest. In an extreme case, roughly 90% of 667.5 cm⁻¹ photons are absorbed within 1 meter by 400 ppm of CO₂ at surface density, but they are replaced by emission of an equal number of 667.5 cm⁻¹ photons. The radiation field thereby maintains the blackbody intensity appropriate for the local temperature. At equilibrium, I_λ = B_λ(T), and therefore dI_λ = 0 even when the density of the GHG (n) increases. This has led some to believe, falsely, that Schwarzschild's equation predicts no radiative forcing at wavelengths where absorption is "saturated". However, such reasoning reflects what some refer to as the surface budget fallacy: reaching erroneous conclusions by focusing on energy exchange near the planetary surface rather than at the top of the atmosphere (TOA). At wavelengths where absorption is saturated, increasing the concentration of a greenhouse gas does not change thermal radiation levels at low altitudes, but there are still important differences at high altitudes where the air is thinner. As density decreases with altitude, even the strongest absorption bands eventually become semi-transparent. Once that happens, radiation can travel far enough that the local emission, B_λ(T), can differ from the absorption of the incoming radiation, I_λ.
The altitude where the transition to semi-transparency occurs is referred to as the "effective emission altitude" or "effective radiating level". Thermal radiation from this altitude is able to escape to space. Consequently, the temperature at this level sets the intensity of outgoing longwave radiation. This altitude varies depending on the particular wavelength involved. Increasing the concentration of a greenhouse gas raises the effective emission altitude at which emitted thermal radiation is able to escape to space. The lapse rate (the change in temperature with altitude) at the effective radiating level determines how a change in concentration will affect outgoing emissions to space. For most wavelengths, this level is in the troposphere, where temperatures decrease with increasing altitude. This means that increasing concentrations of greenhouse gases lead to decreasing emissions to space (a positive incremental greenhouse effect), creating an energy imbalance that makes the planet warmer than it would be otherwise. Thus, the presence or absence of absorption saturation at low altitudes does not necessarily indicate the absence of radiative forcing in response to increased concentrations. The radiative forcing from doubling carbon dioxide occurs mostly on the flanks of the strongest absorption band. Temperature rises with altitude in the lower stratosphere; increasing greenhouse gas concentrations there increase radiative cooling to space, which is predicted by some to cause cooling above 14–20 km. References Radiometry
Schwarzschild's equation for radiative transfer
[ "Engineering" ]
3,884
[ "Telecommunications engineering", "Radiometry" ]
68,025,443
https://en.wikipedia.org/wiki/3-Quinuclidinyl%20thiochromane-4-carboxylate
3-Quinuclidinyl thiochromane-4-carboxylate is a research compound which is the most potent muscarinic antagonist known. Tests in vitro showed its binding affinity to be over 1000 times that of 3-quinuclidinyl benzilate, with a Kd of 2.47 picomolar (pM). See also CS-27349 EA-3167 Metixene References Muscarinic antagonists Thiochromanes 3-Quinuclidinyl esters
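For context on what a 2.47 pM Kd means in practice, here is a minimal Python sketch of the standard one-site binding isotherm (general receptor pharmacology, not data from the cited study; the function name is illustrative):

```python
def fractional_occupancy(ligand_molar, kd_molar):
    """Equilibrium receptor occupancy from the one-site binding isotherm:
    f = [L] / ([L] + Kd)."""
    return ligand_molar / (ligand_molar + kd_molar)

# At a ligand concentration equal to Kd (2.47 pM), occupancy is 50%;
# a 1000-fold weaker binder (Kd ~ 2.47 nM) is barely occupied there.
print(fractional_occupancy(2.47e-12, 2.47e-12))  # 0.5
print(fractional_occupancy(2.47e-12, 2.47e-9))   # ~0.001
```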
3-Quinuclidinyl thiochromane-4-carboxylate
[ "Chemistry" ]
116
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
68,026,297
https://en.wikipedia.org/wiki/Slide%20rule%20scale
A slide rule scale is a line with graduated markings inscribed along the length of a slide rule used for mathematical calculations. The earliest such device had a single logarithmic scale for performing multiplication and division, but soon an improved technique was developed which involved two such scales sliding alongside each other. Later, multiple scales were provided, the most basic being logarithmic but with others graduated according to the mathematical function required. Few slide rules have been designed for addition and subtraction; rather, the main scales are used for multiplication and division and the other scales are for mathematical calculations involving trigonometric, exponential and, generally, transcendental functions. Before they were superseded by electronic calculators in the 1970s, slide rules were an important type of portable calculating instrument. Slide rule design A slide rule consists of a body and a slider that can be slid along within the body, and both of these have numerical scales inscribed on them. On duplex rules the body and/or the slider have scales on the back as well as the front. The slider's scales may be visible from the back, or the slider may need to be slid right out and replaced facing the other way round. A cursor (also called a runner or glass) containing one (or more) hairlines may be slid along the whole rule so that corresponding readings, front and back, can be taken from the various scales on the body and slider. History In about 1620, Edmund Gunter introduced what is now known as Gunter's line as one element of the Gunter's sector he invented for mariners. The line, inscribed on wood, was a single logarithmic scale going from 1 to 100. It had no sliding parts, but by using a pair of dividers it was possible to multiply and divide numbers. The form with a single logarithmic scale eventually developed into such instruments as Fuller's cylindrical slide rule. In about 1622, but not published until 1632, William Oughtred invented linear and circular slide rules which had two logarithmic scales that slid beside each other to perform calculations. In 1654 the linear design was developed into a wooden body within which a slider could be fitted and adjusted. Scales Simple slide rules will have a C and D scale for multiplication and division, most likely an A and B for squares and square roots, and possibly CI and K for reciprocals and cubes. In the early days of slide rules few scales were provided and no labelling was necessary. However, gradually the number of scales tended to increase. Amédée Mannheim introduced the A, B, C and D labels in 1859 and, after that, manufacturers began to adopt a somewhat standardised, though idiosyncratic, system of labels so the various scales could be quickly identified. Advanced slide rules have many scales, and they are often designed with particular types of user in mind, for example electrical engineers or surveyors. There are rarely scales for addition and subtraction, but a workaround is possible. The rule illustrated is an Aristo 0972 HyperLog, which has 31 scales. (Note: the Aristo 0972 HyperLog was being manufactured in 1973, with scales as follows. Front: LL00, LL01, LL02, LL03, DF (on the slider CF, CIF, L, CI, C), D, LL3, LL2, LL1 and LL00. Back: H2, Sh2, Th, K, A (on the slider B, T, ST, S, P, C), D, DI, Ch, Sh1, H1.
Its gauge marks include ρ.) The scales in the table below are those appropriate for general mathematical use rather than for specific professions. Notes about table Some scales have high values at the left and low values on the right. These are marked as "decrease" in the table above. On slide rules these are often inscribed in red rather than black, or they may have arrows pointing left along the scale. See the P and DI scales in the detail image. In slide rule terminology, "folded" means a scale that starts and finishes at values offset from a power of 10. Often folded scales start at π but may be extended lengthways to, say, 3.0 and 35.0. Folded scales with the code subscripted with "M" start and finish at log10 e to simplify conversion between base-10 and natural logarithms. When subscripted "/M", they fold at ln(10). For mathematical reasons some scales either stop short of or extend beyond the D = 1 and 10 points. For example, arctanh(x) approaches ∞ (infinity) as x approaches 1, so the scale stops short. In slide rule terminology "log-log" means the scale is logarithmic applied over an inherently logarithmic scale. Slide rule annotation generally ignores powers of 10. However, for some scales, such as log-log, decimal points are relevant and are likely to be marked. Gauge marks Gauge marks are often added to the scales, either marking important constants (e.g. π at 3.14159) or useful conversion coefficients (e.g. the mark at 180×60×60/π, used to find the sine and tangent of small angles). A cursor may have subsidiary hairlines beside the main one. For example, when one is over kilowatts the other indicates horsepower. See the marks on the A and B scales and on the C scale in the detail image. The Aristo 0972 has multiple cursor hairlines on its reverse side, as shown in the image above. Notes References Citations Works cited Further reading Analog computers Historical scientific instruments Mechanical calculators Logarithms Logarithmic scales of measurement
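How the C and D scales turn multiplication into the addition of lengths can be shown with a short Python sketch (the scale length and function names are illustrative): positions along the scale are proportional to log10 of the graduation, so sliding one scale along the other adds logarithms.

```python
import math

SCALE_LENGTH = 250.0   # mm; a typical 25 cm rule

def pos(x):
    """Distance of graduation x from the left index on a C or D scale.
    One decade spans the full scale, so position is proportional to log10."""
    return SCALE_LENGTH * math.log10(x)

def multiply(a, b):
    """Slide-rule multiplication: set the C-scale index over a on D,
    read D under b on C -- i.e. add the two logarithmic lengths."""
    total = pos(a) + pos(b)
    if total > SCALE_LENGTH:             # result runs off the right end:
        total -= SCALE_LENGTH            # use the right-hand index instead
    return 10 ** (total / SCALE_LENGTH)  # mantissa; powers of 10 are mental

print(multiply(2.0, 3.0))   # 6.0
print(multiply(4.0, 7.0))   # 2.8 -> user restores the decimal point: 28
```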
Slide rule scale
[ "Physics", "Mathematics" ]
1,209
[ "Logarithms", "Physical quantities", "Quantity", "E (mathematical constant)", "Logarithmic scales of measurement" ]
68,027,732
https://en.wikipedia.org/wiki/Ocean%20dynamical%20thermostat
Ocean dynamical thermostat is a physical mechanism through which changes in the mean radiative forcing influence the gradients of sea surface temperatures in the Pacific Ocean and the strength of the Walker circulation. Increased radiative forcing (warming) is more effective in the western Pacific than in the eastern, where the upwelling of cold water masses damps the temperature change. This increases the east-west temperature gradient and strengthens the Walker circulation. Decreased radiative forcing (cooling) has the opposite effect. The process has been invoked to explain variations in the Pacific Ocean temperature gradients that correlate to insolation and climate variations. It may also be responsible for the hypothesized correlation between El Niño events and volcanic eruptions, and for changes in the temperature gradients that occurred during the 20th century. Whether the ocean dynamical thermostat controls the response of the Pacific Ocean to anthropogenic global warming is unclear, as there are competing processes at play; potentially, it could drive a La Niña-like climate tendency during initial warming before it is overridden by other processes. Background The equatorial Pacific is a key region of Earth in terms of its relative influence on the worldwide atmospheric circulation. A characteristic east-west temperature gradient is coupled to an atmospheric circulation, the Walker circulation, and further controlled by atmospheric and oceanic dynamics. The western Pacific features the so-called "warm pool", where the warmest sea surface temperatures (SSTs) on Earth are found. In the eastern Pacific, conversely, an area called the "cold tongue" is always colder than the warm pool even though they lie at the same latitude, as cold water is upwelled there. The temperature gradient between the two in turn induces an atmospheric circulation, the Walker circulation, which responds strongly to the SST gradient. One important component of the climate is the El Niño-Southern Oscillation (ENSO), a mode of climate variability. During its positive/El Niño phase, waters in the central and eastern Pacific are warmer than normal, while during its cold/La Niña phase they are colder than normal. Coupled to these SST changes, the atmospheric pressure difference between the eastern and western Pacific changes. ENSO and Walker circulation variations have worldwide effects on weather, including natural disasters such as bushfires, droughts, floods and tropical cyclone activity. The atmospheric circulation modulates the heat uptake by the ocean, the strength and position of the Intertropical Convergence Zone (ITCZ), tropical precipitation and the strength of the Indian monsoon. Original hypothesis by Clement et al. (1996) and Sun and Liu's (1996) precedent Already in May 1996, Sun and Liu published a hypothesis that coupled interactions between ocean winds, the ocean surface and ocean currents can limit water temperatures in the western Pacific. As part of that study, they found that increased equilibrium temperatures drive an increased temperature gradient between the eastern and western Pacific. The ocean dynamical thermostat mechanism was described in a dedicated publication by Clement et al. 1996, in a coupled ocean-atmosphere model of the equatorial ocean. In the western Pacific, SSTs are governed only by stored heat and heat fluxes, while in the eastern Pacific horizontal and vertical advection also play a role.
Thus an imposed source of heating primarily warms the western Pacific, inducing stronger easterly winds that facilitate upwelling in the eastern Pacific and cool its temperature - a pattern opposite that expected from the heating. Cold water upwelled along the equator then spreads away from it, reducing the total warming of the basin. The temperature gradient between the western and eastern Pacific thus increases, strengthening the trade winds and further increasing upwelling; this eventually results in a climate state resembling La Niña. The mechanism is seasonal as upwelling is least effective in boreal spring and most effective in boreal autumn; thus it is mainly operative in autumn. Due to the vertical temperature structure, ENSO variability becomes more regular during cooling by the thermostat mechanism, but is damped during warming. The model of Clement et al. 1996 only considers temperature anomalies and does not account for the entire energy budget. After some time, warming would spread to the source regions of the upwelled water and in the thermocline, eventually damping the thermostat. The principal flaw in the model is that it assumes that the temperature of the upwelled water does not change over time. Later research Later studies have verified the ocean dynamical thermostat mechanism for a number of climate models with different structures of warming and also the occurrence of the opposite response - a decline in the SST gradient - in response to climate cooling. In fully coupled models a tendency of the atmospheric circulation to intensify with decreasing insolation sometimes negates the thermostat response to decreased solar activity. Liu, Lu and Xie 2015 proposed that an ocean dynamical thermostat can also operate in the Indian Ocean, and the concept has been extended to cover the Indo-Pacific as a whole rather than just the equatorial Pacific. Water flows from the western Pacific into the Indian Ocean through straits between Australia and Asia, a phenomenon known as the Indonesian Throughflow. Rodgers et al. 1999 postulated that stronger trade winds associated with the ocean dynamical thermostat may increase the sea level difference between the Indian and Pacific oceans, increasing the throughflow and cooling the Pacific further. An et al. 2022 postulated a similar effect in the Indian Ocean could force changes to the Indian Ocean Dipole after carbon dioxide removal. Role in climate variability The ocean dynamical thermostat has been used to explain: The observation that during Marine isotope stage 3, cooling in Greenland is associated with El Niño-like climate change in the Pacific. The decline in ENSO variability during periods with high solar variability. The transition to a cold Interdecadal Pacific Oscillation at the end of the 20th century. Volcanic and solar influences The ocean dynamical thermostat mechanism has been invoked to link volcanic eruptions to ENSO changes. Volcanic eruptions can cool the Earth by injecting aerosols and sulfur dioxide into the stratosphere, which reflect incoming solar radiation. It has been suggested that in paleoclimate records volcanic eruptions are often followed by El Niño events, but it is questionable whether this applies to known historical eruptions and results from climate modelling are equivocal. 
In some climate models an ocean dynamical thermostat process causes the onset of El Niño events after volcanic eruptions; in others, additional atmospheric processes override the effect of the ocean dynamical thermostat on Pacific SST gradients. The ocean dynamical thermostat process may explain variations in SSTs in the eastern Pacific that correlate to insolation changes such as the Dalton Minimum and to the solar cycle. During the early and middle Holocene, when autumn and summer insolation was increased, but also during the Medieval Climate Anomaly between 900 and 1300 AD, SSTs off Baja California in the eastern Pacific were colder than usual. Southwestern North America underwent severe megadroughts during this time, which could also relate to a La Niña-like tendency in Pacific SSTs. Conversely, SSTs increased during periods of low insolation and during the Little Ice Age. This region lies within the California Current, which is influenced by the eastern Pacific waters that control the temperature of upwelled water. This was further corroborated by analyses with additional foraminifera species. Increased productivity in the ocean waters off Peru during the Medieval Climate Anomaly and the Roman Warm Period between 50 and 400 AD, when the worldwide climate was warmer, may occur through a thermostat-driven shallowing of the thermocline and increased upwelling of nutrient-rich waters. Additional mechanisms connecting the equatorial Pacific climate to insolation changes have been proposed, however. Role in recent climate change Changes in equatorial Pacific SSTs caused by anthropogenic global warming are an important problem in climate forecasts, as they influence local and global climate patterns. The ocean dynamical thermostat mechanism is expected to reduce the anthropogenic warming of the eastern Pacific relative to the western Pacific, thus strengthening the SST gradient and the Walker circulation. This is opposed by a weakening of the Walker circulation and the more effective evaporative cooling of the western Pacific under global warming. This compensation between different effects makes it difficult to estimate the eventual outcome for the Walker circulation and SST gradient. In CMIP5 models the thermostat is usually not the dominating effect. The ocean dynamical thermostat has been invoked to explain contradictory changes in the Pacific Ocean in the 20th century. Specifically, there appears to be a simultaneous increase of the SST gradient but also a weakening of the Walker circulation, especially during boreal summer. All these observations are uncertain, owing to the particular choices of metrics used to describe SST gradients and Walker circulation strength, as well as measurement issues and biases. However, the ocean dynamical thermostat mechanism could explain why the SST gradient has increased during global warming and also why the Walker circulation becomes stronger in autumn and winter, as these are the seasons when upwelling is strongest. On the other hand, warming in the Atlantic Ocean and more generally changes in between-ocean temperature gradients may play a role. Projected future changes Climate models usually depict an El Niño-like change, that is, a decrease in the SST gradient. In numerous models, there is a time-dependent pattern with an initial increase in the SST gradient ("fast response") followed by a weakening of the gradient ("slow response"), especially but not only in the case of abrupt increases of greenhouse gas concentrations.
This may reflect a decreasing strength of the ocean dynamical thermostat with increasing warming and the warming of the upwelled water, which occurs with a delay of a few decades after the surface warming and is known as the "oceanic tunnel". On the other hand, climate models might underestimate the strength of the thermostat effect. According to An and Im 2014, in an oceanic dynamical model a doubling of carbon dioxide concentrations initially cools the eastern Pacific cold tongue, but a further increase in carbon dioxide concentrations eventually causes the cooling to stop and the cold tongue to shrink. Their model does not consider changes in the thermocline temperature, which would tend to occur after over a decade of global warming. According to Luo et al. 2017, the ocean dynamical thermostat is eventually overwhelmed, first by a weakening of the trade winds and increased ocean stratification, which decrease the supply of cold water to the upwelling zones, and second by the arrival of warmer subtropical waters there. In their model, the transition takes about a decade. According to Heede, Fedorov and Burls 2020, the greater climate warming outside the tropics than inside them eventually causes the water arriving at the upwelling regions to warm and the oceanic currents that transport it to weaken. This negates the thermostat effect after about two decades in the case of an abrupt increase of greenhouse gas concentrations, and after about half a century to one century when greenhouse gas concentrations increase more slowly. With further warming of the subsurface ocean, the strength of the ocean dynamical thermostat is expected to decline, because the decreasing stratification means that momentum is less concentrated in the surface layer and thus upwelling decreases. According to Heede and Fedorov 2021, in some climate models the thermostat mechanism initially prevails over other mechanisms and causes a cooling of the subtropical and central Pacific. Eventually most models converge to an equatorial warming pattern. Zhou et al. 2022 found that in carbon dioxide removal scenarios, the thermostat amplifies precipitation changes. Zheng et al. 2024 attributed changes in SST seasonality under global warming to the thermostat effect. Other contexts The term "ocean dynamical thermostat" has also been used in slightly different contexts: The interaction between a weakening Walker circulation and the Equatorial Undercurrent. Specifically, weaker easterly winds in the Pacific reduce the braking of the Undercurrent, thus accelerating it. This process dominates over the decrease in the eastward counterflow of the Undercurrent. Thus, a weaker Walker circulation can increase the flow of the Undercurrent and thus upwelling in the eastern Pacific, cooling it. Coupled general circulation models often do not depict this response of the Undercurrent and SST gradients correctly; the former may be the cause of the widespread underestimate of the SST gradients in these models. Stronger winds drive evaporative cooling of tropical SST. According to Heede, Fedorov and Burls 2020, in response to abrupt increases in greenhouse gas concentrations, weak mean climatological winds allow the Indian Ocean to heat up more than the Pacific Ocean. This tends to induce stronger easterly winds over the Pacific, which further dampen the warming in the Pacific Ocean.
Unlike the ocean dynamical thermostat, however, this cooling effect is concentrated in the central-eastern Pacific, while westerly winds induced by warming over South America cause the eastern Pacific to warm. Notes References Sources External links Tropical meteorology Effects of climate change El Niño-Southern Oscillation events Physical oceanography
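The basic thermostat feedback described under "Original hypothesis", uniform forcing warming the west more than the upwelling-damped east, can be illustrated with a toy two-box model in Python (all names and coefficients are illustrative; like Clement et al.'s model, it holds the temperature of the upwelled water fixed, which is the limitation noted above):

```python
def thermostat_response(forcing, years=200, dt=0.1):
    """Toy two-box Pacific: uniform radiative forcing warms both boxes,
    but upwelling damps the east, so the west-east gradient grows.

    All coefficients are illustrative, not fitted to observations."""
    lam, upwell = 1.0, 2.0          # radiative damping, upwelling damping
    Tw = Te = 0.0                   # west/east SST anomalies [K]
    for _ in range(int(years / dt)):
        Tw += dt * (forcing - lam * Tw)
        Te += dt * (forcing - lam * Te - upwell * Te)  # extra damping in east
    return Tw, Te, Tw - Te          # gradient strengthens under warming

print(thermostat_response(forcing=1.0))   # ~ (1.0, 0.33, 0.67)
```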
Ocean dynamical thermostat
[ "Physics" ]
2,726
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
68,030,130
https://en.wikipedia.org/wiki/Interference%20freedom
In computer science, interference freedom is a technique for proving partial correctness of concurrent programs with shared variables. Hoare logic had been introduced earlier to prove correctness of sequential programs. In her PhD thesis (and papers arising from it) under advisor David Gries, Susan Owicki extended this work to apply to concurrent programs. Concurrent programming had been in use since the mid 1960s for coding operating systems as sets of concurrent processes (see, in particular, Dijkstra), but there was no formal mechanism for proving correctness. Reasoning about interleaved execution sequences of the individual processes was difficult, was error prone, and didn't scale up. Interference freedom applies to proofs instead of execution sequences; one shows that execution of one process cannot interfere with the correctness proof of another process. A range of intricate concurrent programs have been proved correct using interference freedom, and interference freedom provides the basis for much of the ensuing work on developing concurrent programs with shared variables and proving them correct. The Owicki-Gries paper An axiomatic proof technique for parallel programs I received the 1977 ACM Award for best paper in programming languages and systems. Note. Lamport presents a similar idea. He writes, "After writing the initial version of this paper, we learned of the recent work of Owicki." His paper has not received as much attention as Owicki-Gries, perhaps because it used flow charts instead of the text of programming constructs like the if statement and while loop. Lamport was generalizing Floyd's method while Owicki-Gries was generalizing Hoare's method. Essentially all later work in this area uses text and not flow charts. Another difference is mentioned below in the section on Auxiliary variables. Dijkstra's Principle of non-interference Edsger W. Dijkstra introduced the principle of non-interference in EWD 117, "Programming Considered as a Human Activity", written about 1965. This principle states that: The correctness of the whole can be established by taking into account only the exterior specifications (abbreviated specs throughout) of the parts, and not their interior construction. Dijkstra outlined the general steps in using this principle: Give a complete spec of each individual part. Check that the total problem is solved when program parts meeting their specs are available. Construct the individual parts to satisfy their specs, but independent of one another and the context in which they will be used. He gave several examples of this principle outside of programming. But its use in programming is a main concern. For example, a programmer using a method (subroutine, function, etc.) should rely only on its spec to determine what it does and how to call it, and never on its implementation. Program specs are written in Hoare logic, introduced by Sir Tony Hoare, as exemplified in the specs of processes S1 and S2: {P1} S1 {Q1} and {P2} S2 {Q2}. Meaning: if execution of Si in a state in which precondition Pi is true terminates, then upon termination, postcondition Qi is true. Now consider concurrent programming with shared variables. The specs of two (or more) processes S1 and S2 are given in terms of their pre- and post-conditions, and we assume that implementations of S1 and S2 are given that satisfy their specs.
But when executing their implementations in parallel, since they share variables, a race condition can occur; one process changes a shared variable to a value that is not anticipated in the proof of the other process, so the other process does not work as intended. Thus, Dijkstra's Principle of non-interference is violated. In her PhD thesis of 1975 in Computer Science, Cornell University, written under advisor David Gries, Susan Owicki developed the notion of interference freedom. If processes S1 and S2 satisfy interference freedom, then their parallel execution will work as planned. Dijkstra called this work the first significant step toward applying Hoare logic to concurrent processes. To simplify discussions, we restrict attention to only two concurrent processes, although Owicki-Gries allows more. Interference freedom in terms of proof outlines Owicki-Gries introduced the proof outline for a Hoare triple {P} S {Q}. It contains all details needed for a proof of correctness of {P} S {Q} using the axioms and inference rules of Hoare logic. (This work uses assignment statements, if and while statements, and the await statement described below.) Hoare alluded to proof outlines in his early work; for interference freedom, the notion had to be formalized. A proof outline for {P} S {Q} begins with precondition P and ends with postcondition Q. Two assertions within braces { and } appearing next to each other indicate that the first must imply the second. Example: a proof outline for {P} S {Q}, where S is the sequence S1; S2, looks like {P} {P1} S1 {Q1} {P2} S2 {Q2} {Q}, so the implications P ⇒ P1, Q1 ⇒ P2 and Q2 ⇒ Q must hold. (In this example, S1 and S2 are basic statements, like an assignment statement, skip, or an await statement.) Each statement T in the proof outline is preceded by a precondition pre(T) and followed by a postcondition post(T), and {pre(T)} T {post(T)} must be provable using some axiom or inference rule of Hoare logic. Thus, the proof outline contains all the information necessary to prove that {P} S {Q} is correct. Now consider two processes S1 and S2 executing in parallel, and their specs: {P1} S1 {Q1} and {P2} S2 {Q2}. Proving that they work suitably in parallel will require restricting them as follows. Each expression E in S1 or S2 may refer to at most one variable y that can be changed by the other process while E is being evaluated, and may refer to y at most once. A similar restriction holds for assignment statements x := E. With this convention, the only indivisible action need be the memory reference. For example, suppose process S1 references variable y while S2 changes y. The value S1 receives for y must be the value before or after S2 changes y, and not some spurious in-between value. Definition of Interference-free The important innovation of Owicki-Gries was to define what it means for a statement not to interfere with the proof of {P} S {Q}. If execution of a statement T cannot falsify any assertion given in the proof outline of {P} S {Q}, then that proof still holds even in the face of concurrent execution of S and T. Definition. Statement T with precondition pre(T) does not interfere with the proof of {P} S {Q} if two conditions hold: (1) {Q ∧ pre(T)} T {Q} (2) Let S′ be any statement within S but not within an await statement (see later section). Then {pre(S′) ∧ pre(T)} T {pre(S′)}. Read the last Hoare triple like this: if the state is such that both T and S′ can be executed, then execution of T is not going to falsify pre(S′). Definition. Proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free if the following holds. Let T be an await or assignment statement (that does not appear in an await) of process S1. Then T does not interfere with the proof of {P2} S2 {Q2}. Similarly for T of process S2 and {P1} S1 {Q1}. Statements cobegin and await Two statements were introduced to deal with concurrency. Execution of the statement cobegin S1 // S2 coend executes S1 and S2 in parallel.
Statements cobegin and await
Two statements were introduced to deal with concurrency. Execution of the statement cobegin S1 // S2 coend executes S1 and S2 in parallel; it terminates when both S1 and S2 have terminated. Execution of the statement await B then S is delayed until condition B is true; then statement S is executed as an indivisible action, and evaluation of B is part of that indivisible action. If two processes are waiting for the same condition B, when it becomes true, one of them continues waiting while the other proceeds. The await statement cannot be implemented efficiently and is not proposed to be inserted into the programming language. Rather, it provides a means of representing several standard primitives such as semaphores: first express the semaphore operations as await statements, then apply the techniques described here. Inference rules for await and cobegin are as follows. For await: if {P ∧ B} S {Q} is provable, then so is {P} await B then S {Q}. For cobegin: if the proof outlines for {P1} S1 {Q1} and {P2} S2 {Q2} are interference-free, then {P1 ∧ P2} cobegin S1 // S2 coend {Q1 ∧ Q2} holds.
Auxiliary variables
An auxiliary variable does not occur in the program but is introduced in the proof of correctness to make reasoning simpler, or even possible at all. Auxiliary variables are used only in assignments to auxiliary variables, so their introduction neither alters the program for any input nor affects the values of program variables. Typically, they are used either as program counters or to record histories of a computation.
Definition. Let AV be a set of variables that appear in a program S only in assignments x := e, where x is in AV. Then AV is an auxiliary variable set for S. Since the variables of an auxiliary variable set are used only in assignments to variables in the set, deleting all assignments to them does not change the program's correctness, and we have the inference rule of AV elimination: if {P} S {Q} is provable, AV is an auxiliary variable set for S, the variables in AV do not occur in P or Q, and S' is obtained from S by deleting all assignments to the variables in AV, then {P} S' {Q} holds. Instead of using auxiliary variables, one can introduce a program counter into the proof system, but that adds complexity to the proof system.
Note: Apt discusses the Owicki-Gries logic in the context of recursive assertions, that is, effectively computable assertions. He proves that all the assertions in proof outlines can be recursive, but that this is no longer the case if auxiliary variables are used only as program counters and not to record histories of computation. Lamport, in his similar work, uses assertions about token positions instead of auxiliary variables, where a token on an edge of a flow chart is akin to a program counter. There is no notion of a history variable. This indicates that the Owicki-Gries and Lamport approaches are not equivalent when restricted to recursive assertions.
Deadlock and termination
Owicki-Gries deals mainly with partial correctness: {P} S {Q} means that if S, executed in a state in which P is true, terminates, then Q is true of the state upon termination. However, Owicki-Gries also gives some practical techniques that use information obtained from a partial correctness proof to derive other correctness properties, including freedom from deadlock, program termination, and mutual exclusion. A program is in deadlock if all processes that have not terminated are executing await statements and none can proceed because their conditions are false. Owicki-Gries provides conditions under which deadlock cannot occur. Owicki-Gries also presents an inference rule for total correctness of the while loop. It uses a bound function that decreases with each iteration and is positive as long as the loop condition is true. Apt et al. show that this new inference rule does not satisfy interference freedom: the fact that the bound function is positive as long as the loop condition is true was not included in an interference test. They show two ways to rectify this mistake.
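In standard notation, a total-correctness while rule of the kind described uses an integer bound function t over the program variables and a fresh logical variable z (this display is a standard-textbook reconstruction, not a quotation of the Owicki-Gries rule):

```latex
% While rule for total correctness: P is the loop invariant, B the
% guard, t the bound function, z a fresh logical variable.
\frac{\{P \land B \land t = z\}\ S\ \{P \land t < z\}
      \qquad P \land B \;\Rightarrow\; t > 0}
     {\{P\}\ \mathbf{while}\ B\ \mathbf{do}\ S\ \mathbf{od}\ \{P \land \lnot B\}}
```

In these terms, the flaw identified by Apt et al. is, roughly, that assertions about t were not themselves subjected to the interference test, even though a concurrent process can falsify them.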
A simple example
Consider the statement cobegin S1 // S2 coend, where S1 is the indivisible statement await true then x := x + 2 and S2 is the indivisible statement await true then x := x + 1, executed in a state in which x = 0; it should terminate with x = 3. A proof outline for it:
{x = 0} cobegin {x = 0 ∨ x = 1} S1 {x = 2 ∨ x = 3} // {x = 0 ∨ x = 2} S2 {x = 1 ∨ x = 3} coend {x = 3}
Proving that S2 does not interfere with the proof of S1 requires proving two Hoare triples:
(1) {(x = 2 ∨ x = 3) ∧ (x = 0 ∨ x = 2)} S2 {x = 2 ∨ x = 3}
(2) {(x = 0 ∨ x = 1) ∧ (x = 0 ∨ x = 2)} S2 {x = 0 ∨ x = 1}
The precondition of (1) reduces to x = 2 and the precondition of (2) reduces to x = 0. From this, it is easy to see that these Hoare triples hold. Two similar Hoare triples are required to show that S1 does not interfere with the proof of S2. Suppose S1 is changed from the await statement to the assignment x := x + 2. Then the proof outline does not satisfy the requirements, because the assignment contains two occurrences of shared variable x. Indeed, the value of x after execution of the cobegin statement could be 2 or 3. Suppose instead that S2 is changed to the statement await true then x := x + 2, so it is the same as S1. After execution of the cobegin statement, x should be 4. To prove this, because the two assignments are the same, two auxiliary variables are needed: one to indicate whether S1 has been executed; the other, whether S2 has been executed. We leave the change in the proof outline to the reader.
Examples of formally proved concurrent programs
A. Findpos. Write a program that finds the first positive element of an array (if there is one). One process checks all array elements at even positions of the array and terminates when it finds a positive value or when none is found. Similarly, the other process checks array elements at odd positions of the array. Thus, this example deals with while loops. It also has no await statements. This example comes from Barry K. Rosen. The solution in Owicki-Gries, complete with program, proof outline, and discussion of interference freedom, takes less than two pages, and interference freedom is quite easy to check, since there is only one shared variable. In contrast, Rosen's article uses Findpos as the single, running example in his 24-page paper.
B. Bounded buffer consumer/producer problem. A producer process generates values and puts them into a bounded buffer of fixed size; a consumer process removes them. They proceed at variable rates. The producer must wait if the buffer is full; the consumer must wait if the buffer is empty. In Owicki-Gries, a solution in a general environment is shown; it is then embedded in a program that copies one array into another. This example exhibits a principle for reducing interference checks to a minimum: place as much as possible in an assertion that is invariantly true everywhere in both processes. In this case, the assertion is the definition of the bounded buffer together with bounds on the variables that indicate how many values have been added to and removed from the buffer. Besides the buffer itself, two shared variables record the number of values added to the buffer and the number removed from it.
C. Implementing semaphores. In his article on the THE multiprogramming system, Dijkstra introduces the semaphore sem as a synchronization primitive: sem is an integer variable that can be referenced in only two ways, shown below; each is an indivisible operation:
1. P(sem): Decrease sem by 1. If now sem < 0, suspend the process and put it on a list of suspended processes associated with sem.
2. V(sem): Increase sem by 1. If now sem ≤ 0, remove one of the processes from the list of suspended processes associated with sem, so that its dynamic progress is again permissible.
P(sem) and V(sem) can be implemented with await statements, for example as await sem > 0 then sem := sem - 1 and the indivisible assignment sem := sem + 1. In the version with suspended processes, an array records which processes are waiting because they have been suspended; initially, no process is marked as suspended. One could change the implementation to always waken the longest-suspended process.
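The await statement is a specification device, but its effect can be imitated in a real programming language. The sketch below (an illustration in Python, not the article's implementation; the class and method names are chosen here) uses a condition variable to obtain the "delay until the condition holds, then act indivisibly" behaviour of await for Dijkstra's P and V:

```python
import threading

class Semaphore:
    """Dijkstra-style semaphore built from an await-like primitive.
    P: await sem > 0 then sem := sem - 1.   V: <sem := sem + 1>.
    Illustrative sketch; in practice Python's own
    threading.Semaphore would be used instead."""

    def __init__(self, initial=1):
        self._sem = initial
        self._cond = threading.Condition()  # lock makes bodies indivisible

    def P(self):
        with self._cond:                    # begin indivisible action
            while self._sem <= 0:           # "await sem > 0 ..."
                self._cond.wait()           # suspend this process
            self._sem -= 1                  # "... then sem := sem - 1"

    def V(self):
        with self._cond:                    # "<sem := sem + 1>"
            self._sem += 1
            self._cond.notify()             # waken one suspended process

if __name__ == "__main__":
    mutex = Semaphore(1)
    counter = 0

    def worker():
        global counter
        for _ in range(10_000):
            mutex.P()
            counter += 1                    # critical section
            mutex.V()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                          # always 40000
```

Here notify() wakes an arbitrary waiting thread, mirroring the remark above that one could instead always waken the longest-suspended process.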
D. On-the-fly garbage collection. At the 1975 Summer School Marktoberdorf, Dijkstra discussed an on-the-fly garbage collector as an exercise in understanding parallelism. The data structure used in a conventional implementation of LISP is a directed graph in which each node has at most two outgoing edges, either of which may be missing: an outgoing left edge and an outgoing right edge. All nodes of the graph must be reachable from a known root. Changing a node may result in unreachable nodes, which can no longer be used and are called garbage. An on-the-fly garbage collector has two processes: the program itself and a garbage collector, whose task is to identify garbage nodes and put them on a free list so that they can be used again. Gries felt that interference freedom could be used to prove the on-the-fly garbage collector correct. With help from Dijkstra and Hoare, he was able to give a presentation at the end of the Summer School, which resulted in an article in CACM.
E. Verification of a readers/writers solution with semaphores. Courtois et al. use semaphores to give two versions of the readers/writers problem, without proof. Write operations block both reads and writes, but read operations can occur in parallel. Owicki provides a proof.
F. Peterson's algorithm, a solution to the 2-process mutual exclusion problem, was published by Peterson in a 2-page article. Schneider and Andrews provide a correctness proof.
Dependencies on interference freedom
The image below, by Ilya Sergey, depicts the flow of ideas that have been implemented in logics that deal with concurrency. At the root is interference freedom. The file contains references. Below, we summarize the major advances.
Rely-Guarantee. 1981. Interference freedom is not compositional. Cliff Jones recovers compositionality by abstracting interference into two new predicates in a spec: a rely-condition records what interference a thread must be able to tolerate, and a guarantee-condition sets an upper bound on the interference that the thread can inflict on its sibling threads. Xu et al. observe that Rely-Guarantee is a reformulation of interference freedom; revealing the connection between these two methods, they say, offers a deep understanding of the verification of shared-variable programs.
CSL. 2004. Separation logic supports local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component. Concurrent separation logic (CSL) was originally proposed by Peter O'Hearn. We quote: "the Owicki-Gries method involves explicit checking of non-interference between program components, while our system rules out interference in an implicit way, by the nature of the way that proofs are constructed."
Deriving concurrent programs. 2005-2007. Feijen and van Gasteren show how to use Owicki-Gries to design concurrent programs, but the lack of a theory of progress means that designs are driven only by safety requirements. Dongol, Goldson, Mooij, and Hayes have extended this work to include a "logic of progress" based on Chandy and Misra's language Unity, molded to fit a sequential programming model. Dongol and Goldson describe their logic of progress. Goldson and Dongol show how this logic is used to improve the process of designing programs, using Dekker's algorithm for two processes as an example. Dongol and Mooij present more techniques for deriving programs, using Peterson's mutual exclusion algorithm as one example.
Dongol and Mooij show how to reduce the calculational overhead in formal proofs and derivations and derive Dekker's algorithm again, leading to some new and simpler variants of the algorithm. Mooij studies calculational rules for Unity's leads-to relation. Finally, Dongol and Hayes provide a theoretical basis for, and prove soundness of, the progress logic.
OGRA. 2015. Lahav and Vafeiadis strengthen the interference-freedom check to produce (we quote from the abstract) "OGRA, a program logic that is sound for reasoning about programs in the release-acquire fragment of the C11 memory model." They provide several examples of its use, including an implementation of the RCU synchronization primitives.
Quantum programming. 2018. Ying et al. extend interference freedom to quantum programming. Difficulties they face include intertwined nondeterminism: nondeterminism involving quantum measurements and nondeterminism introduced by parallelism occurring at the same time. The authors formally verify Bravyi-Gosset-König's parallel quantum algorithm solving a linear algebra problem, giving, they say, for the first time an unconditional proof of a computational quantum advantage.
POG. 2020. Raad et al. present POG (Persistent Owicki-Gries), the first program logic for reasoning about non-volatile memory technologies, specifically the Intel-x86.
Texts that discuss interference freedom
On a Method of Multiprogramming, 1999. Feijen and van Gasteren base the formal development of concurrent programs entirely on the idea of interference freedom.
On Concurrent Programming, 1997. Schneider uses interference freedom as the main tool in developing and proving concurrent programs. A connection to temporal logic is given, so arbitrary safety and liveness properties can be proven. Control predicates obviate the need for auxiliary variables for reasoning about program counters.
Verification of Sequential and Concurrent Programs, 1991, 2009. This first text to cover verification of structured concurrent programs, by Apt et al., has gone through several editions over several decades.
Concurrency Verification: Introduction to Compositional and Non-Compositional Methods, 2001. De Roever et al. provide a systematic and comprehensive introduction to compositional and non-compositional proof methods for the state-based verification of concurrent programs.
Implementations of interference freedom
1999: Nipkow and Nieto present the first formalization of interference freedom and its compositional version, the rely-guarantee method, in a theorem prover: Isabelle/HOL.
2005: Ábrahám's PhD thesis provides a way to prove multithreaded Java programs correct in three steps: (1) annotate the program to produce a proof outline, (2) use the tool Verger to automatically create verification conditions, and (3) use the theorem prover PVS to prove the verification conditions interactively.
2017: Denissen reports on an implementation of Owicki-Gries in the "verification ready" programming language Dafny. Denissen remarks on the ease of use of Dafny and his extension to it, making it extremely suitable for teaching students about interference freedom. Its simplicity and intuitiveness outweigh the drawback of being non-compositional. He lists some twenty institutions that teach interference freedom.
2017: Amani et al. combine the approaches of Hoare-Parallel, a formalisation of Owicki-Gries in Isabelle/HOL for a simple while-language, and SIMPL, a generic language embedded in Isabelle/HOL, to allow formal reasoning on C programs.
2022: Dalvandi et al. introduce the first deductive verification environment in Isabelle/HOL for C11-like weak-memory programs, building on Nipkow and Nieto's encoding of Owicki-Gries in the Isabelle theorem prover.
2022: The Civl verifier for concurrent programs is built on top of Boogie, a verifier for sequential programs; its webpage gives instructions for installing it. Kragl et al. describe how interference freedom is achieved in Civl using their new specification idiom, yield invariants. One can also use specs in the rely-guarantee style. Civl offers a combination of linear typing and logic that allows economical and local reasoning about disjointness (in the manner of separation logic). Civl is the first system that offers refinement reasoning on structured concurrent programs.
2022: Esen and Rümmer developed TriCera, an automated open-source verification tool for C programs. It is based on the concept of constrained Horn clauses, and it handles programs operating on the heap using a theory of heaps. A web interface to try it online is available. To handle concurrency, TriCera uses a variant of the Owicki-Gries proof rules, with explicit variables added to represent time and clocks.
References
Formal methods Program logic Logic in computer science
Interference freedom
[ "Mathematics", "Engineering" ]
4,556
[ "Software engineering", "Mathematical logic", "Logic in computer science", "Formal methods" ]
68,031,158
https://en.wikipedia.org/wiki/%CE%91-Aminoadipic%20acid
α-Aminoadipic acid is one of the metabolic precursors in the biosynthesis of lysine through the α-aminoadipate pathway. Its conjugate base is α-aminoadipate, which is the prevalent form at physiological pH. α-Aminoadipic acid has a stereogenic center and can appear as two enantiomers, L-α-aminoadipate and D-α-aminoadipate. The L-enantiomer appears during lysine biosynthesis and degradation, whereas the D-enantiomer is a part of certain antibiotics.
Metabolism
Lysine degradation
Through saccharopine and allysine, lysine is converted to α-aminoadipate, which is then degraded all the way to acetoacetate. Allysine is oxidized by aminoadipate-semialdehyde dehydrogenase: allysine + NAD(P)+ ↔ α-aminoadipate + NAD(P)H + H+. α-Aminoadipate is then transaminated with α-ketoglutarate to give α-ketoadipate and glutamate by the action of 2-aminoadipate transaminase: α-aminoadipate + α-ketoglutarate ↔ α-ketoadipate + glutamate.
Lysine biosynthesis
α-Aminoadipate appears during the biosynthesis of lysine in several yeast species, fungi, and certain protists. During this pathway, which is named after α-aminoadipate, the same steps are repeated in the opposite order as in the degradation reactions: α-ketoadipate is transaminated to α-aminoadipate, which is then reduced to allysine; allysine couples with glutamate to give saccharopine, which is then cleaved to give lysine.
Importance
A 2013 study identified α-aminoadipate as a novel predictor of the development of diabetes and suggested that it is a potential modulator of glucose homeostasis in humans. D-α-Aminoadipic acid is a part of the antibiotic cephalosporin C.
References
Amino acids Dicarboxylic acids
Α-Aminoadipic acid
[ "Chemistry" ]
468
[ "Amino acids", "Biomolecules by chemical classification" ]
68,032,094
https://en.wikipedia.org/wiki/Rhodothermus
Rhodothermus is a genus of bacteria.
Phylogeny
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI).
See also
List of bacterial orders List of bacteria genera
References
Bacteria genera Rhodothermota
Rhodothermus
[ "Biology" ]
70
[ "Bacteria stubs", "Bacteria" ]
68,032,108
https://en.wikipedia.org/wiki/Rhodothermaceae
The Rhodothermaceae are a family of bacteria. See also List of bacterial orders List of bacteria genera References Bacteria families Rhodothermota
Rhodothermaceae
[ "Biology" ]
35
[ "Bacteria stubs", "Bacteria" ]
72,414,976
https://en.wikipedia.org/wiki/Biodesulfurization
Biodesulfurization is the process of removing sulfur from crude oil through the use of microorganisms or their enzymes.
Background
Crude oil contains sulfur, which is its most abundant element after carbon and hydrogen. Depending on its source, the amount of sulfur present in crude oil can range from 0.05 to 10%. Accordingly, the oil can be classified as sweet or sour if the sulfur concentration is below or above 0.5%, respectively. The combustion of crude oil releases sulfur oxides (SOx) to the atmosphere, which are harmful to public health and contribute to serious environmental effects such as air pollution and acid rain. In addition, the sulfur content of crude oil is a major problem for refineries, as it promotes the corrosion of equipment and the poisoning of noble metal catalysts. The levels of sulfur in any oil field are too high for the fossil fuels derived from it (such as gasoline, diesel, or jet fuel) to be used in combustion engines without pre-treatment to remove organosulfur compounds. Reducing the concentration of sulfur in crude oil is necessary to mitigate one of the leading sources of the harmful health and environmental effects caused by its combustion. In this sense, the European Union has taken steps to decrease the sulfur content of diesel below 10 ppm, while the US has made efforts to restrict the sulfur content of diesel and gasoline to a maximum of 15 ppm. The reduction of sulfur compounds in oil fuels can be achieved by a process named desulfurization. Methods used for desulfurization include, among others, hydrodesulfurization, oxidative desulfurization, extractive desulfurization, and extraction by ionic liquids. Despite their efficiency at reducing sulfur content, the conventional desulfurization methods are still responsible for a significant amount of the CO2 emissions associated with the crude oil refining process, releasing up to 9000 metric tons per year. Furthermore, these processes usually require large amounts of energy and entail massive costs for the industries that employ them. A greener and complementary alternative to the conventional desulfurization methods is biodesulfurization.
Biodesulfurization implementation and pathways
It has been observed that there are sulfur-dependent bacteria that make use of the sulfur in sulfur-containing compounds in their life cycles (either in their growth or in their metabolic processes), producing molecules with lower or no sulfur content. In particular, heteroaromatic compounds, namely thiophenes and their derivatives, were observed to constitute important substrates for bacteria. Biodesulfurization is an attractive alternative for sulfur removal, particularly in the crude oil fractions where sulfur heterocycles are abundant. To date, pilot attempts at industrial application have resorted to the use of whole bacterial systems, because biodesulfurization involves a sequential cascade of reactions by different enzymes and a large number of cofactors participating in redox reactions either with the sulfur atom or with molecular oxygen. However, these attempts lacked the scalability desired for an industrial setup due to overall low enzyme efficiency, product feedback inhibition mechanisms and toxicity, or inadequate conditions for long-term bacterial growth. While cell-free recombinant enzymes would be desirable, known implementations are still well below the efficiency met for whole-cell ones.
There are two main pathways through which bacteria remove sulfur from sulfur-containing compounds: ring-destructive pathways and sulfur-specific pathways. The ring-destructive pathways consist of the selective cleavage of carbon-carbon bonds with release of small organic sulfides soluble in the surrounding aqueous environment, whereas the sulfur-specific pathways rely on successive sulfur redox reactions to release sulfur as sulfide or sulfite anion byproducts. The latter have thus been considered a very promising route to produce sulfur-free compounds with a high calorific content, in particular in the desulfurization of the sulfur heterocycles abundant in sour crude oil fractions. The most studied ring-destructive pathway is the Kodama pathway, initially identified in Pseudomonas abikonensis and Pseudomonas jijani. The pathway comprises four main steps: i) the successive hydroxylation, by NADH-dependent dioxygenases, of the carbons in one of the aromatic rings, followed by ii) the dehydrogenation of the ring by a NAD+ cofactor, iii) a further oxygenation promoting ring cleavage and formation of a pyruvyl branch, and concluding with iv) the hydrolysis of the pyruvyl substituent to release pyruvate and the remainder of the substrate. Since the end products of the pathway are still water-soluble sulfur compounds, the pathway has often been disregarded as an appealing option for industrial applications, in particular by the oil industry. The most well-studied sulfur-specific pathway is the 4S pathway, first discovered in the bacterium Rhodococcus erythropolis (strain IGTS8), which was observed to remove sulfur from dibenzothiophenes and derivatives in three steps: i) a double oxidation of the sulfur (to sulfoxide and sulfone) performed by a flavin-dependent monooxygenase, followed by ii) a carbon-sulfur bond cleavage by a second flavin-dependent monooxygenase, and iii) a desulfination reaction through which 2-hydroxybiphenyl and sulfite are produced. In total, four enzymes are required for the process: three are encoded in the dszABC genes (the flavin-dependent monooxygenases DszA and DszC, and the desulfinase DszB), and a fourth, chromosomally encoded enzyme, DszD, is responsible for the regeneration and supply of the flavin mononucleotide cofactor required by DszA and DszC. It has also been observed that some anaerobic bacteria can use an alternative sulfur-specific pathway to produce hydrogen sulfide instead. However, to date, the desulfurization of fractions such as bitumen, vacuum gas oil, or deasphalted oil has not been observed.
The aerobic 4S pathway
The 4S pathway is a sulfur-specific metabolic pathway of oxidative desulfurization that converts dibenzothiophene (DBT) into 2-hydroxybiphenyl and sulfite. It uses a total of four NADH molecules (three required by DszD to generate FMNH2 and a fourth to regenerate the FMN-oxide byproduct of DszA) and three molecules of oxygen, producing NAD+ and water as byproducts. DszC is the first enzyme to intervene in the pathway, acting in two sequential steps: it catalyzes the double oxidation of DBT, first into DBT-sulfoxide and then into DBT-sulfone. It requires FMNH2 as cofactor, which is supplied by DszD, and molecular oxygen. For that reason, the efficiency of this enzyme depends on the activity of DszD and on environmental oxygenation.
The reaction catalyzed by DszC involves three phases: 1) molecular oxygen activation, leading to the formation of a hydroperoxyflavin intermediate (C4aOOH); 2) oxidation of DBT to DBTO; and 3) dehydration of FMN. DszC is the second least efficient enzyme in the pathway, with a particularly low kcat of 1.6 ± 0.3 min−1. It is also severely affected by feedback inhibition, caused mostly by HPBS and 2-HBP, the products of DszA and DszB respectively. For that reason, it has been targeted for optimization through enzyme engineering. DszA is responsible for the third step of the pathway. It catalyzes the first carbon-sulfur bond cleavage, converting DBT-sulfone into 2-hydroxybiphenyl-2-sulfinate. Like DszC, DszA also requires FMNH2 provided by DszD and molecular oxygen for its catalytic cycle. The reaction rate of DszA is about seven times faster than that of DszC; however, like DszC, it suffers feedback inhibition by the final product of the pathway, 2-HBP. Lastly, the desulfinase (DszB) cleaves the remaining carbon-sulfur bond in 2-hydroxybiphenyl-2-sulfinate, converting it into the sulfur-free 2-hydroxybiphenyl in a two-step mechanism. In the first, rate-limiting, step, 2-hydroxybiphenyl-2-sulfinate is protonated by Cys27 at its electrophilic carbon, leading to the cleavage of the carbon-sulfur bond and displacement of SO2. In the second step, a water molecule is deprotonated by Cys27, followed by attack of the hydroxide on SO2, forming HSO3−. DszB is the least efficient enzyme in the pathway, making it an appealing target for enhancement through protein engineering. The NADH-FMN oxidoreductase (DszD) regenerates the FMNH2 cofactor needed for the reactions catalyzed by DszC and DszA through the oxidation of NADH to NAD+ in a two-step mechanism. The first step is a hydride transfer from the nicotinamide moiety of NADH to the central nitrogen of the isoalloxazine moiety of the oxidized FMN, forming FMNH. In the second step, a water molecule protonates the N1 atom of FMNH, giving FMNH2.
Engineering of 4S pathway enzymes
The desulfurization rate of the wild-type 4S pathway enzymes is low compared to the rate that needs to be achieved for a viable application in the industrial sector: an increase of 500-fold in the overall rate of the pathway is required for an efficient application of this biodesulfurization method. Directed evolution, rational design, or a combination of both are among the strategies that have been applied to tackle the lack of catalytic efficiency and stability of the 4S enzymes. The best improvement of the 4S pathway to date was obtained by a directed evolution approach in which Rhodococcus strains were transformed with a plasmid encoding a modified dsz operon (which encodes DszA, DszB, and DszC). After 40 subculturing events in a medium in which DBT was the sole sulfur source, the modified Rhodococcus strains presented a 35-fold improvement. The strong feedback inhibition of DszC was also tackled by a combined directed evolution and rational design approach to desensitize DszC to the 4S pathway product, HBP. The bacterial strain expressing the DszC A101K mutant showed higher activity relative to the wild-type strain. Additionally, docking of HBP to the protein revealed that HBP forms a π-interaction with Trp327, thus inhibiting DszC. The A101K/W327C (AKWC) double mutant proved to be desensitized to low HBP concentrations, and the bacterial strain expressing AKWC DszC was 14-fold more efficient than the wild-type strain.
DszB, the final enzyme in the pathway, is also one of the slowest, with a turnover rate of 1.7 ± 0.2 min−1, making it a major bottleneck of the 4S pathway. A computational rational design approach determined a set of mutations that could accelerate the charge transfer occurring in the active site during the DszB reaction mechanism, reducing the activation energy of the reaction and potentially increasing its turnover rate. DszB's catalytic efficiency and thermostability were also addressed in an experimental mutagenesis approach: the Y63F/Q65H double mutant showed an increase in the enzyme's thermostability without loss of catalytic efficiency. DszD has also been targeted for rate-enhancing mutations at the Thr62 residue. Mutation of Thr62 to Asn or Ala increased its activity 5- and 7-fold, respectively. A computational study demonstrated that substitutions at position 62 of the DszD sequence have a major impact on the activation energy of the hydride transfer reaction from NADH to the flavin. Mutating Thr62 to an Asp residue gives the lowest activation energy of all possible mutants at this position, due to the stabilizing effect induced by the negative charge of Asp.
See also
Desulfurization
References
Desulfurization Biochemical engineering
Biodesulfurization
[ "Chemistry", "Engineering", "Biology" ]
2,615
[ "Biological engineering", "Desulfurization", "Separation processes", "Chemical engineering", "Biochemical engineering", "Biochemistry" ]
72,418,328
https://en.wikipedia.org/wiki/Stable%20phosphorus%20radicals
Stable and persistent phosphorus radicals are phosphorus-centred radicals that are isolable and can exist for at least short periods of time. Radicals consisting of main group elements are often very reactive and undergo uncontrollable reactions, notably dimerization and polymerization. The common strategies for stabilising these phosphorus radicals usually include the delocalisation of the unpaired electron over a pi system or nearby electronegative atoms, and kinetic stabilisation with bulky ligands. Stable and persistent phosphorus radicals can be classified into three categories: neutral, cationic, and anionic radicals. Each of these classes involves various sub-classes, with neutral phosphorus radicals being the most extensively studied. Phosphorus exists as one isotope, 31P (I = 1/2), with large hyperfine couplings relative to other spin-active nuclei, making phosphorus radicals particularly attractive for spin-labelling experiments.
Neutral phosphorus radicals
Neutral phosphorus radicals span a large range of conformations with varying spin densities at the phosphorus. Generally, they can be categorised as mono- and bi/di-radicals (also referred to as bisradicals and biradicaloids) for species containing one or two radical phosphorus centres, respectively.
Monoradicals
In 1966, Muller et al. published the first electron paramagnetic resonance (EPR/ESR) spectra displaying evidence for the existence of phosphorus-containing radicals. Since then, a variety of phosphorus monoradicals have been synthesised and isolated. Common ones include phosphinyl (R2P•), phosphonyl (R2PO•), and phosphoranyl (R4P•) radicals.
Synthesis
Synthetic methods for obtaining neutral phosphorus monoradicals include photolytic reduction of trivalent phosphorus chlorides, P-P homolytic cleavage, single-electron oxidation of phosphines, and cleavage of P-S or P-Se bonds. The first persistent two-coordinate phosphorus-centred radicals, [(Me3Si)2N]2P• and [(Me3Si)2CH]2P•, were reported in 1976 by Lappert and co-workers. They are prepared by photolysis of the corresponding three-coordinate phosphorus chlorides in toluene in the presence of an electron-rich olefin. In 2000, the Power group found that this species can be synthesised from the dissolution, melting, or evaporation of the dimer. In 2001, Grützmacher et al. reported the first stable diphosphanyl radical, [Mes*MeP-PMes*]• (Mes = 1,3,5-trimethylbenzene), from the reduction of the phosphonium salt [Mes*MeP-PMes*]+(O3SCF3)− in an acetonitrile solution containing tetrakis(dimethylamino)ethylene (TDAE) at room temperature, yielding yellow crystals. The monomer is stable below -30 °C in the solid state for a few days. At room temperature the species decomposes in solution and in the solid state, with a half-life of 30 minutes at 3 x 10−2 M. The first structurally characterised phosphorus radical, [Me3SiNP(μ3-NtBu)3{μ3-Li(thf)}3X]• (X = Br, I), was synthesised by Armstrong et al. in 2004 by the oxidation of the starting material with bromine or iodine in a mixture of toluene and THF at 297 K. This produces blue crystals that can be characterised by X-ray crystallography. The steric bulk of the alkyl-imido groups was identified as playing a major role in the stabilisation of these radicals. In 2006, Ito et al. prepared an air-tolerant and thermally stable 1,3-diphosphacyclobutenyl radical: the sterically bulky phosphaalkyne Mes*C≡P is treated with 0.5 equiv of t-BuLi in THF to form a 1,3-diphosphaalkyl anion, which is then oxidised with iodine solution to form a red product.
The species is a planar four-membered diphosphacyclobutane (C2P2) ring, with the Mes* groups at torsional angles to the C2P2 plane.
Metal-stabilised radicals
In 2007, Cummins et al. synthesised a phosphorus radical using nitridovanadium trisanilide metallo-ligands, similar in form to Lappert, Power and co-workers' "jack-in-the-box" diphosphines. It is made by the synthesis of the radical precursor ClP[NV{N(Np)Ar}3]2, followed by its one-electron reduction with Ti[N(tBu)Ar]3 or potassium graphite, to yield dark brown crystals in 77% yield. EPR data showed delocalisation of electron spin across the two 51V nuclei and the one 31P nucleus. This was consistent with computation, supporting the reported resonance structures. The delocalisation across the vanadium atoms was identified as the source of stabilisation for this species, owing to the ease with which transition metals undergo one-electron chemistry. Cummins and co-workers postulated that the p-character of the system could be tuned by changing the metal centres. Other metal-stabilised radicals have been reported by Scheer et al. and Schneider et al., using ligands containing tungsten and osmium, respectively.
Structure and properties
As previously mentioned, kinetic stabilisation through bulky ligands has been an effective strategy for producing persistent phosphorus radicals. Delocalisation of the unpaired electron also has a stabilising effect on phosphorus radical species; this conversely results in more delocalised spin densities and lower coupling constants relative to 31P-localised electron spin. For this reason, the spin localisation on the phosphorus atom varies widely for different phosphorus radical species. Cyclic radicals, like that of Ito et al., have delocalisation across the ring: in that case, X-ray, EPR spectroscopy, and ab initio calculations found that 80-90% of the spin was delocalised on the carbons in the C2P2 ring and the rest on the phosphorus atoms. Despite this, the aP coupling constant shows spectroscopic properties similar to organic radicals that contain conjugated P=C double bonds, justifying the resonance structure used for this species. The phosphinyl radicals synthesised by Lappert and co-workers were found to be stable at room temperature for periods of over 15 days, with no effect from short-term heating at 360 K. This stability was assigned to the steric bulk of the substituents and the absence of beta-hydrogen atoms. A structural study of this species conducted using X-ray crystallography, gas-phase electron diffraction, and ab initio molecular orbital calculations found that the source of this stability was not the bulkiness of the CH(SiMe3)2 ligands as such, but the release of strain energy during homolytic cleavage of the P-P bond of the dimer, which favours the existence of the radical. The dimer shows a syn,anti conformation, which allows for better packing but has excessive crowding at the trimethylsilyl groups, while the radical monomer displays a syn,syn conformation. Theoretical calculations showed that the process of cleaving the P-P bond (endothermic), relaxation to release steric strain, and rotation about the P-C bond to yield the syn,syn conformation of the monomer radical (exothermic by 67.5 kJ for each unit) is an overall exothermic process. The stability of this species can therefore be attributed to the release of strain energy through the reorganisation of the ligands as the dimer converts to the radical monomer.
This effect has been observed in other systems containing the CH(SiMe3)2 ligand and was dubbed the "jack-in-the-box" model. Other ligands with similar flexibility and ability to undergo conformational changes were identified as PnR2 (Pn = P, As, Sb) and ERR'2 (E = Si, Ge, Sn; R' = bulky ligand). In 2022, Streubel and co-workers investigated the electron density distribution across centres in metal-coordinated phosphanoxyl complexes. This study showed that tungsten-containing radical complexes have small amounts of spin density on the metal nuclei, while in the case of manganese and iron the spins are purely metal-centred.
Biradicals
Biradicals are molecules bearing two unpaired electrons. These radicals can interact ferromagnetically (triplet), antiferromagnetically (open-shell singlet), or not interact at all (two-doublet). Biradicaloids/diradicaloids are a class of biradicals with significant radical-centre interaction.
Synthesis
The first phosphorus biradical was reported in 2011 by T. Beweries and co-workers. The biradicaloid [P(μ-NR)]2 (R = Hyp, Ter) was synthesised by the reduction of cyclo-1,3-diphospha(III)-2,4-diazanes using [{Cp2TiCl}2] as the reducing agent. The bulky Ter (trimesitylphenyl) and Hyp (hypersilyl) substituents provide a large stabilising effect. This effect is more pronounced with Ter, where the biradical is stable under inert atmosphere in the solid state for long periods of time at temperatures up to 224 °C. Computational studies determined that the [P(μ-NTer)]2 biradical has an open-shell singlet ground state. Villinger et al. later synthesised a stable cyclopentane-1,3-diyl biradical by the insertion of CO into a P–N bond of the diphosphadiazanediyl. In 2017, D. Rottschäfer et al. reported an N-heterocyclic vinylidene-stabilised singlet biradicaloid phosphorus compound, [(IPr)CP]2 (IPr = 1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene), in which significant π-electron density is transferred to the C2P2 ring. The species was found to be diamagnetic, with temperature-independent NMR resonances, and so can be considered a non-Kekulé molecule.
Structure and properties
The species reported by Villinger et al. can react with a phosphaalkyne, forming a five-membered P2N2C heterocycle with a P-C bridge. It can also undergo halogenation and reaction with elemental sulfur.
Characterisation
Phosphorus radicals are commonly characterised by EPR/ESR to elucidate the spin localisation across the radical species. Higher coupling constants are indicative of higher localisation on phosphorus nuclei. Quantum chemical calculations on these systems are also used to support the experimental data. Before the characterisation by X-ray crystallography by Armstrong et al., the structure of the phosphorus-centred radical [(Me3Si)2CH]2P• had been determined by electron diffraction, and the diphosphanyl radical [Mes*MeP-PMes*]• had been stabilised through doping into crystals of Mes*MePPMeMes*. The radical synthesised by Armstrong et al. was found to exist as a distorted PN3Li3X cube in the solid state. They found that upon dissolution in THF this cubic structure is disrupted, leaving the species to form a solvent-separated ion pair.
Phosphorus radical cations
Synthesis
Phosphorus radical cations are often obtained from the one-electron oxidation of diphosphinidenes and phosphaalkenes.
In 2010, the Bertrand group found that carbene-stabilised diphosphinidenes can undergo one-electron oxidation in toluene with Ph3C+B(C6F5)4− at room temperature under inert atmosphere to produce radical cations (Dipp = 2,6-diisopropylphenyl). The Bertrand group reported the synthesis of [(cAAC)P2]•+, [(NHC)P2]•+, and [(NHC)P2]++. The EPR signal for [(cAAC)P2]•+ is a triplet of quintets, resulting from coupling with two P nuclei and a smaller coupling with two N nuclei. NBO analysis showed spin delocalisation across the two phosphorus atoms (0.27e each) and the nitrogen atoms (0.14e each). In contrast, the [(NHC)P2]•+ complex showed delocalisation mostly on phosphorus (0.33e and 0.44e), with little contribution from other elements. Other radical cations synthesised by the Bertrand group involved species with single phosphorus atoms. These included [(TMP)P(cAAC)]•+, where the spin is localised on phosphorus (67%), and [bis(carbene)-PN]•+, with spin density distributed over the phosphorus (0.40e), the central nitrogen atom (0.18e), and the N atom of the cAAC (0.19e). Treatment of this latter cation with KC8 returns it to its neutral analogue. In 2003, Geoffroy et al. synthesised Mes*P•-(C(NMe2)2)+ through a one-electron oxidation of a phosphaalkene with [Cp2Fe]PF6. A solution of Mes*P•-(C(NMe2)2)+ is stable under inert atmosphere for a few weeks in the solid state and for a few days in solution. Hyperfine couplings in the EPR spectrum show strong localisation of the spin on the phosphorus nucleus (0.75e in a p orbital). In 2015, the Wang group was able to isolate the crystal structure of this species with the use of the oxidant Ag[Al(ORF)4], which carries a weakly coordinating anion. The electron spin density, found by EPR, resides principally in the phosphorus 3p and 3s orbitals (68.2% and 2.46%, respectively). This was supported by DFT calculations, in which 80.9% of the spin density was found to be localised on the phosphorus atom. Weakly coordinating anions were also used to stabilise cyclic biradical cations synthesised by Schulz and colleagues, where the spin density was found to reside exclusively on the phosphorus atoms (0.46e each) in the case of [P(μ-NTer)2P]•+. In the case of [P(μ-NTer)2As]•+, the spin was found to reside mostly on the As nucleus (70.6% on As compared to 29.4% on P). Many other cyclic radical cations have been reported. It is difficult to form radical cations from diphosphenes due to the low-lying HOMO at the phosphorus centre. Ghadwal and co-workers were able to synthesise a diphosphene radical cation, [{(NHC)C(Ph)}P]2•+, using an NHC-derived divinyldiphosphene with a high-lying HOMO and a small HOMO-LUMO gap. The stability of the species was attributed to the delocalisation of the spin density across the CP2C unit. The spin density was found to be 11-14% on each P nucleus and 17-21% on each C nucleus.
Structure and properties
A unique source of stability for phosphorus radical cations is the electrostatic repulsion between radical cations, which prevents dimerisation. Weakly coordinating anions have been used to stabilise biradical cations.
Phosphorus radical anions
Synthesis
The most common method for accessing radical anions is through the use of reducing agents. In 2014, the Wang group reported the synthesis of a phosphorus-centred radical anion through the reduction of a phosphaalkene using either Li in DME or K in THF, yielding purple crystals. EPR data showed localisation of the spin in the 3p (51.09%) and 3s (1.62%) orbitals of phosphorus.
They later synthesised a diphosphorus-centred radical anion and the first diradical dianion from the reduction of the corresponding diphosphaalkene with KC8 in THF in the presence of 18-crown-6. In both cases the spin density resides principally on the phosphorus nuclei. Tan and co-workers used a charge-transfer approach to synthesise CoII and FeII complexes coordinated by phosphorus radical anions. Here, a diazafluorenylidene-substituted phosphaalkene is reacted with low-valent transition metal complexes to form phosphorus radical anions coordinated to the metal complexes. The species displays a quartet ground state, showing weak antiferromagnetic interaction of the phosphorus radical with the high-spin TMII ion. The spin density is mostly localised on the transition metal and phosphorus nuclei. The group further synthesised radical anion lanthanide complexes, which also showed antiferromagnetic interaction. The π-acid properties of boryl substituents were employed by Yamashita and co-workers to stabilise phosphorus radical anions. Here, the diazafluorenylidene-substituted phosphaalkene is reacted with [Cp*2Ln][BPh4] (Ln = Dy, Tb, and Gd), followed by reduction with KC8 in the absence or presence of 2,2,2-cryptand, yielding complexes with radical anion phosphaalkene fragments. EPR and DFT calculations indicate spin density mostly localised on the P nuclei (67.4%).
Further reading
Reviews Reactivity Potential applications
References
Chemistry Phosphorus compounds Free radicals
Stable phosphorus radicals
[ "Chemistry", "Biology" ]
3,800
[ "Senescence", "Free radicals", "Biomolecules" ]
72,420,474
https://en.wikipedia.org/wiki/Nano-ARPES
Nano Angle-Resolved Photoemission Spectroscopy (Nano-ARPES) is a variant of the experimental technique ARPES (Angle-Resolved Photoemission Spectroscopy). It can precisely determine the electronic band structure of materials in momentum space with submicron lateral resolution. Because of its demanding experimental setup, this technique is much less widespread than ARPES, which is widely used in condensed matter physics to determine experimentally the electronic properties of a broad range of crystalline materials. Nano-ARPES can access the electronic structure of well-ordered monocrystalline solids with high energy, momentum, and lateral resolution, even for nanometric or heterogeneous mesoscopic samples. Like ARPES, Nano-ARPES is based on Einstein's photoelectric effect, being a photon-in, electron-out spectroscopy, and it has become an essential tool in studying the electronic structure of nanomaterials, such as quantum and low-dimensional materials. Nano-ARPES determines experimentally the relationship between the binding energies and wave momenta of the electrons in the occupied electronic states of the bands with energies close to, and up to approximately 10-15 eV below, the Fermi level. These electrons are ejected from a solid when it is illuminated by monochromatic photons with sufficient energy to emit photoelectrons from the surface of the material. The photoelectrons are detected by an electron analyzer placed close to the sample's surface, in vacuum, to preserve the uncontaminated surface and to avoid collisions with particles that could modify the energy and trajectory of the photoelectrons on their way to the spectrometer. Since momentum is conserved in the photoemission process, the angular distribution of photoelectrons from a monocrystal, even one of nanometric size, directly reveals the momentum distribution of the initial electronic states in that crystal. Nano-ARPES results, as in the ARPES technique, are traditionally shown as energy-momentum dispersion relations along the high-symmetry directions of the irreducible Brillouin zone, displaying the band dispersions of the investigated materials. When the emitted photoelectrons are shown as constant-energy surfaces throughout large portions of the reciprocal space, Nano-ARPES can also precisely determine the Fermi surface of the investigated materials. Due to its unique ability to spatially map the electronic dispersion of the electrons in the samples, Nano-ARPES can also generate electronic images of nanomaterials with high binding-energy and momentum resolution. As Nano-ARPES is a scanning technique, it can use state-of-the-art ARPES spectrometers without requiring them to discriminate spatially the origin of the analysed photoelectrons. Consequently, Nano-ARPES instrumentation can profit from the most advanced spectrometers developed for ARPES setups, particularly the latest generation of electron spectrometers with bidimensional detection and high energy and momentum resolution.
Background
The comprehension of the electronic band structure of solids is applied in many fields of condensed matter physics, contributing to the microscopic understanding of many phenomenological trends and guiding the interpretation of experimental spectra in photoemission, optics, inelastic neutron scattering, and specific heat, among others, including the effect of spin polarisation.
Most modern theoretical methods for band electronic structure employ density functional theory to solve the full many-body Schrödinger equation for electrons in a solid. The consolidated experimental and theoretical approach to describing the electronic structure of solids allows the straightforward visualization of the difference between conductors, insulators, and semiconductors according to the presence of permitted and forbidden electronic states of particular energy and momentum, which can be calculated by quantum mechanics and measured using ARPES. The ARPES technique has the unique ability to determine the band structure directly. It thus helps in understanding the degree and type of electron interaction in solids, corroborating or contesting band structure results calculated using different theoretical approaches. However, the technique's lateral resolution, and its ability to handle submicrometric or heterogeneous samples, are rather limited. That is because the electrons measured in ARPES are all the electrons ejected by the photo-absorption process prompted by the incident photons. If the illuminated area of the sample is large enough to cover inhomogeneous regions, the detected electrons are the sum of all the photoelectrons emitted by the different illuminated patches. If each area has a distinctive electronic band structure, the ARPES spectra will show the average of all of them, weighted according to the size of each patch present in the illuminated area. In fact, many complex materials are constituted by disoriented small monocrystals or composed of several nanometric monocrystals. Traditional ARPES can only provide their average electronic structure if the patch size is smaller than the spot size of the ARPES setup, typically 200 µm. This limitation is also present in samples with micrometric and submicrometric zones of distinctive chemical composition due to undesired side chemical reactions, for example those originating from the contamination or oxidation of the pristine sample. Hence, since the spot size of the monochromatic photon beam is typically over 200 µm across for conventional ARPES, only homogeneous samples of this size or bigger can be studied. Consequently, sub-micrometric lateral resolution must be added to ARPES to determine experimentally the electronic structure of small crystalline materials and of large samples with heterogeneities. Nano-ARPES implements this lateral discrimination by focusing the incident photon beam down to the nanometric scale. Similarly to ARPES, the electronic band structure of nanomaterials can be directly measured using Nano-ARPES by measuring the ejected electrons' kinetic energy, velocity, and absolute momentum. Focusing the photon beam to a spot size down to the nanometric scale has been routinely achieved in a few well-known X-ray-based methods, such as scanning transmission X-ray microscopy (STXM) and scanning photoemission microscopy (SPEM). However, these techniques are much less demanding, because they typically use incident photon energies higher than 150 eV and require non-angle-resolved measurements, recording only integrated signals proportional to the X-ray absorption coefficient and core-level photoelectrons, respectively. In both cases, the performance of the Fresnel zone plates (FZPs) is the essential factor determining the lateral resolution, varying from micro- to nanometric.
Nowadays, several companies in the market provide FZPs with a resolution better than 30 nm, which has facilitated the construction and operation of several X-ray-based microscopes, such as STXM and SPEM instruments, at different synchrotron radiation facilities like Elettra, ALS, CLS, and MAX-lab, among others. The Nano-ARPES technique, however, requires much lower incident photon energy (typically from 6 eV to 100 eV) to detect the photoelectrons emitted by the electronic states below and close to the Fermi level, whose cross-section increases as the incident photon energy decreases. An alternative k-space imaging approach is based on energy-filtered photoemission electron microscopes (PEEMs), in which the lateral resolution is achieved using an electron optical column instead of focusing the incident photon beam. This full-field k-space version of PEEM is available commercially. However, for this commercially available full-field PEEM version with k-space imaging, achieving high energy and momentum resolution is challenging.
Instrumentation
Typically, high energy and momentum resolution ARPES experiments are performed at synchrotrons, which can provide bright and tunable high-energy photon sources to record the electronic band structure of ordered materials directly. That yields sharp and precise E vs k dispersions and constant-energy surfaces, including those corresponding to the Fermi surface of the studied materials. Conventional ARPES systems consist of a monochromatic light source delivering a narrow beam of photons and a sample holder connected to a manipulator used to position the sample, angularly and translationally, with respect to the electron spectrometer (detector) and the incident light beam focus. The equipment is contained within an ultra-high vacuum (UHV) environment, which protects the sample from undesired contamination and prevents scattering of the emitted electrons. After being dispersed along two perpendicular directions, for kinetic energy and emission angle, the electrons are directed to the detector and counted to provide ARPES spectra: slices of the band structure along one momentum direction. The main difference of the typical Nano-ARPES setup from conventional ARPES apparatus is that the soft X-ray beam is focused to a submicrometric spot using Fresnel zone plate (FZP) lenses. The specimens can be mounted on a high-precision manipulator that ensures nanoscale sample positioning in the x, y, and z directions, where the polar angle (Θ) and the azimuthal angle (Ψ) can also be automatically scanned. This basic instrumentation allows two operating modes: the Nano-ARPES punctual mode (operating mode 1), in which the nano-spot maps the band structure of nanometric crystalline solids to study quasiparticle dynamics in highly correlated and non-correlated materials, as in conventional ARPES, and the Nano-ARPES imaging mode (operating mode 2), which measures the spatial distribution in real space of photoelectrons within a selected range of binding energy and momentum values. State-of-the-art Nano-ARPES microscopes are equipped with continuous interferometric control of the position of the samples relative to the FZPs, which avoids thermal and mechanical drifts. That is required to prevent undesirable distortions of the recorded Nano-ARPES images (operating mode 2) and to ensure the precision and reproducibility of E vs. k dispersion curves along specific directions of the reciprocal space.
Energy constant surfaces in the reciprocal space & Fermi surface mapping
In Nano-ARPES setups, the analyzers used are the hemispherical electron energy analyzers typically installed in high energy and angular resolution conventional ARPES apparatus. They use a slit to prevent the mixing of momentum and energy channels and, consequently, can only take angular maps in one direction. To record maps over energy and two-dimensional momentum space, as in conventional ARPES, either the sample needs to be rotated, or the collected photoelectron beam must be discriminated inside the spectrometer with electrostatic lenses, keeping the sample fixed. The energy-angle-angle maps are converted to binding energy vs (k//x, k//y) maps. These images display constant-energy surfaces as a function of the k//x and k//y wave vectors of the reciprocal space. The most remarkable constant-energy surface is the Fermi surface map, obtained by detecting those photoelectrons with binding energy right at the Fermi level.
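The angle-to-momentum conversion just described follows from free-electron final-state kinematics: k// = (1/ħ)·sqrt(2m·Ekin)·sin θ ≈ 0.5123 Å⁻¹ · sqrt(Ekin/eV) · sin θ. A minimal sketch of such a conversion is given below (illustrative Python; the function name, the chosen angle convention, and the omission of work-function and analyzer-geometry corrections are simplifications made here):

```python
import numpy as np

HBAR_FACTOR = 0.5123  # sqrt(2 m_e)/hbar, in Angstrom^-1 eV^-1/2

def angles_to_k(e_kin, theta_deg, psi_deg):
    """Convert kinetic energy (eV) and emission angles (degrees)
    into in-plane momentum components k_x, k_y (Angstrom^-1).
    Real beamline software also corrects for the analyzer geometry
    and the work function; this sketch omits those corrections."""
    theta = np.radians(theta_deg)   # polar angle
    psi = np.radians(psi_deg)       # azimuthal angle
    k = HBAR_FACTOR * np.sqrt(e_kin)
    return k * np.sin(theta), k * np.cos(theta) * np.sin(psi)

# Example: a photoelectron at E_kin = 30 eV emitted 10 degrees off normal
kx, ky = angles_to_k(30.0, 10.0, 0.0)
print(f"k_x = {kx:.3f} 1/Angstrom, k_y = {ky:.3f} 1/Angstrom")
```

Applying this mapping to every pixel of an energy-angle-angle data cube yields the binding energy vs (k//x, k//y) maps described above.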
Applications
The Nano-ARPES technique is an essential tool for resolving the electronic band structure of mesoscopic or heterogeneous materials in diverse condensed matter fields: quantum materials, high-temperature superconductors, topological materials, semiconductors, metals, insulators with a not-too-large band gap, and a wide variety of low-dimensional materials and heterostructures with effects of confinement, different stackings, and hybridization. Also, electronic structure changes associated with all types of phase transitions, charge density waves, band hybridization, phase separation, charge transfer, and in-operando devices can be revealed by combining nanometric lateral resolution with high energy and momentum resolution.
References
Laboratory techniques in condensed matter physics Emission spectroscopy Electron spectroscopy
Nano-ARPES
[ "Physics", "Chemistry", "Materials_science" ]
2,377
[ "Spectrum (physical sciences)", "Electron spectroscopy", "Emission spectroscopy", "Laboratory techniques in condensed matter physics", "Condensed matter physics", "Spectroscopy" ]
75,208,716
https://en.wikipedia.org/wiki/Circular%20consensus%20sequencing
Circular consensus sequencing (CCS) is a DNA sequencing method that is used in conjunction with single-molecule real-time sequencing to yield highly accurate long-read sequencing datasets with read lengths averaging 15–25 kb and median accuracy greater than 99.9%. These long reads, which are created by forming a consensus sequence from multiple passes over a single DNA molecule, can be used to improve results for complex applications such as single nucleotide and structural variant detection, genome assembly, assembly of difficult polyploid or highly repetitive genomes, and assembly of metagenomes. CCS allows resolution of large or complex genomes of any species, such as the California redwood genome, nine times the size of the human genome, including detection of variants ranging from single nucleotide variants (SNVs) to structural variants, with high precision. CCS also enables separation of the different copies of each chromosome (e.g., maternal and paternal for diploid organisms), known as haplotypes. CCS reads offer the benefits of high accuracy equivalent to short-read sequencing data, but with the length necessary for complex genome assemblies and phasing of variants across the genome. Technology In this method, circularized fragments of DNA in solution float across the surface of a nanofluidic chip called a SMRT (Single Molecule, Real-Time) Cell. The surface of the chip is covered with millions of wells called zero-mode waveguides (ZMWs), each a few nanometers wide. To prepare a sample for CCS/HiFi sequencing, primers and DNA polymerase are added to SMRTbell libraries. The circularized DNA becomes trapped in the ZMW, nucleotides are added, and the DNA polymerase enzyme begins to copy the molecule base by base. As this happens, a tiny amount of light is released and read by a detector, which helps the sequencer's computer determine the order of bases present in the sample. The circularized DNA is sequenced in repeated passes to ensure accuracy, hence the name "circular" consensus sequencing, and the primers and adapters are then removed bioinformatically to deliver a highly accurate consensus DNA read. In CCS, the genomic DNA is prepared without amplification, so that individual base modifications such as methylation can be detected during sequencing. This allows for the capture of both sequence and valuable methylation information in a single experiment. History This sequencing method was first described by Travers, K.J., et al. in Nucleic Acids Research in 2010. It was later commercialized by Pacific Biosciences in 2018 and made available on the Sequel II and Revio long-read sequencing instruments. CCS technology has subsequently been used to power numerous studies in several fields, including: human, telomere-to-telomere, whole-genome assembly and pangenome research; pediatric rare disease genomic analysis; understanding DNA methylation in rare disease cohorts; assembly of whole genomes of non-human vertebrates; assembly of whole genomes of other agriculturally significant species; analysis of cancer genomes; and metagenomics and microbial research, among others. Recognizing the importance of this technology in future genomic exploration and discovery, the editors of Nature Methods named long-read sequencing technology its Method of the Year for 2022.
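The accuracy gain from repeated passes comes from simple per-position consensus: random errors in individual passes are voted out. Below is a minimal sketch of the idea, with a toy error model and plain majority voting rather than the production HiFi algorithm (which uses a more sophisticated probabilistic model); all names and parameters are illustrative:

```python
import random
from collections import Counter

random.seed(1)
BASES = "ACGT"

def noisy_pass(template, error_rate=0.1):
    """Simulate one sequencing pass with random substitution errors."""
    return [b if random.random() > error_rate else random.choice(BASES)
            for b in template]

def consensus(passes):
    """Per-position majority vote across all passes of one molecule."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*passes))

template = "".join(random.choice(BASES) for _ in range(1000))
reads = [noisy_pass(template) for _ in range(9)]  # ~9 trips around the circle

single = sum(a == b for a, b in zip(reads[0], template)) / len(template)
cons = sum(a == b for a, b in zip(consensus(reads), template)) / len(template)
print(f"single-pass accuracy: {single:.3f}, consensus accuracy: {cons:.3f}")
```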
Applications Human and conservation biology CCS can be useful to researchers seeking to perform de novo sequence assembly or to study haplotype-phased sequences from each chromosomal copy, regardless of how many chromosomes are present in the species. Many biodiversity-oriented consortia have leveraged this technology for their conservation biology studies, including the African Biogenome Project, California Conservation Genomics Project, Darwin Tree of Life, Desert Agriculture Initiative, Earth Biogenome Project, Global Ant Genomics Alliance, Human Pangenome, Telomere-to-Telomere Consortium, The 10,000 Fish Genomes Project, and Vertebrate Genomes Project. Human health Circular consensus sequencing is helping researchers identify and characterize rare or structural variants with high confidence to better identify the underlying genomics of a given phenotype, with numerous applications to human health including rare disease research, microbiology and infectious disease, cancer research, and other genetic disease research areas. Rare diseases Although they occur with low frequency in the human population, rare diseases as a collective are common, and most have a genetic cause, presenting unique diagnostic challenges. An estimated 50–80% of structural variants are tandem repeats. Because CCS provides a comprehensive view of variation in the human genome, producing complete, accurate, and phased assemblies for variant calling and for the identification of repeat expansions and medically relevant interruption sequences, it is enabling the identification of causative pathogenic variants and helping researchers discover novel disease-associated genes. Microbiology and infectious diseases Circular consensus sequencing can rapidly identify emerging pathogens and/or detect changing pathogen genomics as part of regional or global surveillance operations. Where other molecular technologies for public health surveillance may require re-validation or the development of new panels, the unbiased nature of circular consensus sequencing delivers comprehensive genetic information to further characterize global outbreaks, pandemics, and epidemics. Cancer research Comprehensive resolution of structural variants enables researchers to better study and detect somatic variants driving cancer. Because of their size (>50 bp), structural variants and tandem repeats account for much of the genomic variation between individuals. Long-read RNA sequencing can be useful in cancer research to uncover sources of alternative splicing and fusion events that power cancer growth. CCS also provides an advantage over other sequencing technologies in that it can provide phasing information for expressed mutations. References DNA sequencing Biotechnology
Circular consensus sequencing
[ "Chemistry", "Biology" ]
1,158
[ "nan", "Molecular biology techniques", "DNA sequencing", "Biotechnology" ]
75,210,974
https://en.wikipedia.org/wiki/Krupp%E2%80%93Renn%20process
The Krupp–Renn process was a direct reduction steelmaking process used from the 1930s to the 1970s. It used a rotary furnace and was one of the few technically and commercially successful direct reduction processes in the world, offering an alternative to blast furnaces, which depend on coke. The Krupp–Renn process consumed mainly hard coal and had the unusual characteristic of partially melting the charge. This makes it well suited to processing low-grade or difficult-to-smelt ores, whose waste material forms a protective layer that can be easily separated from the iron. It generates Luppen, nodules of pre-reduced iron ore, which can be easily melted down. The first industrial furnaces emerged in the 1930s, first in Nazi Germany and then in the Japanese Empire. During the 1950s, new facilities were constructed, notably in Czechoslovakia and West Germany. The process was discontinued in the early 1970s, with a few exceptions. It had low productivity, was intricate to master, and was relevant only to certain ores. At the beginning of the 21st century, Japan modernized the process to manufacture ferronickel, which is its sole surviving variant. History Setting up The principle of direct reduction of iron ore was tested in the late 19th century using high-temperature stirring of ore powder mixed with coal and a small amount of limestone to adjust the ore's acidity. Carl Wilhelm Siemens' direct reduction process, which was sporadically employed in the United States and United Kingdom in the 1880s, is particularly noteworthy. This process is based on a drum about 3 meters in diameter and of similar length, with a horizontal axis, into which gases preheated by two regenerators are blown. The metallurgical industry carried out much research on rotary tubular furnaces, inspired by similar equipment used in cement works. The Basset process, developed during the 1930s, is even capable of producing molten cast iron. In the 1920s, the German metallurgist Friedrich Johannsen, head of an industrial metallurgy department and professor at the Clausthal University of Technology, explored the metallurgical applications of this type of furnace. He filed a series of patents for removing volatile metals from steel raw materials. During the 1930s Johannsen initiated the development of direct-reduction iron production. The first installation underwent testing from 1931 to 1933 at the Gruson plant in Magdeburg. Research on the Krupp–Renn process continued until 1939 at the Krupp facility in Essen-Borbeck. The process, named after the Krupp company that created it and the German Rennofen, translating to "low furnace", displayed potential. As a result, Krupp procured patents overseas to safeguard the invention after 1932. Adoption In 1945 there were 38 furnaces worldwide, with a combined capacity of about 1 Mt/year. The process was favored in Germany due to the autarky policy of the Nazi regime, which prioritized the use of low-quality domestic iron ore. The transfer of technology between Nazi Germany and Imperial Japan allowed the Japanese Empire to benefit from this process. Furnaces were installed across the Co-Prosperity Sphere and operated by Japanese technicians. By the eve of the Pacific War, the process was being used in four steelworks in Japan. After World War II all installations in Germany, China, and North Korea were dismantled, with 29 furnaces sent to the USSR as war reparations. Only the Japanese and Czechoslovakian plants remained functional. In the 1950s Krupp rebuilt several large furnaces in Spain, Greece, and Germany.
The Czechoslovaks were the primary drivers, constructing 16 furnaces and increasing the efficiency of the process. The Great Soviet Encyclopedia reports that over 65 industrial furnaces, ranging from 60 to 110 meters in length and 3.6 to 4.6 meters in diameter, were constructed between 1930 and 1950. By 1960, 50 furnaces were producing 2 million tons per year in several countries. Disappearance The Soviet Union recovered 29 furnaces as war reparations, but failed to gain significant profit from them. According to sources, the Red Army's destructive techniques in dismantling German industrial plants proved inappropriate and wasted valuable resources. It was also challenging for the Soviets to reconstruct these factories within the Soviet Union. Travelers from Berlin to Moscow reported observing German machinery scattered, largely deteriorating, along every meter of track and shoulder, suffering from the harsh climatic conditions. The Russian iron and steel industry did not rely heavily on technological input from the West. Eventually, the Eastern Bloc maintained this marginal technology only to a limited extent in the recently sovietized European countries, where it was in time abandoned. Meanwhile, the large furnaces rebuilt in the 1950s in West Germany operated for approximately ten years before shutting down, owing to the low cost of scrap and imported ore. The process then vanished from West Germany and, at the same time, from the rest of Western Europe. In Japan, furnaces also progressed towards ever larger units. However, the dwindling of local ferruginous sand deposits, along with the low cost of scrap and imported ores, eventually resulted in the gradual discontinuation of the process. By 1972 most plants in Czechoslovakia, Japan, and West Germany had ceased operations, and the process was widely considered obsolete, no longer attracting the attention of industrialists. The process was nevertheless steadily improved by the Japanese, who developed it under various names for specialized products, including ferroalloys and the recycling of steelmaking by-products. At the start of the 21st century, the Krupp–Renn process is used exclusively for ferronickel production in Japan. Process General principles The Krupp–Renn process is a direct reduction process that uses a long tubular furnace similar to those found in cement production. The most recent units constructed have a diameter of approximately 4.5 meters and a length of 110 meters. The residence time of the product is determined by the slope and rotation speed of the rotary kiln, which is inclined at roughly 2.5 percent. Before use, the iron ore is crushed to a particle size of less than 6 mm. The iron ore is introduced into the upstream end of the furnace mixed with a small amount of fuel, typically hard coal. After 6 to 8 hours, it exits the furnace as pre-reduced iron ore at 1,000 °C. The amount of iron recovered ranges from 94% to 97.5% of the iron initially present in the ore. A burner located at the lower end of the furnace provides heat, making the furnace a counter-current reactor. The fuel comprises finely pulverized coal, which, upon high-temperature combustion, generates a reducing gas consisting primarily of CO. Once the furnace reaches its operating temperature, the ore-coal mixture can serve as the primary fuel source. The fumes exiting the upper end of the furnace reach temperatures of 850 to 900 °C and are cooled and purged of dust by water injection before discharge through the chimney.
The process is efficient for producing ferronickel because its constituent elements, iron and nickel, are chemically very similar. At 800 °C, carbon easily reduces iron and nickel oxides, while the other oxides of the gangue are not significantly reduced. Specifically, iron(II) oxide (wustite), the stable iron oxide at 800 °C, has a reducibility similar to that of nickel(II) oxide, making it impossible to reduce one without reducing the other. Process characteristics The rotary kiln's maximum temperature ranges between 1,230 and 1,260 °C, which significantly exceeds the 1,000 to 1,050 °C threshold for iron oxide reduction. The aim is to bring the ore gangue to a paste-like consistency. The reduced iron agglomerates into 3 to 8 mm metal nodules called Luppen. If the gangue is highly infusible, the temperature must be increased, up to 1,400 °C for a basic charge. It is crucial to control the hot viscosity of the gangue. Among rotary drum direct reduction processes, this one stands out for its use of high temperatures. Another distinctive attribute of the procedure is the injection of powdered coal at the furnace outlet. The process later evolved so that this coal supply could be stopped, the furnace running exclusively on the coal dust or coke dust introduced with the ore; in this situation, only combustion air is injected at the furnace outlet. Thermal efficiency is better in shaft furnaces such as blast furnaces than in rotary furnaces, because in the latter the air absorbs some of the heat of the Luppen. Moreover, the oxygen in the air partially re-oxidizes the product: even though the iron is completely reduced inside the furnace, the Luppen are altered by contact with air at the end of the furnace or after leaving it. The hot mass is discharged from the furnace and then rapidly cooled and crushed. The iron is separated from the slag by magnetic separation. Magnetically intermediate fines make up 5–15% of the charge. While partial melting of the charge increases the density of the products, it also requires significant energy consumption. Load behavior as it passes through the furnace The furnace comprises three distinct zones. First, the preheating zone heats the ore to 800 °C using the hot fumes within the furnace; ore reduction occurs only above 900–1,000 °C, while here the coal releases its most volatile constituents. Second, the reduction zone is situated in the middle of the furnace, where coal and iron oxides combine to produce carbon monoxide. The carbon monoxide released from the charge generates a gaseous layer that shields the charge from the oxidizing air circulating above it. This excess gas burns above the charge, heating the furnace walls, which then transfer the heat back to the charge through the rotary motion. The temperature rises to 800–1,200 °C, and the iron oxides are gradually converted into ferronickel or metallic iron. The metal produced takes the form of metallic sponge particles finely dispersed in the powdery gangue. Reduction is complete by the end of the furnace, where little CO is produced; the charge is therefore no longer protected from oxidation by the air blown in at the base of the furnace. As a result, a violent but shallow reoxidation of the iron occurs. Some of the oxidized iron is returned by the rotation to the core of the charge, where it is reduced again by the residual coal.
The remaining material mixes with the waste to create a thick slag that cannot blend with the metal produced. This strongly exothermic reaction melts the non-oxidized iron and nickel, which clump together to form the nodules named Luppen. Temperature control is critical and depends on the ore's physicochemical characteristics. Overly high temperatures or unsuitable granulometry lead to rings of sintered material that build up on the furnace walls. Typically, a ring of iron-poor slag forms at about two-thirds of the distance along the furnace, and a metal ring usually forms around ten meters from the outlet. These rings disturb the flow of materials and gas, diminishing the furnace's useful capacity and sometimes completely obstructing it. Ring formation has hindered attempts to revive the process, particularly in China, where in the early 21st century industrialists abandoned its adoption after recognizing how critical and challenging managing this parameter was. While slag melting consumes energy, it makes it possible to control the charge's behavior in the furnace. In addition, a minimum of 800 to 1,000 kg of slag per ton of iron is needed to prevent the Luppen from growing too big. The slag limits coal segregation, as coal is much less dense than ore and would otherwise float to the surface of the mixture. When hot, it forms a paste that guards the metal against oxidation; when cold, it vitrifies, which simplifies both Luppen processing and furnace cleaning during maintenance shutdowns. Performance with low-grade ores The Krupp–Renn process is suitable for producing pre-reduced iron ore from highly siliceous, acidic ores (CaO/SiO2 basicity index of 0.1 to 0.4), which begin generating a pasty slag at 1,200 °C. Additionally, owing to its acidity the slag becomes vitreous, so it can be separated from the iron by simple crushing. Furthermore, this process is also well suited to treating ores with high concentrations of titanium dioxide. Such ores cannot be used in blast furnaces, which must remove all their production in liquid form, because this oxide makes the slag especially infusible and viscous. For this reason, the preferred ores for this technique are those that would be uneconomical if they had to be corrected with basic additives, usually those with a low iron content (between 35 and 51%) and a gangue that needs to be neutralized. Integrated into a steelmaking complex, the Krupp–Renn process provides an alternative to sinter plants or beneficiation processes, effectively eliminating waste rock and undesired elements like zinc, lead, and tin. In a blast furnace, these elements undergo vaporization-condensation cycles that progressively saturate the furnace. With the Krupp–Renn process, by contrast, the high temperature of the fumes prevents condensation within the furnace, and the elements are retrieved by the dust-removal system. The process thus recovers by-products or extracts specific metals. The Luppen are subsequently remelted in the blast furnace, the cupola furnace, or the Martin-Siemens furnace, all well suited to melting a pre-reduced, iron-rich charge. The process has been effective in treating ores rich in nickel(II) oxide, vanadium, and other metals. Additionally, the process is applicable to the production of ferronickel. In this case, saprolitic ores with a high magnesium content are as infusible as highly acidic ores, which makes them similarly relevant to the process.
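The fuel consumptions quoted in the next section are stated both per ton of ore and per ton of iron; the two are related by a simple scaling through the ore grade (ignoring the 94–97.5% iron recovery mentioned above). A minimal sketch of that conversion, with an illustrative function name:

```python
def coal_per_ton_iron(coal_per_ton_ore_kg, iron_fraction):
    """Coal per ton of iron, given coal per ton of ore and the ore grade."""
    return coal_per_ton_ore_kg / iron_fraction

# The low end of the stated range, 240 kg of coal per ton of ore at 30% iron,
# reproduces the quoted figure of 800 kg of hard coal per ton of iron:
print(coal_per_ton_iron(240, 0.30))  # -> 800.0
# The high end, 300 kg/t at 30% iron, would give 1,000 kg instead:
print(coal_per_ton_iron(300, 0.30))  # -> 1000.0
```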
Direct reduction methods such as this one offer the flexibility of using any solid fuel: in this case, 240 to 300 kg of hard coal are needed to process one metric ton of iron ore containing 30 to 40% iron. Assuming a consumption of 240 kg per ton of ore grading 30% iron, the hard coal consumption comes to 800 kg per ton of iron. Additionally, 300 kg of coke is consumed when the Luppen are smelted in the blast furnace. When such ore is smelted entirely in the blast furnace, total fuel consumption is similar, but it consists only of coke, a much more expensive fuel than hard coal. On the other hand, slags with over 60% silica content are acidic, which conflicts with metal desulfurization, a process that demands highly basic slags. Consequently, 30% of the fuel's sulfur ends up in the iron, entailing expensive after-treatments to remove it. Productivity Depending on the ore and plant size, a furnace can produce 250 to 800 tons of pre-reduced iron ore per day. The biggest furnaces, up to 5 meters in diameter and 110 meters long, can process 950 to 1,000 tons of ore daily, excluding fuel. A properly operated plant typically runs for around 300 days per year. The internal refractory typically lasts 7 to 8 months in the most exposed part of the furnace and 2 years elsewhere. In 1960, a Krupp–Renn furnace using low-grade ore yielded 100 kilotons of iron annually, while a contemporary modern blast furnace produced ten times as much cast iron. Direct reduction processes employing rotary furnaces frequently face a significant challenge from the localized formation of iron and slag rings, which sinter together and gradually obstruct the furnace. Understanding the mechanism of ring formation is complex, involving mineralogy, chemical reactions, and ore preparation. The ring, which progressively grows and poisons the furnace, is caused by a few elements present in minute quantities. To remedy it, increasing the supply of combustion air or interrupting the furnace charging process are effective solutions; otherwise, it may be necessary to adjust the grain size of the charged ore or the chemical composition of the mineral blend. In 1958, Krupp constructed a plant able to generate 420,000 tons per year of pre-reduced iron ore (with six furnaces), which had an estimated value of 90 million Deutsche Mark, or 21.4 million dollars. By contrast, the plant erected in Salzgitter-Watenstedt in 1956–1957, which was well integrated with an existing steelworks, cost only 33 million Deutsche Mark. At that time, a Krupp–Renn plant presented itself as a feasible alternative to the established blast furnace route in terms of investment and operating costs: the initial investment cost per ton produced was nearly half that of a blast furnace, although operating costs were roughly two and a half times greater. The slag, a glassy silica, can be readily used as an additive in road surfaces or concrete. However, the method does not produce a recoverable gas comparable to blast furnace gas, which decreases its profitability in most cases, even though it thereby avoids the complications of gas recovery. Plants built Heritage Evolution In view of its performance, the process seemed a suitable basis for the development of more efficient variants. Around 1940, the Japanese built several small reduction furnaces operating at lower temperatures: one at Tsukiji (1.8 m × 60 m), two at Hachinohe (2 furnaces of 2.8 m × 50 m), and three at Takasago (2 furnaces of 1.83 m × 27 m and 1 furnace of 1.25 m × 17 m).
However, since they do not produce Luppen, they cannot be equated with the Krupp–Renn process. Although direct reduction in a rotary furnace has been the subject of numerous developments, the logical descendant of the Krupp–Renn process is the "Krupp-CODIR process". Developed in the 1970s, it is based on the general principles of the Krupp–Renn process but with a lower reduction temperature, typically between 950 and 1,050 °C, which saves fuel but is insufficient to achieve partial melting of the charge. The addition of basic corrective additives (generally limestone or dolomite) mixed with the ore allows the removal of sulfur from the coal, although the thermolysis of these additives is highly endothermic. This process has been adopted by three plants: Dunswart Iron & Steel Works in South Africa in 1973, Sunflag Iron and Steel in 1989, and Goldstar Steel & Alloy in India in 1993. Although the industrial application is now well established, the process has not had the impact of its predecessor. Finally, there are many post-Krupp–Renn direct reduction processes based on a tubular rotary furnace. At the beginning of the 21st century, their combined output represented between 1% and 2% of world steel production. In 1935 and 1960, the output of the Krupp–Renn process (1 and 2 million tons respectively) represented just under 1% of world steel production. Treatment of ferrous by-products The Krupp–Renn process, which specialized in the beneficiation of poor ores, was the logical basis for the development of recycling processes for ferrous by-products. In 1957, Krupp tested a furnace for the treatment of roasted pyrites to extract iron (in the form of Luppen) and zinc (vaporized in the flue gases). This process is a hybrid of the Waelz and Krupp–Renn processes, which is why it is called the "Krupp-Waelz" (or "Renn-Waelz") process. The trials were limited to a single 2.75 m × 40 m demonstrator capable of processing 70 to 80 t/day and were not followed up. The technical relationship between Krupp–Renn and Japanese direct reduction production processes is often cited. In the 1960s, Japanese steelmakers, sharing the observation that furnace plugging was difficult to control, developed their own low-temperature variants of the Krupp–Renn process. Kawasaki Steel commissioned direct-reduction furnaces at two of its plants (1968 and 1975), the most visible feature of which was a pelletizing unit for the site's steelmaking by-products (sludge and dust from the cleaning of converter and blast furnace gases). The "Kawasaki process" also incorporates other developments, such as the combustion of oil instead of pulverized coal and the use of coke powder instead of coal mixed with the ore. Almost identical to the Kawasaki process (with a more elaborate pelletizing unit), the "Koho process" was adopted by Nippon Steel, which commissioned a plant of this type in 1971. The Ōeyama process The production of ferronickel from laterites takes place in a context that is much more favorable to the Krupp–Renn process than steelmaking is. Lateritic ores in the form of saprolite are poor, very basic, and contain iron. Production volumes are moderate, and the chemistry of nickel is remarkably amenable to rotary kiln reduction. The process is therefore attractive, but regardless of the metal extracted, mastering all the physical and chemical transformations in a single reactor is a real challenge.
The failure of the Larco plant at Lárymna, Greece, illustrates the risk involved in adopting this process: only when the ore was ready for industrial processing did it prove incompatible with the Krupp–Renn process. By contrast, lower-temperature reduction followed by electric furnace smelting allows each stage to have its own dedicated tool, for greater simplicity and efficiency. Developed in 1950 at the Doniambo plant in New Caledonia, this combination has proven to be both cost-effective and, above all, more robust. Large rotating drums (5 m in diameter and 100 m or even 185 m long) are used to produce a dry powder from nickel ore concentrate. This powder contains 1.5 to 3% nickel. It leaves the drum at 800–900 °C and is immediately melted in electric furnaces. Only partial reduction takes place in the drums: a quarter of the nickel comes out in metallic form, the rest remains oxidized. Only 5% of the iron is reduced to metal, leaving unburned coal as fuel for the subsequent melting stage in the electric furnace. This proven process (also known as the RKEF process, for Rotary Kiln-Electric Furnace) has become the norm: at the beginning of the 21st century, it accounted for almost all nickel laterite processing. In the early 21st century, however, the Nihon Yakin Kogyo foundry in Ōeyama, Japan, continued to use the Krupp–Renn process to produce intermediate-grade ferronickel (23% nickel), sometimes called nickel pig iron. With a monthly output of 1,000 tons of Luppen and a production capacity of 13 kt/year, the plant operates at full capacity. It is the only plant in the world using this process, and the only one using a direct reduction process to extract nickel from laterite. The process, which has been significantly upgraded, is called the "Ōeyama process". The Ōeyama process differs from the Krupp–Renn process in its use of limestone and in the briquetting of the ore prior to charging. It retains the main advantages of its predecessor: the concentration of all pyrometallurgical reactions in a single reactor and the use of standard (i.e. non-coking) coal, which covers 90% of the energy requirements of the process. Coal consumption is only 140 kg per ton of dry laterite, and the quality of the ferronickel obtained is compatible with direct use by the steel industry. Although marginal, the Krupp–Renn process remains a modern, high-capacity route for the production of nickel pig iron. In this context, it is still systematically studied as an alternative to the RKEF process and to the sinter plant-blast furnace combination. See also Direct reduction Direct reduced iron :fr:Histoire de la production de l'acier :fr:Friedrich Johannsen Notes References Bibliography Chemistry Iron Metals Alloys Metallurgy Blast furnaces Ore deposits
Krupp–Renn process
[ "Chemistry", "Materials_science", "Engineering" ]
5,024
[ "Metals", "Metallurgy", "History of metallurgy", "Materials science", "Blast furnaces", "Alloys", "Chemical mixtures", "nan" ]
65,289,048
https://en.wikipedia.org/wiki/Potamology
Potamology (from Greek ποταμός, potamós, "river", and λόγος, lógos, "study") is the study of rivers, a branch of hydrology. The subject of study comprises the hydrological processes of rivers, the morphometry of river basins, the structure of river networks, channel processes, the regime of river mouth areas, evaporation and infiltration of water in a river basin, the water, thermal, and ice regimes of rivers, the sediment regime, the sources and types of river feeding, and various chemical and physical processes in rivers. Bibliography Lindeman, R. L., "The trophic-dynamic aspect of ecology", Ecology, 1942, XXIII, pp. 399–418. Williams, R. B., "Computer simulation of energy flow in Cedar Bog Lake, Minnesota, based on the classical studies of Lindeman", Systems analysis and simulation in ecology (ed. B. C. Patten), vol. I, New York 1971, pp. 543–582. Hydrology
Potamology
[ "Chemistry", "Engineering", "Environmental_science" ]
207
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
73,871,740
https://en.wikipedia.org/wiki/List%20of%20spacetimes
This is a list of well-known spacetimes in general relativity. Where the metric tensor is given, a particular choice of coordinates is used, but there are often other useful choices of coordinates available. In general relativity, spacetime is described mathematically by a metric tensor (on a smooth manifold), conventionally denoted $g$ or $g_{\mu\nu}$. This metric is sufficient to formulate the vacuum Einstein field equations. If matter is included, described by a stress-energy tensor, then one has the Einstein field equations with matter. On certain regions of spacetime (and possibly the entire spacetime) one can describe the points by a set of coordinates. In this case, the metric can be written down in terms of the coordinate one-forms and coordinates. During the course of the development of the field of general relativity, a number of explicit metrics have been found which satisfy the Einstein field equations, a number of which are collected here. These model various phenomena in general relativity, such as possibly charged or rotating black holes and cosmological models of the universe. On the other hand, some of the spacetimes are more of academic or pedagogical interest than models of physical phenomena. Maximally symmetric spacetimes These are spacetimes which admit the maximum number of isometries or Killing vector fields for a given dimension, and each of these can be formulated in an arbitrary number of dimensions. Minkowski spacetime $$ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$$ de Sitter spacetime $$ds^2 = -dt^2 + \alpha^2 \sinh^2(t/\alpha)\, dH^2,$$ where $\alpha$ is real and $dH^2$ is the standard hyperbolic metric. Anti-de Sitter spacetime $$ds^2 = \alpha^2\left(-\cosh^2\rho\, dt^2 + d\rho^2 + \sinh^2\rho\, d\Omega^2\right)$$ Black hole spacetimes These spacetimes model black holes. The Schwarzschild and Reissner–Nordström black holes are spherically symmetric, while the Schwarzschild and Kerr ones are electrically neutral. Schwarzschild spacetime $$ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1} dr^2 + r^2\, d\Omega^2,$$ where $d\Omega^2$ is the round metric on the two-sphere and $M$ is a positive, real parameter. Kruskal spacetime (maximally extended Schwarzschild spacetime) $$ds^2 = \frac{32M^3}{r}\, e^{-r/2M}\left(-dT^2 + dX^2\right) + r^2\, d\Omega^2,$$ where $r = r(T, X)$ is defined implicitly by $T^2 - X^2 = \left(1 - \frac{r}{2M}\right) e^{r/2M}$. Reissner–Nordström spacetime $$ds^2 = -\left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right) dt^2 + \left(1 - \frac{2M}{r} + \frac{Q^2}{r^2}\right)^{-1} dr^2 + r^2\, d\Omega^2$$ Kerr spacetime Kerr–Newman spacetime $$ds^2 = -\frac{\Delta}{\rho^2}\left(dt - a \sin^2\theta\, d\phi\right)^2 + \frac{\sin^2\theta}{\rho^2}\left[(r^2 + a^2)\, d\phi - a\, dt\right]^2 + \frac{\rho^2}{\Delta}\, dr^2 + \rho^2\, d\theta^2$$ See Boyer–Lindquist coordinates for details on the terms appearing in this formula. Cosmological spacetimes FLRW spacetime $$ds^2 = -dt^2 + a(t)^2 \left[\frac{dr^2}{1 - kr^2} + r^2\, d\Omega^2\right],$$ where $k$ is often restricted to take values in the set $\{-1, 0, +1\}$. Lemaître–Tolman spacetime Gravitational wave spacetimes pp-wave spacetime $$ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2$$ Other Spherically symmetric spacetime Asymptotically flat spacetime Non-relativistic spacetime Static spacetime Einstein static universe spacetime Alcubierre spacetime Ellis wormhole spacetime Gödel spacetime Taub–NUT spacetime Kasner spacetime Mixmaster spacetime See also Spacetime symmetries Quantum spacetime References Sources General Relativity, R. Wald, The University of Chicago Press, 1984. Spacetime and Geometry, S. Carroll, Cambridge University Press, 2019. General relativity
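As a quick sanity check on such metrics, one can verify symbolically that the Schwarzschild metric satisfies the vacuum Einstein field equations, i.e. that its Ricci tensor vanishes. Below is a minimal sketch using SymPy, computing the Christoffel symbols and the Ricci tensor by brute force in the coordinates given above (variable names are illustrative):

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
# Schwarzschild metric, diag(-f, 1/f, r^2, r^2 sin^2(theta))
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                             - sp.diff(g[b, c], x[d])) for d in range(n)) / 2
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#               + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        + sum(Gamma[a][a][d] * Gamma[d][b][c]
              - Gamma[a][c][d] * Gamma[d][b][a] for d in range(n))
        for a in range(n)))

assert all(ricci(b, c) == 0 for b in range(n) for c in range(n))
print("Schwarzschild Ricci tensor vanishes: vacuum solution confirmed")
```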
List of spacetimes
[ "Physics" ]
585
[ "General relativity", "Theory of relativity" ]
73,876,743
https://en.wikipedia.org/wiki/Anaxam
ANAXAM stands for "Analytics with Neutrons And X-rays for Advanced Manufacturing"; it is a knowledge and technology transfer centre in Switzerland. ANAXAM is part of the federal government's "Digitalisation" action plan and a member of the Advanced Manufacturing Technology Transfer Centers (AM-TTC) association. It is a non-profit organisation that aims to provide industry with access to advanced analytical methods originally developed for basic research, and it works with project partners on the basis of public-private partnerships. The centre provides industry with materials analysis using neutron and synchrotron radiation (X-rays) in the field of non-destructive material testing. The technologies offered support companies in the optimisation of processes and products as well as in quality control and quality assurance. The project partners come from the raw materials industry, the metal industry, medical technology, the pharmaceutical industry, and the automotive industry, among others. They include large companies as well as SMEs, and regional as well as national and international companies. Amongst others, ANAXAM uses the large-scale research facilities of the Paul Scherrer Institute (PSI), particularly the Swiss Spallation Neutron Source (SINQ) and the Swiss Light Source (SLS). ANAXAM is located in the immediate vicinity of the Paul Scherrer Institute, on the Park Innovaare campus in Villigen, in the canton of Aargau. Structure The knowledge and technology transfer centre is part of the innovation landscape of the canton of Aargau. References External links ANAXAM PSI FHNW Swiss Nanoscience Institute Canton of Aargau ANAXAM Business Report 2021 (German) ANAXAM brochure Materials science Technology transfer Particle physics facilities Accelerator physics Synchrotron radiation Industrial processes
Anaxam
[ "Physics", "Materials_science", "Engineering" ]
397
[ "Applied and interdisciplinary physics", "Materials science", "Experimental physics", "nan", "Accelerator physics" ]
78,203,646
https://en.wikipedia.org/wiki/K.%20Andre%20Mkhoyan
K. Andre Mkhoyan (born 1974) is the Ray D. and Mary T. Johnson Chair and Professor in the Department of Chemical Engineering and Materials Science at the University of Minnesota. He is recognized for advancing both fundamental scientific understanding and diverse applications of scanning transmission electron microscopy (STEM) techniques. He was elected a Fellow of the Microscopy Society of America in 2024 for "seminal contributions to the understanding of electron beam channeling, quantification of imaging and spectroscopy in STEM, and for his discovery of fundamentally new behavior in crystal point and line defects using STEM." According to Web of Science, he has produced over 180 published works that have been cited over 9,800 times, with an h-index of 44 as of October 25, 2024. Early life and education Andre Mkhoyan was born in 1974 in Yerevan, Armenia. He received his high school diploma from the Artashes Shahinyan Physics-Mathematics School. In 1996, Mkhoyan graduated with a B.Sc. (Hons.) in Physics from Yerevan State University. Following graduation, he moved to the United States to pursue a research career specializing in transmission electron microscopy (TEM). After working at Bell Labs as a research scientist in support of the SCALPEL projection electron-beam lithography project, he began graduate studies in Applied and Engineering Physics at Cornell University in 1999. Working under the supervision of Professor John Silcox (1935–2024), Mkhoyan received his M.S. in Engineering Physics in 2003 and his Ph.D. in Applied Physics in 2004. His dissertation was titled "Scanning Transmission Electron Microscopy Study of III-V Nitrides." In his postdoctoral research, Mkhoyan worked with Professor Silcox at Cornell as well as with Dr. Philip E. Batson at the IBM Thomas J. Watson Research Center. In this period, he was one of the first people to work on a prototype aberration-corrected STEM, a VG microscope retrofitted with a NION probe-corrector. He worked extensively on understanding electron beam channeling, spectroscopy, and quantification methods during this time. In 2008, Mkhoyan joined the University of Minnesota as an Assistant Professor of Chemical Engineering and Materials Science, where he continues to be a professor and to lead an electron microscopy research group. He is also the chair of the Electron Microscopy Management Committee at the University of Minnesota. Scientific contributions Mkhoyan has contributed to fundamental studies of the physics of electron microscopy and has pioneered challenging applications of high-resolution analytical STEM, specifically the multimodal use of EELS and EDX signals in conjunction with annular dark-field imaging in materials science. He has been credited with pushing the boundaries of atomic-resolution analytical STEM for the better understanding of point, line, and planar defects, including the discovery of two new line defects and of interaction mechanisms between dopant atoms and dislocations. He has worked extensively on the development of new methods for STEM-based atomic-resolution imaging and EELS spectroscopy of highly beam-sensitive zeolites and metal-organic frameworks, and on atomic-resolution in-situ STEM. His research has improved understanding of electron beam channeling and the quantification of imaging and EELS spectroscopy in STEM, including identification of the factors behind electron beam channeling even in non-periodic crystals and the discovery of a new kind of sub-atomic beam channeling.
Works Andre Mkhoyan has authored numerous journal articles describing significant advances in STEM, defects in materials, nanoporous materials, complex oxides, and two-dimensional/single-layer materials which includes but is not limited to: Dopant Segregation Inside and Outside Dislocation Cores in Perovskite BaSnO3 and Reconstruction of the Local Atomic and Electronic Structures; H. Yun; A. Prakash; T. Birol; B. Jalan, K. A. Mkhoyan; Nano Lett. 21, 10, 4357 (2021) Metallic line defect in wide-bandgap transparent perovskite BaSnO3; H. Yun, M. Topsakal, A. Prakash, B. Jalan, J. S. Jeong, T. Birol, K. A. Mkhoyan; Sci. Adv. 7, eabd4449 (2021) Uncovering atomic migrations behind magnetic tunnel junction breakdown; H. Yun, D. Lyu, Y. Lv, B. R. Zink, P. Khanal, B. Zhou, W. G. Wang, J. P. Wang, K. A. Mkhoyan; ACS Nano 18, 25708 (2024) One-dimensional intergrowths in two-dimensional zeolite nanosheets and their effect on ultra-selective transport; P. Kumar, D. W. Kim, N. Rangnekar, H. Xu, E. O. Fetisov, S. Ghosh, H. Zhang, Q. Xiao, M. Shete, J. I. Siepmann, T. Dumitrica, B. McCool, M. Tsapatsis, K. A. Mkhoyan; Nature Mater. 19, 443 (2020) A New Line Defect in NdTiO3 Perovskite; J. S. Jeong, M. Topsakal, P. Xu, B. Jalan, R. M. Wentzcovitch, K. A. Mkhoyan; Nano Lett. 16, 6816 (2016) Probing core-electron orbitals by scanning transmission electron microscopy and measuring the delocalization of core-level excitations; J. S. Jeong, M. L. Odlyzko, P. Xu, B. Jalan, and K. A. Mkhoyan; Phys. Rev. B 93, 165140 (2016) Imaging 'Invisible' Dopant Atoms in Semiconductor Nanocrystals; A. A. Gunawan, K. A. Mkhoyan, A. W. Wills, M. G. Thomas, D. J. Norris; Nano Lett. 11, 5553 (2011) Radiolysis to knock-on damage transition in zeolites under electron beam irradiation; O. Ugurlu, J. Haus, A. A. Gunawan, M. G. Thomas, S. Maheshwari, M. Tsapatsis, K. A. Mkhoyan; Phys Rev B, 83, 113408 (2011) Atomic and Electronic Structure of Graphene-Oxide; K.A. Mkhoyan, A.W. Contryman, J. Silcox, D.A. Stewart, G. Eda, C. Mattevi, S. Miller, M. Chhowalla; Nano Lett. 9, 1058 (2009) Full recovery of electron damage in glass at ambient temperatures; K.A. Mkhoyan, J. Silcox, A. Ellison, D. Ast, R. Dieckmann, Phys. Rev. Lett. 96, 205506 (2006) References 1974 births Living people Cornell University alumni People from Yerevan Microscopists Condensed matter physicists Electron microscopy Materials scientists and engineers Scientists at Bell Labs American materials scientists American physicists Armenian emigrants to the United States Minnesota CEMS University of Minnesota
K. Andre Mkhoyan
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,504
[ "Electron", "Electron microscopy", "Condensed matter physicists", "Microscopists", "Materials science", "Materials scientists and engineers", "Condensed matter physics", "Microscopy" ]
78,205,862
https://en.wikipedia.org/wiki/Lqh%CE%B1IT
Alpha-insect toxin LqhαIT is a neurotoxic protein found in the venom of Leiurus hebraeus, commonly known as the Hebrew deathstalker scorpion. It is classified as an alpha-toxin due to its effect on insect voltage-gated sodium channels, causing prolonged neuronal firing that leads to paralysis in affected insects. This toxin has been widely studied for its unique interaction with insect nervous systems and has potential applications in neurophysiological research. Structure and Mechanism LqhαIT is part of the larger family of scorpion alpha-toxins that act specifically on insect sodium channels. The primary structure of LqhαIT consists of a polypeptide chain with several disulfide bridges, contributing to its stability and resistance to degradation. These disulfide bonds are essential for maintaining the conformation needed to bind effectively to target sodium channels in insect nerve cells. LqhαIT binds to voltage-gated sodium channels in insect neurons, causing a prolonged opening of the channels. This action prevents the neurons from returning to their resting state, leading to continuous firing and eventually paralysis. This mechanism is specific to insect sodium channels, which makes LqhαIT highly selective, with limited effects on mammalian sodium channels. Biological Function The primary function of LqhαIT is to immobilize prey, particularly insects, by inducing rapid neurotoxic effects. Upon envenomation, LqhαIT binds to the insect's sodium channels, leading to hyperexcitation and paralysis. This allows the scorpion to subdue its prey quickly and effectively. The specificity of LqhαIT for insect sodium channels also plays a role in the evolutionary adaptation of Leiurus hebraeus, helping it to target insect prey within its native desert ecosystem. Research and Applications Neurophysiological Research: LqhαIT's specificity for insect sodium channels has made it a valuable tool in neurophysiological research. Scientists use this toxin to study the role of sodium channels in neuronal function and to better understand the differences between insect and mammalian ion channel structures. LqhαIT also serves as a model for studying the structure-function relationship of neurotoxins, as it exhibits highly selective binding characteristics that are important for developing novel bioinsecticides. LqhαIT: Structure and Functional Insights As one of the most potent scorpion α-neurotoxins targeting insects, LqhαIT serves as a crucial model for understanding the structural basis of selective toxicity and biological activity among α-neurotoxins. Its structure was determined through proton two-dimensional nuclear magnetic resonance spectroscopy (2D NMR), revealing detailed conformational features and providing insights into the interactions that underlie its insecticidal potency. Apo Structure The solution structure of LqhαIT was determined using 2D NMR. The structural features include: Secondary Structure: LqhαIT consists of an α-helix and a three-stranded antiparallel β-sheet. These elements are stabilized by three type I tight turns and a five-residue turn. Hydrophobic Patch: A distinct hydrophobic patch, characteristic of scorpion neurotoxins, includes tyrosine and tryptophan residues arranged in a "herringbone" pattern. This region likely contributes to toxin stability and interaction with insect sodium channels.
Comparison with the Anti-mammalian α-Toxin AaHII The polypeptide backbone of LqhαIT closely resembles that of AaHII, an anti-mammalian α-toxin from Androctonus australis Hector, sharing approximately 60% amino acid sequence similarity. However, critical structural differences exist between the two, particularly in the five-residue turn involving Lys8-Cys12, the C-terminal segment, and the relative orientation of these regions. These variations are thought to underpin LqhαIT's selectivity for insect sodium channels, whereas AaHII is more effective against mammalian targets. Cryo-EM structure of LqhαIT bound to NavPaS Scorpion α-toxin LqhαIT exerts its potent insecticidal effects by specifically binding to a unique glycan on the insect voltage-gated sodium (Nav) channel. Cryo-electron microscopy (cryo-EM) studies have elucidated the structure of LqhαIT in complex with the insect Nav channel, revealing the intricate interactions between the toxin and the glycan scaffold attached to asparagine 330 on the channel. This glycan provides a distinct epitope that facilitates selective binding of LqhαIT to insect channels, stabilizing the voltage sensor domain in an inactive "S4 down" conformation. This mechanism contrasts with that of similar toxins targeting mammalian channels, highlighting LqhαIT's selectivity for, and effectiveness against, insects. Further studies demonstrated that LqhαIT contains an NC-domain epitope, including residues critical for binding to the glycan scaffold, enabling the toxin to maintain a stable interaction with the Nav channel. Molecular dynamics simulations confirm the stability of these interactions, including hydrogen bonds and salt bridges, which remain consistent throughout the simulations. This glycan binding contributes to the potency of LqhαIT and offers insights into the design of insect-specific Nav channel modulators. The structure-function relationship observed here underscores the utility of such toxins as models for developing targeted Nav channel modulators with minimal off-target effects on mammalian systems. Toxicology and Safety While LqhαIT is toxic to insects, it exhibits minimal toxicity to mammals, including humans. This specificity is due to structural differences in mammalian sodium channels, which do not interact with LqhαIT in the same way as insect channels. However, the venom of Leiurus hebraeus as a whole can still pose significant risks to humans, as it contains other potent toxins targeting various components of the nervous system. Proper safety measures are necessary when handling scorpion venom in laboratory settings to prevent accidental envenomation. See also Leiurus hebraeus Scorpion venom Voltage-gated sodium channels References Scorpion toxins Peptides
LqhαIT
[ "Chemistry" ]
1,296
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
78,209,653
https://en.wikipedia.org/wiki/Three-factor%20learning
In neuroscience and machine learning, three-factor learning is the combination of Hebbian plasticity with a third modulatory factor that stabilises and enhances synaptic learning. This third factor can represent various signals such as reward, punishment, error, surprise, or novelty, and is often implemented through neuromodulators. Description Three-factor learning introduces the concept of eligibility traces, which flag synapses for potential modification pending the arrival of the third factor. This helps temporal credit assignment by bridging the gap between rapid neuronal firing and the slower behavioral timescales on which learning takes place. Three-factor learning rules have been supported by experimental evidence of their biological basis. This approach addresses the instability of classical Hebbian learning by minimizing autocorrelation and maximizing cross-correlation between inputs. References Machine learning
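A minimal NumPy sketch of such a rule is shown below: Hebbian coincidences between pre- and postsynaptic spikes are written into decaying eligibility traces, and the weights change only when a third factor (here a sparse, delayed reward signal) arrives. All constants and the toy spike statistics are illustrative assumptions, not taken from any specific published model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 50, 10
w = 0.01 * rng.standard_normal((n_post, n_pre))  # synaptic weights
e = np.zeros_like(w)                             # eligibility traces
eta, tau_e, dt = 1e-3, 0.5, 0.01                 # learning rate, trace decay, step

for step in range(1000):
    pre = (rng.random(n_pre) < 0.05).astype(float)    # presynaptic spikes
    post = (rng.random(n_post) < 0.05).astype(float)  # postsynaptic spikes
    # Factor 1 x factor 2: a Hebbian coincidence flags the synapse...
    e += np.outer(post, pre)
    # ...and the flag decays until the third factor arrives
    e -= (dt / tau_e) * e
    reward = 1.0 if step % 100 == 99 else 0.0  # factor 3: sparse, delayed reward
    # Weights change only when all three factors align
    w += eta * reward * e
```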
Three-factor learning
[ "Engineering" ]
177
[ "Artificial intelligence engineering", "Machine learning" ]
78,210,098
https://en.wikipedia.org/wiki/Williams%20diagram
In combustion, the Williams diagram refers to a classification diagram of different turbulent combustion regimes in a plane having the turbulent Reynolds number as the x-axis and the turbulent Damköhler number as the y-axis. The diagram is named after Forman A. Williams (1985). The two non-dimensional numbers are defined as $$\mathrm{Re}_T = \frac{u' l}{\nu}, \qquad \mathrm{Da}_T = \frac{l/u'}{\tau_c},$$ where $u'$ is the rms turbulent velocity fluctuation, $l$ is the integral length scale, $\nu$ is the kinematic viscosity and $\tau_c$ is the chemical time scale. The Reynolds number based on the Taylor microscale becomes $\mathrm{Re}_\lambda \sim \mathrm{Re}_T^{1/2}$. The Damköhler number based on the Kolmogorov time scale is given by $\mathrm{Da}_k = \tau_k/\tau_c = \mathrm{Da}_T\,\mathrm{Re}_T^{-1/2}$. The Karlovitz number is defined by $\mathrm{Ka} = \tau_c/\tau_k = \mathrm{Re}_T^{1/2}/\mathrm{Da}_T$. The Williams diagram is universal in the sense that it is applicable to both premixed and non-premixed combustion. In supersonic combustion and detonations, the diagram becomes three-dimensional due to the addition of the Mach number $\mathrm{Ma} = u'/a$ as the z-axis, where $a$ is the sound speed. Borghi–Peters diagram In premixed combustion, an alternative diagram, known as the Borghi–Peters diagram, is also used to describe the different regimes. This diagram is named after Roland Borghi (1985) and Norbert Peters (1986). The Borghi–Peters diagram uses $l/\delta_L$ as the x-axis and $u'/S_L$ as the y-axis, where $\delta_L$ and $S_L$ are the thickness and speed of the planar, laminar premixed flame. Since $\nu = \mathrm{Pr}\, S_L \delta_L$, where $\mathrm{Pr}$ is the Prandtl number (set $\mathrm{Pr} = 1$), and $\tau_c = \delta_L/S_L$ in premixed flames, we have $$\mathrm{Re}_T = \frac{u'}{S_L}\,\frac{l}{\delta_L}, \qquad \mathrm{Da}_T = \frac{l/\delta_L}{u'/S_L}.$$ The limitations of the Borghi–Peters diagram are that (1) it cannot be used for non-premixed combustion and (2) it is not suitable for practically relevant cases where both $\mathrm{Re}_T$ and $\mathrm{Da}_T$ are increased concurrently, such as increasing the nozzle radius while maintaining constant nozzle exit velocity. References Combustion Fluid dynamics
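Putting numbers on these definitions is straightforward; a minimal sketch locating a flame on the diagram follows (the function name and the example values are illustrative, not data from the references):

```python
import numpy as np

def turbulence_numbers(u_rms, l_int, nu, tau_c):
    """Coordinates of a flame on the Williams diagram.

    u_rms : rms turbulent velocity fluctuation [m/s]
    l_int : integral length scale [m]
    nu    : kinematic viscosity [m^2/s]
    tau_c : chemical time scale [s]
    """
    Re = u_rms * l_int / nu          # turbulent Reynolds number
    Da = (l_int / u_rms) / tau_c     # turbulent Damkohler number
    Ka = np.sqrt(Re) / Da            # Karlovitz number (Kolmogorov-based)
    return Re, Da, Ka

# Illustrative atmospheric-pressure estimate
Re, Da, Ka = turbulence_numbers(u_rms=2.0, l_int=5e-3, nu=1.5e-5, tau_c=1e-3)
print(f"Re_T = {Re:.0f}, Da_T = {Da:.2f}, Ka = {Ka:.2f}")
```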
Williams diagram
[ "Chemistry", "Engineering" ]
371
[ "Piping", "Chemical engineering", "Combustion", "Fluid dynamics" ]
63,727,677
https://en.wikipedia.org/wiki/Arterial%20input%20function
Arterial input function (AIF), also known as a plasma input function, refers to the concentration of tracer in blood plasma in an artery measured over time. The oldest record on PubMed shows that an AIF was used by Harvey et al. in 1962 to measure the exchange of materials between red blood cells and blood plasma, and by other researchers in 1983 for positron emission tomography (PET) studies. Nowadays, kinetic analysis is performed in various medical imaging techniques, which requires an AIF as one of the inputs to the mathematical model, for example in dynamic PET imaging, perfusion CT, or dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). How is AIF obtained AIF can be obtained in several different ways: invasively, by continuous arterial sampling with an online blood monitor; invasively, from arterial blood samples obtained at discrete time points post-injection; minimally invasively, using a population-based AIF, where the input function in a subject is estimated partly from prior information obtained from a previous population and partly from blood information from the subject itself obtained at the time of scanning; or using an image-derived arterial input function (IDAIF), obtained by placing a region of interest (ROI) over an artery and calibrating the resulting curves against venous blood samples obtained during the later phases (30 to 60 minutes) of the dynamic scan, when venous and arterial tracer concentrations become equal. A dynamic scan is a scan in which two-dimensional (2D) or three-dimensional (3D) images are acquired repeatedly over a time period, forming a time series of 2D/3D image datasets. For example, a dynamic PET scan acquired over a period of one hour contains a few short initial image frames of 5 seconds' duration, to capture the fast dynamics of the tracer immediately after tracer injection, and later frames acquired for 30 seconds each. Each data point in the AIF curve represents a measurement of tracer concentration in an artery obtained from one of these image time frames, with external corrections applied to it. These four methods are briefly described as follows: Continuous arterial sampling Continuous arterial blood sampling is invasive, painful, and uncomfortable for patients. Continuous arterial sampling was obtained in postmenopausal women imaged using [18F]NaF for bone studies. Discrete arterial sampling Discrete arterial blood sampling is invasive, painful, and uncomfortable for patients. Cook et al. measured discrete blood samples and compared them to continuous arterial sampling in postmenopausal women imaged using [18F]NaF for bone studies. Another study in head and neck cancer patients imaged using [18F]FLT PET, and numerous other studies, obtained discrete arterial samples for the estimation of the arterial input function. The approach of obtaining discrete arterial samples was based on the observation that the bolus peak occurs within 5 minutes after injection and that the latter part of the curve, in most cases, follows a single or bi-exponential form. This implied that continuous arterial sampling was not necessary, and that discrete arterial blood samples were enough to obtain continuous curves using an exponential model fit. Population-based method A population-based input function generally relies on datasets previously obtained by other researchers in a specific set of populations, and average values are used.
The methods generally provide better results if a large number of datasets is used; they are based on the assumption that the input function in a new patient in this sub-group of the population will differ insignificantly from the population average values. In a neuroinflammation study, the authors used a population-based input function in healthy volunteers and liver-transplanted patients imaged using [18F]GE-180 PET. In another study, healthy controls and patients with Parkinson's and Alzheimer's disease were imaged using [18F]FEPPA PET. Zanotti-Fregonara et al. thoroughly reviewed the literature on the arterial input function used for brain PET imaging and suggested the possibility of population-based arterial input functions as a potential alternative to invasive arterial sampling. However, Blake et al. derived a semi-population-based method from healthy postmenopausal women imaged using [18F]NaF for bone studies, based on the observation that the later part of the arterial input function can be constructed from venous blood samples, as the venous and arterial blood concentrations of tracer are equal 30 minutes after injection. They derived the peak of the curve from a previous study that used continuous arterial sampling, and the later part of the curve from the venous blood samples of the individual patient in whom the AIF is to be estimated. When combined, these yield a semi-population-based arterial input function. Image-derived method An image-derived arterial input function (IDAIF), obtained by measuring the tracer counts over the aorta, carotid artery, or radial artery, offers an alternative to invasive arterial blood sampling. An IDAIF at the aorta can be determined by measuring the tracer counts over the left ventricle, ascending aorta, and abdominal aorta, and this has been validated previously by various researchers. The arterial time-activity curve (TAC) from the image data requires corrections for metabolites formed over time; for differences between whole-blood and plasma activity, which are not constant over time; for partial volume errors (PVE) due to the small size of the ROI; for spill-over errors due to activity from neighbouring tissues outside the ROI; for errors due to patient movement; and for noise introduced by the limited number of counts acquired in the short image time frames. These errors are corrected using late venous blood samples, and the resulting curve is called an arterial input function (AIF). Numerous correction methods have been tried by researchers over the years. See also Positron emission tomography (PET) Time-activity curves (TAC) PET for bone imaging References Positron emission tomography Medical imaging Magnetic resonance imaging
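As noted in the discussion of discrete sampling above, the tail of an AIF is often well described by a single or bi-exponential curve fitted to a handful of blood samples. A minimal sketch with SciPy follows; the sample times, concentrations, and starting guesses are hypothetical values for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Bi-exponential model for the tail of an arterial input function."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Hypothetical discrete arterial samples taken after the bolus peak
t = np.array([5, 10, 20, 30, 40, 60], dtype=float)  # minutes post-injection
c = np.array([21.0, 14.5, 8.9, 6.2, 4.8, 3.3])      # tracer conc., kBq/mL

popt, _ = curve_fit(biexp, t, c, p0=(15, 0.2, 8, 0.02), maxfev=10000)
t_fine = np.linspace(5, 60, 200)
tail = biexp(t_fine, *popt)  # continuous estimate of the AIF tail
print("fitted parameters (a1, k1, a2, k2):", popt)
```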
Arterial input function
[ "Physics", "Chemistry" ]
1,243
[ "Antimatter", "Nuclear magnetic resonance", "Magnetic resonance imaging", "Positron emission tomography", "Matter" ]
63,729,730
https://en.wikipedia.org/wiki/Thulium%28III%29%20fluoride
Thulium(III) fluoride is an inorganic compound with the chemical formula TmF3. Production It can be produced by reacting thulium(III) sulfide and hydrofluoric acid, followed by thermal decomposition: 3 Tm2S3 + 20 HF + (2 + 2x)H2O → 2 (H3O)Tm3F10·xH2O↓ + 9 H2S↑ (x=1.7) (H3O)Tm3F10 → 3 TmF3 + HF↑ + H2O↑ Thulium(III) oxide also reacts with fluorinating agents such as hydrogen fluoride, nitrogen trifluoride, and xenon difluoride to give thulium(III) fluoride, although the reaction with nitrogen trifluoride is incomplete and produces a mixture of TmOF and TmF3. References Thulium compounds Fluorides Lanthanide halides
Thulium(III) fluoride
[ "Chemistry" ]
221
[ "Fluorides", "Salts" ]
63,731,225
https://en.wikipedia.org/wiki/Extensions%20of%20First%20Order%20Logic
Extensions of First Order Logic is a book on mathematical logic. It was written by María Manzano, and published in 1996 by Cambridge University Press as volume 19 of their book series Cambridge Tracts in Theoretical Computer Science. Topics The book concerns forms of logic that go beyond first-order logic, and in particular (following the work of Leon Henkin) the project of unifying them by translating all of these extensions into a specific form of logic, many-sorted logic. Beyond many-sorted logic, its topics include second-order logic (including its incompleteness and relation to Peano arithmetic), second-order arithmetic, type theory (in relational, functional, and equational forms), modal logic, and dynamic logic. It is organized into seven chapters. The first concerns second-order logic in its standard form, and proves several foundational results for this logic. The second chapter introduces the sequent calculus, a method of making sound deductions in second-order logic, and proves its incompleteness. The third continues the topic of second-order logic, showing how to formulate Peano arithmetic in it, and using Gödel's first incompleteness theorem to provide a second proof of the incompleteness of second-order logic. Chapter four formulates a non-standard semantics for second-order logic (due to Henkin), in which quantification over relations is limited to only the definable relations. It defines this semantics in terms of "second-order frames" and "general structures", constructions that will be used to formulate second-order concepts within many-sorted logic. In the fifth chapter, the same concepts are used to give a non-standard semantics to type theory. After these chapters on other types of logic, the final two chapters introduce many-sorted logic, prove its soundness, completeness, and compactness, and describe how to translate the other forms of logic into it. Audience and reception Although the book is intended as a textbook for advanced undergraduates or beginning graduate students, reviewer Mohamed Amer suggests that it does not have enough exercises to support a course in its subject, and that some of its proofs are lacking in detail. Reviewer Hans Jürgen Ohlbach suggests that it would be more usable as a reference than a textbook, and states that "it is certainly not suitable for undergraduates". Reviewer Yde Venema wonders how much of the logical power and useful properties of the various systems treated in this book have been lost in the translation to many-sorted logic, worries about the jump in computational complexity of automated theorem proving caused by the translation, complains about the book's clarity of exposition becoming lost in case analysis, and was disappointed at the lack of coverage of Montague grammar, fixed-point logic, and non-monotonic logic. Nevertheless, Venema recommends the book for courses introducing students to second-order and many-sorted logics, praising it for its "overwhelming and catching enthusiasm". And reviewer B. Boričić calls it "nice and clearly written" and "an appropriate introduction and reference", recommending it to researchers in the several disciplines (mathematics, computer science, linguistics, and philosophy) where advanced forms of logic are important. References Mathematical logic Mathematics books 1996 non-fiction books
Extensions of First Order Logic
[ "Mathematics" ]
667
[ "Mathematical logic" ]
63,732,867
https://en.wikipedia.org/wiki/Andrea%20Cavalleri
Andrea Cavalleri (born 1969) is an Italian physicist who specializes in optical science and condensed matter physics. He is the founding director of the Max Planck Institute for the Structure and Dynamics of Matter in Hamburg, Germany, and a professor of physics at the University of Oxford. He was awarded the 2018 Frank Isakson Prize for his pioneering work on ultrafast optical spectroscopy applied to condensed matter systems. Scientific achievements Cavalleri is known for his application of light to create new states of matter, and especially for the use of terahertz and mid-infrared optical pulses to sculpt new crystal structures. The field of research pioneered by Cavalleri is sometimes referred to as non-linear phononics. He has shown that phononic control can be used to create new crystal structures with light, to induce hidden metallic states in oxides, to induce ferroelectricity in dielectrics, to manipulate magnetism, and to create non-equilibrium superconductivity at very high temperatures. Cavalleri was also among the first to apply femtosecond X-ray pulses to condensed matter systems, for example in his studies of photo-induced phase transitions. Scientific career He received his laurea degree and PhD at the University of Pavia, as a student of the Almo Collegio Borromeo, in 1994 and 1998 respectively. Cavalleri has held research positions at the University of Essen, the University of California, San Diego and the Lawrence Berkeley National Laboratory. In 2005, he joined the faculty of the University of Oxford, where he was promoted to Professor of Physics in 2006. He joined the Max Planck Society in 2008. Selected honors and awards He is a recipient of the 2004 European Science Foundation Young Investigator Award, the 2015 Max Born Medal from the Institute of Physics (UK) and the Deutsche Physikalische Gesellschaft, and the 2015 Dannie Heineman Prize from the Göttingen Academy of Sciences. The American Physical Society awarded Cavalleri the 2018 Frank Isakson Prize for Optical Effects in Solids "for pioneering contributions to the development and application of ultra-fast optical spectroscopy to condensed matter systems, and providing insight into lattice dynamics, structural phase transitions, and the non-equilibrium control of solids". He is an elected fellow of the American Physical Society, the American Association for the Advancement of Science and the Institute of Physics (UK). He was elected a member of the Academia Europaea in 2017 and a fellow of the European Academy of Sciences in 2018. References External links Research homepage 21st-century Italian physicists 1969 births Living people Academics of the University of Oxford University of Pavia alumni Italian expatriates in the United States Italian expatriates in the United Kingdom Italian expatriates in Germany Condensed matter physicists Academic staff of the University of Hamburg Max Planck Society people Max Planck Institute directors Fellows of the American Physical Society Members of Academia Europaea
Andrea Cavalleri
[ "Physics", "Materials_science" ]
587
[ "Condensed matter physicists", "Condensed matter physics" ]
63,735,602
https://en.wikipedia.org/wiki/Prix%20Paul%20Doistau%E2%80%93%C3%89mile%20Blutet
The Prix Paul Doistau–Émile Blutet is a biennial prize awarded by the French Academy of Sciences in the fields of mathematics and the physical sciences since 1954. Each recipient receives 3000 euros. The prize is also awarded quadrennially in biology, and occasionally in other disciplines. List of laureates Mathematics 1958 Marc Krasner 1980 Jean-Michel Bony 1982 Jean-Pierre Ramis 1982 Gérard Maugin 1985 Dominique Foata 1986 Pierre-Louis Lions 1987 Pierre Bérard 1987 Lucien Szpiro 1999 Wendelin Werner 2001 Hélène Esnault 2004 Laurent Stolovitch 2006 Alice Guionnet 2008 Isabelle Gallagher 2010 Yves André 2012 Serge Cantat 2014 Sébastien Boucksom 2016 Hajer Bahouri 2018 Physical sciences 2002 2005 Mustapha Besbes 2007 2009 Hasnaa Chennaoui-Aoudjehane 2011 Henri-Claude Nataf 2013 2015 Philippe André 2019 Integrative biology 2000 Jérôme Giraudat 2004 Marie-Claire Verdus 2008 Hélène Barbier-Brygoo 2012 Olivier Hamant Mechanical and computational science 2000 Annie Raoult 2002 Gilles Francfort 2002 Jean-Jacques Marigo 2006 Hubert Maigre 2006 Andreï Constantinescu 2008 Pierre Comte 2010 Nicolas Triantafyllidis 2012 Élisabeth Guazzelli 2014 Jacques Magnaudet 2019 Denis Sipp Other disciplines 1967 Jacques Blamont 1975 1976 Martial Ducloy 1976 Arlette Nougarède 1981 Christian Bordé 1988 2019 References Awards of the French Academy of Sciences Awards established in the 1950s Mathematics awards
Prix Paul Doistau–Émile Blutet
[ "Technology" ]
308
[ "Science and technology awards", "Mathematics awards" ]
69,419,151
https://en.wikipedia.org/wiki/Gold%28II%29%20sulfate
Gold(II) sulfate is the chemical compound with the formula AuSO4 or, more correctly, Au2(SO4)2. This compound was previously thought to be a mixed-valent compound, Au(I)Au(III)(SO4)2. But later, it was shown that it contained the diatomic Au2^4+ cation, which made it the first simple inorganic gold(II) compound. The bond distance between the gold atoms in the diatomic cation is 249 pm. Production and properties Gold(II) sulfate is produced by reaction of sulfuric acid and gold(III) hydroxide. Gold(II) sulfate is unstable in air and oxidizes to hydrogen disulfoaurate(III). References Gold compounds Sulfates
Gold(II) sulfate
[ "Chemistry" ]
144
[ "Sulfates", "Salts" ]
69,421,101
https://en.wikipedia.org/wiki/Mariolina%20Padula
Mariarosaria (Mariolina) Padula (died 29 September 2012) was an Italian mathematical physicist specializing in fluid dynamics, including free boundary problems and compressible flow with viscosity. She was a professor of mathematical physics at the University of Ferrara, and is also known for revitalizing and heading the university's mathematics journal, Annali dell’Università di Ferrara, forging it into an internationally known publication. Education and career Padula studied mathematics at the University of Naples Federico II, and published her first works in mathematical physics in 1973. She was a student of Salvatore Rionero. She continued at the University of Naples as an assistant and associate professor until 1994, when she won a professorship at the University of Basilicata. In 1995, she moved to the department of mathematics and computer science at the University of Ferrara, where she remained for the rest of her career. Book Padula was the author of a monograph on her research specialty, Asymptotic stability of steady compressible fluids (Lecture Notes in Mathematics 2024, Springer, 2011). Recognition A symposium on mathematical fluid dynamics in Padula's honor was held at the University of Ferrara in 2014. In the same year, a special issue of Annali dell’Università di Ferrara was published in her memory. Personal life Padula married another student of Rionero, Giovanni Paolo Galdi; they later separated. She was the mother of Giovanbattista Galdi, a professor of linguistics at Ghent University. References Year of birth missing 2012 deaths Italian mathematicians Italian women mathematicians Italian physicists Italian women physicists Fluid dynamicists University of Naples Federico II alumni Academic staff of the University of Naples Federico II Academic staff of the University of Ferrara
Mariolina Padula
[ "Chemistry" ]
357
[ "Fluid dynamicists", "Fluid dynamics" ]
66,508,801
https://en.wikipedia.org/wiki/KatG
KatG is an enzyme that functions as both a catalase and a peroxidase. In Mycobacterium tuberculosis, mutations in KatG are commonly associated with resistance to the antibiotic drug isoniazid, which targets the mycolic acids within M. tuberculosis, and more generally with multi-drug resistance. Through both its catalase and peroxidase activities, this enzyme protects M. tuberculosis against reactive oxygen species. The survival of M. tuberculosis within macrophages depends on the KatG enzyme. References Bacterial enzymes Drug resistance Molecular biology stubs Proteins
KatG
[ "Chemistry" ]
118
[ "Biomolecules by chemical classification", "Pharmacology", "Molecular and cellular biology stubs", "Drug resistance", "Biochemistry stubs", "Molecular biology stubs", "Molecular biology", "Proteins" ]
66,513,157
https://en.wikipedia.org/wiki/Oxygen%20reduction%20reaction
In chemistry, the oxygen reduction reaction refers to the reduction half-reaction whereby O2 is reduced to water or hydrogen peroxide. In fuel cells, the reduction to water is preferred because the current is higher. The oxygen reduction reaction is well demonstrated and highly efficient in nature. Stoichiometry The stoichiometry of the oxygen reduction reaction, which depends on the medium, is as follows: 4e− pathway in acid medium: O2 + 4e− + 4H+ → 2H2O 2e− pathway in acid medium: O2 + 2e− + 2H+ → H2O2 4e− pathway in alkaline medium: O2 + 4e− + 2H2O → 4OH− 2e− pathway in alkaline medium: O2 + 2e− + H2O → HO2− + OH− 4e− pathway in solid oxide: O2 + 4e− → 2 O2− The 4e− pathway is the cathode reaction in fuel cells, especially in proton-exchange membrane fuel cells, alkaline fuel cells and solid oxide fuel cells, while the 2e− pathway is often a side reaction of the 4e− pathway or can be used in the synthesis of H2O2. (A worked thermodynamic comparison of the two acid-medium pathways is given below, after the Catalysts section.) Catalysts Biocatalysts The oxygen reduction reaction is an essential reaction for aerobic organisms. Such organisms are powered by the heat of combustion of fuel (food) by O2. Rather than combustion, organisms rely on elaborate sequences of electron-transfer reactions, often coupled to proton transfer. The direct reaction of O2 with fuel is thereby replaced by the oxygen reduction reaction, which produces water and, ultimately, adenosine triphosphate. Cytochrome c oxidase effects the oxygen reduction reaction by binding O2 in a heme–Cu complex. In laccase, O2 is engaged and reduced by a four-copper aggregate: three Cu centers bind O2, and one Cu center functions as an electron donor. Heterogeneous catalysts In fuel cells, platinum is the most common catalyst. Because platinum is expensive, it is dispersed on a carbon support. Certain facets of platinum are more active than others. Coordination complexes Detailed mechanistic understanding results from studies of transition metal dioxygen complexes, which represent models for the initial encounter between O2 and the metal catalyst. Early catalysts for the oxygen reduction reaction were based on cobalt phthalocyanines. Many related coordination complexes have since been tested as oxygen reduction reaction catalysts, with varying electrocatalytic performance achieved by these small molecules. These results triggered further research into non-noble-metal small molecules as oxygen reduction reaction electrocatalysts. Besides phthalocyanine, porphyrin is also a suitable ligand, providing the N4 part of the M-N4 site around the metal center. In biological systems, many oxygen-related physicochemical reactions, such as O2 delivery, O2 storage, O2 reduction and H2O2 oxidation, are carried out by proteins containing a metal–porphyrin unit.
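For context on why the four-electron pathway to water is preferred in fuel cells, the two acid-medium pathways can be compared using tabulated standard electrode potentials (E° = 1.229 V for O2/H2O and E° = 0.695 V for O2/H2O2); the arithmetic below is a sketch added here for illustration, not taken from the sources cited in this article:

```latex
\begin{align*}
\mathrm{O_2 + 4H^+ + 4e^- \to 2H_2O}, &\quad E^\circ = 1.229~\mathrm{V},\\
\mathrm{O_2 + 2H^+ + 2e^- \to H_2O_2}, &\quad E^\circ = 0.695~\mathrm{V},\\
\Delta G^\circ = -nFE^\circ \;\Rightarrow\;
\Delta G^\circ_{4e^-} &= -4\,(96485)\,(1.229) \approx -474~\mathrm{kJ\,mol^{-1}},\\
\Delta G^\circ_{2e^-} &= -2\,(96485)\,(0.695) \approx -134~\mathrm{kJ\,mol^{-1}}.
\end{align*}
```

The 4e− pathway thus releases far more free energy per O2 and supports a higher cell voltage, consistent with the higher current noted above.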
Recent development and modification Since the oxygen reduction reaction in fuel cells must be catalyzed heterogeneously, conductive substrates such as carbon materials are generally needed when constructing electrocatalysts. To increase the conductivity and enhance the substrate–loading interaction, thermal treatment is usually performed before application. During the treatment, M-N4 active sites tend to aggregate spontaneously because of their high intrinsic energy, which dramatically decreases the active-site density. Therefore, increasing the active-site density and creating atomically dispersed catalysts is a key step towards improving catalyst activity. To address this problem, porous substrates can be used to confine the active sites, or defects or ligands can be used to prevent migration of the active sites. At the same time, the porous structure or the defects are also beneficial to the oxygen adsorption process. Besides active-site density, the electron configuration of the M center in the M-N4 active site also plays an important role in the activity and stability of an oxygen reduction reaction catalyst, because the electron configuration of the M center affects the redox potential, which in turn determines the activation energy of the oxygen reduction reaction. A simple way to modulate the electron configuration is to change the ligands of the metal center. For example, researchers found that whether the N atoms in M-N4 active sites are pyrrolic or pyridinic can affect the performance of the catalyst. Heteroatoms other than N, such as S and P, can also be used to modulate the electron configuration, since these atoms have different electronegativities and electron configurations. References Electrochemistry Cellular respiration Oxygen Water Hydrogen peroxide
Oxygen reduction reaction
[ "Chemistry", "Biology", "Environmental_science" ]
973
[ "Hydrology", "Cellular respiration", "Water", "Electrochemistry", "Biochemistry", "Metabolism" ]
66,518,057
https://en.wikipedia.org/wiki/Open%20flow%20microperfusion
Open flow microperfusion (OFM) is a sampling method for clinical and preclinical drug development studies and biomarker research. OFM is designed for continuous sampling of analytes from the interstitial fluid (ISF) of various tissues. It provides direct access to the ISF by insertion of a small, minimally invasive, membrane-free probe with macroscopic openings. Thus, the entire biochemical information of the ISF becomes accessible regardless of the analyte's molecular size, protein-binding properties or lipophilicity. OFM is capable of sampling lipophilic and hydrophilic compounds, protein-bound and unbound drugs, neurotransmitters, peptides and proteins, antibodies, nanoparticles and nanocarriers, enzymes and vesicles. Method The OFM probes are perfused with a physiological solution (the perfusate) which equilibrates with the ISF of the surrounding tissue. Operating flow rates range from 0.1 to 10 μL/min. OFM allows unrestricted exchange of compounds via an open structure across the open exchange area of the probe. This exchange of compounds between the probe's perfusate and the surrounding ISF is driven by convection and diffusion, and occurs non-selectively in either direction (Figure 1). The direct liquid pathway between the probe's perfusate and the surrounding fluid results in the collection of ISF samples. These samples can be collected frequently and are then subjected to bioanalytical analysis, enabling the monitoring of substance concentrations with temporal resolution during the whole sampling period. The concentric OFM probe (Figure 2) works according to the same principle: the perfusate is pumped to the tip of the OFM probe through the inner, thin lumen and exits beyond the open exchange area, where it mixes with exogenous substances present in the ISF before being withdrawn through the outer, thick lumen. History The first OFM sampling probe to be used as an alternative to microdialysis was described in an Austrian patent application filed by Falko Skrabal in 1987, where OFM was described as a device that can be implanted into the tissue of living organisms. In 1992, a US patent was filed claiming a device for determining at least one medical variable in the tissue of living organisms. A later patent by Helmut Masoner, Falko Skrabal and Helmut List also disclosed a linear type of sampling probe with macroscopic circular holes. Alternative and current OFM versions for dermal and adipose tissue application were developed by Joanneum Research, and were patented by Manfred Bodenlenz et al. Alternative materials featuring low absorption were used to enable the manufacturing of probes with diameters of 0.55 mm and exchange areas of 15 mm in length. For cerebral application, special OFM probes were patented by Birngruber et al. Additionally, a patent was filed covering the fluid handling of the ISF, using a portable peristaltic pump with a flow range of 0.1 to 10 μL/min that enables operation of up to three probes per pump. OFM System Two types of OFM probes are currently available: linear OFM probes for implantation into superficial tissues such as skin (dermal OFM, dOFM) and subcutaneous adipose tissue (adipose OFM, aOFM), and concentric probes for implantation into various regions of the brain (cerebral OFM, cOFM). Areas of application OFM is routinely applied in pharmaceutical research, in preclinical studies (e.g. mice, rats, pigs, primates) and in clinical studies in humans (Figure 3).
OFM-related procedures such as probe insertions or prolonged sampling with numerous probes are well tolerated by the subjects. Dermal OFM (dOFM) dOFM (Figure 4) allows investigation of the transport of drugs in the dermis and their penetration into the dermis after local, topical or systemic application, and dOFM is mentioned by the U.S. Food and Drug Administration as a new method for the assessment of bioequivalence of topical drugs. dOFM is used to: conduct tissue-specific pharmacokinetic (PK) and pharmacodynamic (PD) studies of drugs; perform head-to-head comparisons of novel topical drug formulations; assess dermal bioavailability; and investigate high-molecular-weight compounds, e.g. antibodies. Head-to-head settings with OFM have proven particularly useful for the evaluation of topical generic products, which need to demonstrate bioequivalence to the reference listed drug product to obtain market approval. Applications of dOFM include ex vivo studies with tissue explants and preclinical and clinical in vivo studies. Adipose OFM (aOFM) aOFM (Figure 4) allows continuous on-line monitoring of metabolic processes in the subcutaneous adipose tissue, e.g. glucose and lactate, as well as larger analytes such as insulin (5.9 kDa). The role of polypeptides in metabolic signaling (leptin, the cytokine IL-6, TNFα) has also been studied with aOFM. aOFM allows the quantification of proteins (e.g. albumin, 68 kDa) in adipose tissue and thus opens up the possibility of investigating protein-bound drugs directly in peripheral target tissues, such as highly protein-bound insulin analogues designed for a prolonged, retarded insulin action. Most recently, aOFM has been used to sample agonists to study obesity, lipid metabolism and immune-inflammation. Applications of aOFM include ex vivo studies with tissue explants and preclinical and clinical in vivo studies. Cerebral OFM (cOFM) cOFM (Figure 5) is used to conduct preclinical PK/PD studies in the animal brain. Access to the brain includes monitoring of blood-brain barrier function and drug transport across the intact blood-brain barrier. cOFM thus allows a look behind the blood-brain barrier, assessing concentrations and effects of neuroactive substances directly in the targeted brain tissue. The blood-brain barrier is a natural shield that protects the brain and limits the exchange of nutrients, metabolites and chemical messengers between blood and brain. The blood-brain barrier also prevents potentially harmful substances from entering and damaging the brain. However, this highly effective barrier also prevents neuroactive substances from reaching appropriate targets. For researchers who develop neuroactive drugs, it is therefore of major interest to know whether, and to what extent, an active pharmaceutical component can pass the blood-brain barrier. Experiments have shown that the blood-brain barrier is fully re-established 15 days after implantation of the cOFM probe in the brain of rats. The cOFM probe has been specially designed to avoid reopening the blood-brain barrier or causing additional trauma to the brain after implantation. cOFM enables continuous sampling of cerebral ISF with an intact blood-brain barrier and thus allows continuous PK monitoring in brain tissue. Quantification of ISF compounds ISF compounds can be quantified either indirectly, from diluted ISF samples collected with OFM plus additional calibration techniques, or directly, from undiluted ISF samples collected with additional OFM methods.
Quantification of compounds from diluted ISF samples requires the additional application of calibration methods such as Zero Flow Rate, No Net Flux, or Ionic Reference. Zero Flow Rate has been used in combination with dOFM by Schaupp et al. to quantify potassium, sodium and glucose in adipose ISF samples. No Net Flux has been applied to quantify several analytes in OFM studies in subcutaneous adipose, muscle and dermal ISF: the absolute lactate and glucose concentrations in adipose ISF, the absolute albumin concentration in muscle ISF, and the absolute insulin concentration in adipose and muscle ISF have been successfully determined. Dragatin et al. used No Net Flux in combination with dOFM to assess the absolute ISF concentration of a fully human therapeutic antibody. Ionic Reference has been used in combination with OFM to assess the absolute glucose and lactate concentrations in adipose ISF. Dermal OFM has also been used to quantify the concentrations of human insulin and an insulin analogue in the ISF, with inulin as an exogenous marker. Additional OFM methods, such as OFM recirculation and OFM suction, can collect undiluted ISF samples from which direct and absolute quantification of compounds is feasible. OFM recirculation collects undiluted ISF samples by recirculating the perfusate in a closed loop until equilibrium concentrations between perfusate and ISF are established; using albumin as the analyte, 20 recirculation cycles were sufficient to reach equilibrium ISF concentrations. OFM suction is performed by applying a mild vacuum, which pulls ISF from the tissue into the OFM probe. References External links Joanneum Research - HEALTH – Institute for Biomedicine and Health Sciences Medical research Biochemistry methods Cell biology Membrane technology Pharmacokinetics Pharmacodynamics Pharmacy Life sciences industry Drug discovery Clinical research Health research Food and Drug Administration drug development Medical devices Catheters
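As a minimal sketch of the No Net Flux calibration described above (the concentrations are illustrative numbers, not data from the cited studies; NumPy is assumed):

```python
import numpy as np

# No Net Flux calibration: the probe is perfused with several known analyte
# concentrations c_in; the net exchange (c_out - c_in) is approximately
# linear in c_in and crosses zero where c_in equals the true interstitial-
# fluid (ISF) concentration, so the x-intercept of a linear fit estimates it.
c_in = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # perfusate concentrations (a.u.)
c_out = np.array([1.9, 2.9, 3.8, 4.9, 5.9])  # measured outflow concentrations

slope, intercept = np.polyfit(c_in, c_out - c_in, 1)
c_isf = -intercept / slope  # x-intercept of the net-flux line
print(f"estimated ISF concentration: {c_isf:.2f} (a.u.)")
```

At the zero crossing there is, by construction, no net flux of analyte in either direction, which is what gives the method its name.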
Open flow microperfusion
[ "Chemistry", "Biology" ]
1,941
[ "Biochemistry methods", "Pharmacology", "Cell biology", "Life sciences industry", "Drug discovery", "Separation processes", "Pharmacokinetics", "Pharmacodynamics", "Pharmacy", "Membrane technology", "Medical devices", "Medicinal chemistry", "Biochemistry", "Medical technology" ]
66,521,027
https://en.wikipedia.org/wiki/Vanadyl%20ribonucleoside
Vanadyl ribonucleoside is a transition-state analog of ribonucleic acid and a potent inhibitor of many species of ribonuclease, formed from a vanadium coordination complex and one ribonucleoside. Vanadium's [Ar] 3d3 4s2 electron configuration allows it to make five sigma bonds and two pi bonds with adjacent atoms. History RNA is notoriously unstable and vulnerable to ribonucleases, which has been an obstacle to the production and analysis of the cellular transcriptome. First referenced by Berger et al., the substance was used to prevent the digestion of RNA during isolation from white blood cells, and was rapidly adopted for purposes such as the acquisition of RNA from green beans. Production Vanadyl ribonucleoside is produced by combining vanadyl sulphate with various ribonucleosides (such as guanosine) in a 1:10 molar ratio. Use Vanadyl ribonucleoside, along with other RNase inhibitors, has been a staple of molecular biochemistry since its introduction, as it stabilizes RNA during storage and use. References Ribonucleases Molecular biology
Vanadyl ribonucleoside
[ "Chemistry", "Biology" ]
251
[ "Biochemistry", "Molecular biology" ]
66,521,359
https://en.wikipedia.org/wiki/Cobalt%20ferrite
Cobalt ferrite is a semi-hard ferrite with the chemical formula CoFe2O4 (CoO·Fe2O3). The substance sits between soft and hard magnetic materials and is usually classified as semi-hard. Uses It is mainly used for magnetostrictive applications such as sensors and actuators, thanks to its high saturation magnetostriction (~200 ppm). CoFe2O4 also has the benefit of being rare-earth-free, which makes it a good substitute for Terfenol-D. Moreover, its magnetostrictive properties can be tuned by inducing a uniaxial magnetic anisotropy. This can be done by magnetic annealing, magnetic-field-assisted compaction, or reaction under uniaxial pressure. The last approach has the advantage of being ultrafast (20 min) thanks to the use of spark plasma sintering. The induced magnetic anisotropy in cobalt ferrite is also beneficial for enhancing the magnetoelectric effect in composites. Cobalt ferrite can also be used as an electrocatalyst for the oxygen evolution reaction and as a material for fabricating electrodes for electrochemical capacitors (also named supercapacitors) for energy storage. These uses take advantage of the redox reactions occurring at the surface of the ferrite. Cobalt ferrite prepared with controlled morphology and size, to enhance the surface area and thus the number of active sites, has been reported. One disadvantage of cobalt ferrite for some applications is its low electrical conductivity. Nanostructures of cobalt ferrite with different shapes can be synthesized on conducting substrates, such as reduced graphene oxide, to alleviate this disadvantage. See also Ferrite References Ceramic materials Ferromagnetic materials Ferrites Cobalt(II) compounds Iron(III) compounds
Cobalt ferrite
[ "Physics", "Engineering" ]
386
[ "Materials stubs", "Ferromagnetic materials", "Materials", "Ceramic materials", "Ceramic engineering", "Matter" ]
68,034,443
https://en.wikipedia.org/wiki/Mark%20Stockman
Mark Stockman (born Mark Ilyich Shtokman; July 21, 1947 – November 11, 2020) was a Soviet-born American physicist. He was a professor of physics and astronomy at Georgia State University. Best known for his contributions to plasmonics, Stockman co-theorized the plasmonic laser, also known as the spaser, in 2003. Biography Stockman was born on July 21, 1947, in Kharkiv, Ukrainian Soviet Socialist Republic, to a Jewish family with Cantonist roots. His father, Ilya Stockman, was a mining engineer and a World War II veteran. After his father became a faculty member at the Dnepropetrovsk Higher Mining School, his family relocated to Dnepropetrovsk. Stockman completed his secondary education at the Republican Specialized Physics and Mathematics Boarding School in Kyiv. Following his graduation, he enrolled at Kyiv State University to study physics; he subsequently transferred to Novosibirsk State University, where he obtained his Master of Science in 1970. He completed his PhD at the Institute of Nuclear Physics in Novosibirsk in 1974 under the supervision of Spartak Belyaev and Vladimir Zelelevinsky. His doctoral research concentrated on nuclear physics. During his PhD, he met and married Branislava Mezger and had a son, Dmitriy, in 1978. No longer interested in nuclear physics, Stockman transferred to the Institute of Automation and Electrometry in Novosibirsk following his PhD and started working on nonlinear optics as a research scientist under the supervision of Sergey Rautian. He habilitated in 1989, obtaining his DSc. In 1990, at the invitation of Thomas F. George, he left for the United States with his family to take a research position at the University at Buffalo in Buffalo, New York. Meanwhile, he held a visiting position at Washington State University. In 1996, he relocated to Atlanta, Georgia to join the faculty of Georgia State University, where he obtained his full professorship in 2001. Stockman also held visiting positions at the Max Planck Institute of Quantum Optics, the University of Stuttgart, École normale supérieure Paris-Saclay and ESPCI Paris. In 2012, Stockman founded the Center for Nano Optics at Georgia State University. He was a fellow of the American Physical Society, The Optical Society and SPIE. He died on November 11, 2020, in Atlanta. Research Stockman's research concentrated on the field of nanoplasmonics; he was described as a pivotal figure in the research field. In 2003, alongside David J. Bergman, he theorized the plasmonic lasers known as "spasers," which involve the stimulated emission of localized surface plasmons via metallic nanoparticles instead of conventional optical cavities. They coined the acronym spaser for "surface plasmon amplification by stimulated emission of radiation." Stockman further developed the theory of spasers for optical amplification and ultrashort pulse generation. His research also focused on plasmonic hotspots in nanostructures and nanofocusing in adiabatic tapers, as well as ultrafast active plasmonics.
Selected publications References Notes External links 1947 births 2020 deaths Scientists from Kharkiv 20th-century American physicists 21st-century American physicists Jewish American physicists Georgia State University faculty Optical physicists American nanotechnologists Fellows of Optica (society) Fellows of the American Physical Society Soviet physicists Soviet Jews Soviet emigrants to the United States Jewish Ukrainian scientists Jewish Russian physicists American people of Ukrainian-Jewish descent American people of Russian-Jewish descent Taras Shevchenko National University of Kyiv alumni Novosibirsk State University alumni American condensed matter physicists People from Atlanta Scientists from Georgia (U.S. state) Naturalized citizens of the United States Metamaterials scientists Laser researchers Fellows of SPIE 21st-century American Jews 20th-century American Jews
Mark Stockman
[ "Materials_science" ]
808
[ "Metamaterials scientists", "Metamaterials" ]
68,040,608
https://en.wikipedia.org/wiki/Regenerative%20medicine%20advanced%20therapy
Regenerative Medicine Advanced Therapy (RMAT) is a designation given by the Food and Drug Administration to drug candidates intended to treat serious or life-threatening conditions under the 21st Century Cures Act. A RMAT designation allows for accelerated approval based on surrogate or intermediate endpoints. RMAT goes beyond breakthrough therapy features by allowing for accelerated approval of drugs based on surrogate endpoints. A surrogate endpoint is a biomarker that substitutes for a direct endpoint, such as clinical benefit. Legal background Section 3033 of the 21st Century Cures Act introduces section 506(g) of the Federal Food, Drug, and Cosmetic Act (FD&C Act), which allows for the designation of certain therapies as a 'regenerative medicine advanced therapy' (RMAT). Qualifying criteria In order to qualify for RMAT status, a treatment must meet the definition of a regenerative medicine therapy; intend to treat, modify, reverse or cure a serious condition; and be supported by preliminary clinical evidence indicating that the RMAT candidate can address the clinical need. A regenerative medicine therapy is defined in section 506(g)(8) of the FD&C Act to include cell therapies, therapeutic tissue engineering, and human cell and tissue products. Under the FDA's interpretation, gene therapies and genetically modified cells that have a lasting effect, such as CAR-T antitumor therapies, may also qualify as regenerative medicine therapies. Effect A RMAT designation includes all benefits of the Fast Track and breakthrough therapy designations. In addition, it opens up early interactions between the FDA and sponsors to facilitate accelerated approval. In this context, accelerated approval means approval based on previously agreed-upon surrogate or intermediate endpoints, or on data from a limited but meaningful number of sites. The ability to use 'Real World Evidence' (RWE), i.e. post-market evidence of safety and effectiveness, is particularly useful in the context of orphan diseases, where recruiting a sufficiently large cohort for pre-marketing clinical trials may not be feasible. RWE may include data from patient registries, clinical records and case studies. Where a RMAT's sponsor fails to comply with the requirements for accelerated approval, the RMAT designation and the benefits conferred by it can be withdrawn. Examples Statistics In 2020, the FDA received 34 requests for RMAT status, of which 12 (35.3%) were granted. RMAT-designated drugs include the novel CAR-T therapy Kymriah and betibeglogene autotemcel for beta thalassemia. As of 31 March 2021, 62 requests for RMAT status have been granted. More than half of the RMAT applications received by March 2019 involved autologous or allogeneic cell therapy products, including CAR-T therapies. See also Breakthrough therapy Advanced Therapy Medicinal Product (European Medicines Agency equivalent) Sakigake (Japanese equivalent) Orphan drug References Pharmacy Pharmaceuticals policy Food and Drug Administration
Regenerative medicine advanced therapy
[ "Chemistry" ]
618
[ "Pharmacology", "Pharmacy" ]
68,042,075
https://en.wikipedia.org/wiki/Yttrium%20oxalate
Yttrium oxalate is an inorganic compound, a salt of yttrium and oxalic acid, with the chemical formula Y2(C2O4)3. The compound does not dissolve in water and forms crystalline hydrates (colorless crystals). Synthesis It can be prepared by precipitation of soluble yttrium salts with oxalic acid. Properties Yttrium oxalate is highly insoluble in water and converts to the oxide when heated. Yttrium oxalate forms crystalline hydrates (colorless crystals) with the formula Y2(C2O4)3·nH2O, where n = 4, 9, and 10, and decomposes when heated. The solubility product of yttrium oxalate at 25 °C is 5.1 × 10−30. The trihydrate Y2(C2O4)3·3H2O is formed by heating more hydrated varieties at 110 °C. The dihydrate Y2(C2O4)3·2H2O, which is formed by heating the decahydrate at 210 °C, forms monoclinic crystals with unit cell dimensions a = 9.3811 Å, b = 11.638 Å, c = 5.9726 Å, β = 96.079°. Related Several yttrium oxalate double salts are known containing additional cations. A mixed-anion compound with carbonate is also known. References Inorganic compounds Yttrium compounds Oxalates
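From the solubility product quoted above, an ideal molar solubility can be estimated: dissolution gives 2 Y3+ and 3 C2O4^2− per formula unit, so Ksp = (2s)^2 (3s)^3 = 108 s^5. The short sketch below evaluates this; it ignores complexation and hydrolysis, so it is only a first approximation:

```python
# Ideal molar solubility of Y2(C2O4)3 from its solubility product.
# Dissolution: Y2(C2O4)3 -> 2 Y^3+ + 3 C2O4^2-, so Ksp = (2s)^2 * (3s)^3 = 108 s^5.
ksp = 5.1e-30               # solubility product at 25 degrees C (from the text)
s = (ksp / 108) ** (1 / 5)  # molar solubility, mol/L
print(f"ideal molar solubility: {s:.2e} mol/L")  # on the order of 5e-7 mol/L
```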
Yttrium oxalate
[ "Chemistry" ]
278
[ "Inorganic compounds" ]
72,429,750
https://en.wikipedia.org/wiki/Laura%20I.%20Gomez
Laura I. Gómez is a computer scientist known for establishing Atipica, a company that presents bias-free candidate names in recruiting. Early life and education Gómez was born in León, Guanajuato, México and moved to California when she was eight years old. Gómez got her first software engineering internship at the age of seventeen, working at Hewlett-Packard after she received a work permit. For college, she earned a Bachelor of Human Development and Family Studies from the University of California Berkeley and a Master of Latin American Studies from the University of California San Diego. Career Gómez worked with several start-ups and big technology companies, including YouTube, Google, and Twitter. She was one of the early employees at Twitter, and her work there centered on bringing Spanish into the user interface. Gómez has also discussed the use of social media as a means of practice as people learn a new language. Gómez was a founding member of Project Include, a non-profit led by Ellen Pao that advocates for inclusion in the technology field. Project Include funded Gómez's start-up, Atipica, an organization which provides artificial and human intelligence to sort job candidates in a manner that reduces bias. Over time, Atipica was backed by Kapor Capital, Precursor Ventures, and True Ventures. One of the perks provided by Atipica is paid time off for employees supporting a political cause. The funding Gómez raised for Atipica was the largest financing round for a Latinx founder in Silicon Valley. As of 2023, Gómez was working on Proyecto Solace, a mental health initiative for Latinx peoples. Awards and honors Gómez was recognized by the Department of State and former Secretary of State Hillary Clinton for her work in the TechWomen Program. References Living people Computer scientists University of California, Berkeley alumni University of California, San Diego alumni Year of birth missing (living people)
Laura I. Gomez
[ "Technology" ]
391
[ "Computer science", "Computer scientists" ]
72,430,683
https://en.wikipedia.org/wiki/Main-group%20element-mediated%20activation%20of%20dinitrogen
Main-group element-mediated activation of dinitrogen is N2 activation facilitated by reactive main-group-element-centered molecules (e.g., the low-valent main-group metal calcium, dicoordinate borylenes, boron radicals, carbenes, etc.). Background Dinitrogen fixation is essential for human life. Currently, industry uses the Haber–Bosch process to convert N2 and H2 to NH3, based on metal catalysis under very high pressure and temperature. Alternative strategies that realize the transformation from N2 to NH3 under mild conditions are a long-standing goal in chemistry. In the past decades, a number of transition-metal species have been found to bind (and even functionalize) N2. The prevalence of transition metals in dinitrogen activation is attributed to the fact that their unoccupied and occupied d orbitals can be both energetically and symmetrically accessible, to accept electron density from N2 and back-donate to it. Nevertheless, the development of low-valent, low-coordinate main-group compounds that mimic the electronic properties of transition metals provides more opportunities to unearth N2 activation by main-group elements. Lithium can also react with N2 at room temperature to give the isolable product Li3N. However, it is only recently that controllable, stepwise N2 activation by main-group elements began to thrive, especially cases in which the key intermediates were structurally well characterized and even isolated. N2 activation by calcium In 2021, Harder et al. achieved dinitrogen activation by a low-valent calcium complex, generated by the reduction of a calcium(II) complex, [CaI(BDI)]2. In the presence of THF, the reduction of [CaI(BDI)]2 with K/KI afforded red-brown crystals. Single-crystal X-ray analysis revealed a centrosymmetric dimer with terminal BDI ligands and a side-on bridging N2 unit. The N–N distance in the complex (1.258(3) and 1.268(3) Å) is remarkably longer than the dinitrogen triple bond (1.098 Å) and is consistent with the N=N double-bond character of N2^2−. The N2^2− anion could also be protonated to diazene (N2H2), with intramolecular deprotonation of THF under heating. N2 activation by boron A dicoordinate borylene has a filled p orbital and an empty sp-hybridized orbital of appropriate symmetry to interact with inert small molecules like dinitrogen. In 2018, Braunschweig et al. reported nitrogen fixation and reduction by an active borylene species. [(CAAC)BDurBr2] smoothly undergoes one-electron reduction with a limited amount of KC8 (1.5 equiv.) to afford a radical complex, [(CAAC)BDurBr]·. The radical complex could be further reduced, forming a transient dicoordinate borylene species with the ability to activate dinitrogen. The filled p orbital of the borylene, acting as a Lewis base, donates into the π* antibonding orbital of N2; the empty sp2 orbital, acting as a Lewis acid, accepts electrons from N2 through σ donation. Following further reduction by KC8 and stabilization by another borylene molecule, the dipotassium complex {[(CAAC)DurB]2(μ2-N2K2)} was formed as a crystalline solid. Exposure of the dipotassium complex to ambient air and distilled water leads to the formation of the dinitrogen bis(borylene) compound {[(CAAC)DurB]2(μ2-N2)} and a paramagnetic diradical complex {[(CAAC)DurB]2(μ2-N2H2)}.
Further protonation and reduction of {[(CAAC)DurB]2(μ2-N2H2)} cleaves the central N–N bond, finally yielding ammonium chloride in a one-pot reaction. Repeating the same reaction but replacing the Dur (2,3,5,6-tetramethylphenyl) group with the bulkier Tip (2,4,6-triisopropylphenyl) group gave a very different result: after the dinitrogen was coordinated by the first borylene molecule, coordination by a second borylene molecule was considerably hindered by steric repulsion in the case of the bulkier 4-Tip. Instead, reductive dimerization of the transient borylene [(CAAC)BTip] occurs in the presence of extra KC8, affording the complex {[(CAAC)-TipB]2(μ2-N4K2)}, a product of the catenation of two N2 molecules, forming an N4 chain. Notably, this kind of coupling reaction has never been found in transition-metal-mediated N2 activation processes. For borylene molecules, the two-electron-filled p orbital and vacant sp2 orbital provide two push–pull channels to activate dinitrogen. Similarly, for boron radicals, a one-electron-filled p orbital and a vacant sp2 orbital provide two channels to activate N2. In 2022, Mézailles et al. reported N2 activation by in situ generated boron-centered radicals. Though the key intermediate that activates N2 is unclear, DFT calculations suggested that the coordination of N2 occurs prior to the second chloride elimination. Following further reduction and coordination at boron, N2 was finally reduced to its lowest oxidation state, and a mixture of two borylamine compounds, N(BCy2)3 and NH(BCy2)2, was generated. N2 activation by carbon Carbene species have also been considered good candidates to activate N2. The decomposition of diazoalkanes with release of N2 is one of the most widely used strategies to produce carbenes; its reverse reaction can be considered the activation of N2 by carbenes. For example, in 1992, Dailey et al. reported that the photolysis of 3-bromo-3-(trifluoromethyl)diazirine in an argon matrix affords bromo(trifluoromethyl)carbene, which could re-bind N2 photochemically in the matrix to form the corresponding diazo compound. References Chemical reactions
Main-group element-mediated activation of dinitrogen
[ "Chemistry" ]
1,398
[ "nan" ]
72,439,008
https://en.wikipedia.org/wiki/Membrane-mediated%20anesthesia
Membrane-mediated anesthesia or anaesthesia (UK) is a mechanism of action in which an anesthetic agent exerts its pharmacological effects primarily through interaction with the lipid bilayer membrane. The relationship between volatile (inhalable) general anesthetics and the cellular lipid membrane has been well established since around 1900, based on the Meyer-Overton Correlation. Since 1900 there have been extensive research efforts to characterize these membrane-mediated effects of anesthesia, leading to many theories but few answers. During the 1980s the focus of anesthetic research shifted from membrane lipids to membrane proteins, where it currently remains. Accordingly, the specific membrane-mediated anesthetic effects remain mostly undiscovered. Recent research has demonstrated promising mechanisms of membrane-mediated anesthetic action for both general and local anesthetics. These studies suggest that the anesthetic binding site in the membrane is within ordered lipids. This binding disrupts the function of the ordered lipids, forming lipid rafts that dislodge a membrane-bound phospholipase involved in a metabolic pathway that activates anesthetic-sensitive potassium channels. Other recent studies show similar lipid-raft-specific anesthetic effects on sodium channels. See Theories of general anaesthetic action for a broader discussion of purely theoretical mechanisms. The Meyer-Overton Correlation for Anesthetics At the turn of the twentieth century, one of the most important anesthetic-based theories began to take shape. At the time, the research of both German pharmacologist Hans Horst Meyer (1899) and British-Swedish physiologist Charles Ernest Overton (1901) reached the same conclusion about general anesthetics and lipids: there is a direct correlation between anesthetic potency and lipid solubility. The more lipophilic the anesthetic agent is, the more potent it is. This principle became known as the Meyer-Overton Correlation. It originally compared the anesthetic partition coefficient in olive oil (X-axis) to the effective dose that induced anesthesia in 50% (i.e., EC50) of the tadpole research subjects (Y-axis). Modern renditions of the Meyer-Overton plot usually compare the olive oil partition coefficient of the inhalational or intravenous drug (X-axis) to the minimum alveolar concentration (MAC) or the effective dose 50 (i.e., ED50) of the anesthetic agent (Y-axis), as in the numerical sketch below. Despite more than 175 years of anesthetic use and research, the exact connection between phospholipids, the bilayer membrane, and general anesthetic agents remains mostly unknown. Accordingly, the means of membrane-mediated anesthesia remain mostly theoretical.
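As a minimal numerical sketch of the correlation (added here for illustration, not from the sources cited): the oil/gas partition coefficients and MAC values below are approximate textbook figures, and NumPy is assumed.

```python
import numpy as np

# Meyer-Overton correlation: anesthetic potency is roughly inversely
# proportional to lipid solubility, i.e. MAC * P is roughly constant.
# Approximate textbook values, for illustration only.
agents = {                 # name: (oil/gas partition coefficient, MAC in atm)
    "nitrous oxide": (1.4, 1.04),
    "isoflurane": (98.0, 0.0114),
    "halothane": (224.0, 0.0077),
}
for name, (p, mac) in agents.items():
    print(f"{name:14s} MAC * P = {mac * p:.2f} atm")

# On log-log axes the correlation is approximately a line of slope -1.
p = np.array([v[0] for v in agents.values()])
mac = np.array([v[1] for v in agents.values()])
slope, _ = np.polyfit(np.log10(p), np.log10(mac), 1)
print(f"fitted log-log slope: {slope:.2f} (Meyer-Overton predicts about -1)")
```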
The Lateral Pressure Profile Theory The Lateral Pressure Profile theory suggests that anesthetic agents partition into the lipid bilayer, increasing the horizontal (lateral) pressure on proteins embedded in the membrane. The added pressure causes a conformational change in protein structure, forcing the neuronal channel into an open or closed state (e.g., hyperpolarization) that generates the inhibitory state of general anesthesia in the central nervous system (CNS). This was the first hypothesis to explain the correlation of anesthetic potency with lipid bilayer structural characteristics, describing both a mechanistic and a thermodynamic rationale for the effects of general anesthesia. General anesthetics Inhaled anesthetics partition into the membrane and disrupt the function of ordered lipids. Membranes, like proteins, are composed of ordered and disordered regions. The ordered region of the membrane contains a palmitate binding site that drives the association of palmitoylated proteins with clusters of GM1 lipids (sometimes referred to as lipid rafts). Palmitate's binding to lipid rafts regulates the affinity of most proteins for lipid rafts. Inhaled anesthetics partition into the lipid membrane and disrupt the binding of palmitate to GM1 lipids (see figure). The anesthetic binds to a specific palmitate site nonspecifically. The clusters of GM1 lipids persist, but they lose their ability to bind palmitoylated proteins. PLD2 Phospholipase D2 (PLD2) is a palmitoylated protein that is activated by substrate presentation. Anesthetics cause PLD2 to move from GM1 lipids, where it lacks access to its substrate, to a PIP2 domain with abundant PLD2 substrate. Animals with genetically depleted PLD2 were significantly resistant to anesthetics. The anesthetics xenon, chloroform, isoflurane, and propofol all activate PLD in cultured cells. TREK-1 The TWIK-related potassium channel (TREK-1) is localized to ordered lipids through its interaction with PLD2. Displacement of the complex from GM1 lipids causes the complex to move to PIP2 clusters. The product of PLD2, phosphatidic acid (PA), directly activates TREK-1. The anesthetic sensitivity of TREK-1 was shown to act through PLD2, and the sensitivity could be transferred to TRAAK, an otherwise anesthetic-insensitive channel. GABAAR The membrane-mediated mechanism is still being investigated. Nonetheless, the GABAAR gamma subunit is palmitoylated and the alpha subunit binds to PIP2. When the agonist GABA binds to GABAAR, it causes a translocation to thin lipids near PIP2. Anesthetic disruption of palmitate-mediated localization should therefore cause the channel to move as it does in response to an agonist, but this has not yet been confirmed. Endocytosis Endocytosis helps regulate the time an ion channel spends on the surface of the membrane. GM1 lipids are the site of endocytosis. The anesthetics hydroxychloroquine, tetracaine, and lidocaine blocked entry of a palmitoylated protein into the endocytic pathway. By blocking access to GM1 lipids, anesthetics block access to endocytosis through a membrane-mediated mechanism. Local anesthetics Local anesthetics disrupt ordered lipid domains, and this can cause PLD2 to leave a lipid raft. They also disrupt protein interactions with PIP2. History More than 100 years ago, a unifying theory of anesthesia was proposed based on the oil partition coefficient. In the 1970s this concept was extended to the disruption of lipid partitioning. Partitioning itself is an integral part of forming the ordered domains in the membrane, and the proposed mechanism is very close to current thinking, but the partitioning itself is not the target of the anesthetics. At clinical concentrations, the anesthetics do not inhibit lipid partitioning; rather, they inhibit the order within the partition and/or compete for the palmitate binding site. Nonetheless, several of the early conceptual ideas about how disruption of lipid partitioning could affect an ion channel have merit. References Anesthesia Biological matter Drugs Drugs with unknown mechanisms of action GABAA receptor positive allosteric modulators General anesthetics History of anesthesia Local anesthetics Membrane biology NMDA receptor antagonists
Membrane-mediated anesthesia
[ "Chemistry" ]
1,528
[ "Pharmacology", "Products of chemical industry", "Membrane biology", "Molecular biology", "Chemicals in medicine", "Drugs" ]
72,440,309
https://en.wikipedia.org/wiki/Littrow%20expansion
Littrow expansion and its counterpart Littrow compression are optical effects associated with slitless imaging spectrographs. These effects are named after Austrian physicist Otto von Littrow. In a slitless imaging spectrograph, light is focused with a conventional optical system, which includes a transmission or reflection grating as in a conventional spectrograph. This disperses the light, according to wavelength, in one direction; but no slit is interposed into the beam. For pointlike objects (such as distant stars) this results in a spectrum on the focal plane of the instrument for each imaged object. For distributed objects with emission-line spectra (such as the Sun in extreme ultraviolet), it results in an image of the object at each wavelength of interest, overlapping on the focal plane, as in a spectroheliograph. Description The Littrow expansion/compression effect is an anamorphic distortion of the single-wavelength image on the focal plane of the instrument, due to a geometric effect surrounding reflection or transmission at the grating. In particular, the angles of incidence and reflection from a flat mirror, measured from the direction normal to the mirror, have the relation θr = −θi, which implies dθr/dθi = −1, so that an image encoded in the angle of collimated light is reversed but not distorted by the reflection. In a spectrograph, the angle of reflection in the dispersed direction depends in a more complicated way on the angle of incidence: sin θr = nλ/D − sin θi, where n is an integer representing the spectral order, λ is the wavelength of interest, and D is the line spacing of the grating. Because the sine function (and its inverse) is nonlinear, this in general means that |dθr/dθi| = cos θi / cos θr ≠ 1 for most values of θi and λ, yielding anamorphic distortion of the spectral image at each wavelength. When the magnitude is larger than unity, images are expanded in the spectral direction; when it is smaller, they are compressed. For the special case where the reflected ray exits the grating exactly back along the incident ray, θr = θi and |dθr/dθi| = 1; this is the Littrow configuration, and the specific angle for which this configuration holds is the Littrow angle. This configuration preserves the image aspect ratio in the reflected beam. All other incidence angles yield either Littrow expansion or Littrow compression of the collimated image. References Spectroscopy
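A minimal numerical sketch of the effect, using the sign convention of the grating equation above; the grating pitch, order, and wavelength are arbitrary illustrative choices, and NumPy is assumed:

```python
import numpy as np

# Anamorphic expansion/compression factor for a slitless spectrograph.
# Grating equation (reflection): sin(theta_r) = n*lam/D - sin(theta_i), so
# |d(theta_r)/d(theta_i)| = cos(theta_i)/cos(theta_r) scales the image in
# the dispersed direction.
n = 1              # spectral order
lam = 500e-9       # wavelength, m
D = 1e-3 / 600     # line spacing of a 600 line/mm grating, m

theta_littrow = np.arcsin(n * lam / (2 * D))  # retroreflection condition
print(f"Littrow angle: {np.degrees(theta_littrow):.2f} deg")

for ti_deg in (5.0, np.degrees(theta_littrow), 20.0):
    ti = np.radians(ti_deg)
    tr = np.arcsin(n * lam / D - np.sin(ti))
    factor = np.cos(ti) / np.cos(tr)  # >1 expansion, <1 compression, =1 at Littrow
    print(f"theta_i = {ti_deg:5.2f} deg -> scale factor {factor:.3f}")
```

Running this shows a factor above unity below the Littrow angle (expansion), exactly unity at the Littrow angle, and below unity beyond it (compression).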
Littrow expansion
[ "Physics", "Chemistry" ]
455
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
72,442,626
https://en.wikipedia.org/wiki/Iron%20superoxide%20dismutase
Iron superoxide dismutase (FeSOD) is a metalloenzyme that belongs to the superoxide dismutase family of enzymes. Like other superoxide dismutases, it catalyses the dismutation of superoxide into diatomic oxygen and hydrogen peroxide. Found primarily in prokaryotes such as Escherichia coli and present in all strict anaerobes, FeSOD has also been isolated from eukaryotes, such as Vigna unguiculata. Found within the cytosol, mitochondria, and chloroplasts, FeSOD's ability to disproportionate superoxide provides cells with protection against oxidative stress and against superoxide-producing processes such as photosynthesis. It is important for organisms to disproportionate superoxide because, although superoxide itself is not particularly harmful, it has the potential to turn into a hydroxyl radical, which cannot be eliminated in an enzymatic reaction. History FeSOD was first isolated from E. coli by Yost et al. in 1973 and was the third discovery in the family of bacterial superoxide dismutases, with copper-zinc superoxide dismutase discovered in 1969 and FeSOD's structural equivalent, manganese superoxide dismutase (MnSOD), discovered in 1970. The fourth, nickel superoxide dismutase, was first isolated in 1996. Along with being one of the oldest enzymes known, FeSOD is the oldest known superoxide dismutase, owing to the high bioavailability of iron during the Archean eon. FeSOD first appeared in photoferrotrophic bacteria, and later in cyanobacteria, as the Great Oxidation Event locked up much of the free iron in iron oxides and increased the need for cyanobacteria to have defences against reactive oxygen species. Structure FeSOD is a structural homolog of MnSOD, although there are minor differences in eukaryotic FeSOD, such as a loop connecting the β1 and β2 strands within the enzyme. FeSOD can exist in homodimeric or homotetrameric forms, depending on the organism. Mechanism Like its structural homolog MnSOD, FeSOD disproportionates superoxide via the transport of a single electron by the Fe2+/Fe3+ redox couple. There are two separate reactions by which FeSOD processes superoxide: Fe3+-SOD + O2− → Fe2+-SOD + O2 Fe2+-SOD + O2− + 2H+ → Fe3+-SOD + H2O2 In order for the superoxide to be disproportionated, however, it must first be protonated. The proton is believed to be delivered by an H2O ligand, the transport of which is mediated by a local glutamine from ambient water within the cell. References Metalloproteins Iron enzymes
Iron superoxide dismutase
[ "Chemistry" ]
639
[ "Metalloproteins", "Bioinorganic chemistry" ]
76,868,055
https://en.wikipedia.org/wiki/Landau%20derivative
In gas dynamics, the Landau derivative or fundamental derivative of gas dynamics, named after Lev Landau who introduced it in 1942, refers to a dimensionless physical quantity characterizing the curvature of the isentrope drawn on the specific volume versus pressure plane. Specifically, the Landau derivative is a second derivative of specific volume with respect to pressure. The derivative is commonly denoted using the symbol $\Gamma$ and is defined by $\Gamma = \frac{v^3}{2c^2}\left(\frac{\partial^2 p}{\partial v^2}\right)_s$, where $v$ is the specific volume, $p$ is the pressure, $s$ is the specific entropy, and $c = v\sqrt{-(\partial p/\partial v)_s}$ is the sound speed. Alternate representations of $\Gamma$ include $\Gamma = 1 + \frac{\rho}{c}\left(\frac{\partial c}{\partial \rho}\right)_s = \frac{c^4}{2v^3}\left(\frac{\partial^2 v}{\partial p^2}\right)_s$, where $\rho = 1/v$ is the density. For most common gases, $\Gamma > 1$, whereas abnormal substances such as the BZT fluids exhibit $\Gamma < 0$. In an isentropic process, the sound speed increases with pressure when $\Gamma > 1$; this is the case for ideal gases. Specifically for polytropic gases (ideal gas with constant specific heats), the Landau derivative is a constant and given by $\Gamma = \frac{\gamma + 1}{2}$, where $\gamma$ is the specific heat ratio. Some non-ideal gases fall in the range $0 < \Gamma < 1$, for which the sound speed decreases with pressure during an isentropic transformation. See also Landau damping References Fluid dynamics
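A short symbolic check of the polytropic result above, applying the definition of $\Gamma$ to the isentrope $pv^{\gamma} = \text{const}$ (a minimal sketch; the symbol names are arbitrary):

```python
import sympy as sp

v, gamma, k = sp.symbols('v gamma k', positive=True)

# Isentrope of a polytropic (ideal, constant-specific-heat) gas: p = k * v**(-gamma)
p = k * v**(-gamma)

# Squared sound speed: c**2 = -v**2 * (dp/dv)_s
c2 = -v**2 * sp.diff(p, v)

# Fundamental derivative: Gamma = (v**3 / (2*c**2)) * (d2p/dv2)_s
Gamma = sp.simplify(v**3 / (2 * c2) * sp.diff(p, v, 2))

print(Gamma)  # -> gamma/2 + 1/2, i.e. (gamma + 1)/2
```

The dependence on $v$ and $k$ cancels exactly, confirming that $\Gamma$ is a constant along the isentrope for a polytropic gas; for air ($\gamma = 1.4$) this gives $\Gamma = 1.2$.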
Landau derivative
[ "Chemistry", "Engineering" ]
194
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
76,874,340
https://en.wikipedia.org/wiki/Aliceevansviridae
Aliceevansviridae is a family of viruses in the class Caudoviricetes. The family was announced in 2022 and contains three genera (Brussowvirus, Moineauvirus, and Vansinderenvirus), along with several species unassigned to a genus, including SMHBZ8. Etymology The family's name, Aliceevans, is in honor of Alice Catherine Evans (1881–1975), an American microbiologist; the suffix -viridae is the standard suffix for virus families. References Virus families
Aliceevansviridae
[ "Biology" ]
117
[ "Virus stubs", "Viruses" ]
76,875,263
https://en.wikipedia.org/wiki/Honda%20V6%20hybrid%20Formula%20One%20power%20unit
The Honda RA6xxH/RBPTH hybrid power units are a series of 1.6-litre, hybrid turbocharged V6 racing engines which feature both a kinetic energy recovery (MGU-K) electric motor directly geared to the crankshaft and a heat energy recovery (MGU-H) electric motor attached via a common shaft to the turbocharger assembly. They were developed and produced by Honda Motor Company (and subsequently under their Honda Racing Corporation organisation from 2022) for use in Formula One. The engines have been in use since the 2015 Formula One season, initially run by the then newly re-established McLaren Honda works team. Over years of development, power unit output was increased from approximately 760 to over 1,000 horsepower while utilising the same amount of fuel, as mandated by the technical regulations (a fuel mass flow rate limit of 100 kg per hour). Teams that have used the engines over the years include McLaren, Scuderia Toro Rosso, Scuderia AlphaTauri, the RB Formula One Team, and Red Bull Racing. List of models *The engine development freeze was enforced from 2022 in an effort to reduce spending. Power unit performance specifications as mounted to cars in the first session running of the 2022 season were then locked in until the end of the 2025 season, and no performance upgrades could be introduced. However, manufacturers are allowed to bring reliability upgrades during the freeze. ** System output estimates are final readings at season end. History The Formula One hybrid engine regulations introduced in 2014 had enticed Honda to make a return as an engine supplier due to the advanced technical challenge and environmentally focused direction. Having planned to enter in the 2016 season in a works partnership with McLaren, then-McLaren CEO Ron Dennis pushed for Honda to fast-track their debut to the 2015 season, as McLaren's current contract with Mercedes was expiring. Honda decided to accept the early entry, believing they were well on target with their power unit concept. Having entered Formula One in the 2015 season, one year earlier than initially planned, and experiencing difficulty for the first few years with regards to performance and reliability, primarily due to underestimating the technical challenge and having been out of the Formula One world for over 7 years, the Honda V6 hybrid engine underwent a stratospheric developmental rise, going from an unreliable and underpowered design to a world-championship-winning success. Honda became the first manufacturer to win a Formula One race with two different teams in the V6 hybrid era, and set many other major constructor and driver F1 records along the way. Notably, Max Verstappen and Red Bull Racing took a record-breaking 19 wins in the 2023 Formula One World Championship, beating their own record of 15 from the year prior, which itself had beaten the previous record held by McLaren Honda in the 1980s. Power units (2015–2025) RA615H The RA615H was Honda's first design for use in the V6 hybrid F1 era, debuting in the 2015 Formula One season powering the McLaren MP4-30. It was radically different from the designs of rivals Renault, Mercedes and Ferrari, who had already debuted power units the season prior; Honda felt its best chance to make up for lost development time was to go aggressive and radical. The primary focus points of the unit, at the request of McLaren, were extremely compact dimensions and a high operating temperature capability that could function with reduced cooling requirements, so as to aid aerodynamic performance and centre of gravity targets.
The unit's turbocharger assembly was a compact but complex axial compressor arrangement with the MGU-H fitted between the turbine and compressor housings, all mounted within the vee of the engine in its entirety. This allowed for a significantly shorter engine compared to the Renault, Ferrari and Mercedes concepts, whose compressor assemblies all protruded from either end of the block in varying formats. The induction system, which includes the inlet, filters, intake plenum and variable inlet runners, was small and ornate in design to fit under the tight bodywork dimensions. This made the entire system compact, but also very complex. The exhaust manifold was a small "log" type design (where one pipe housed ports for each cylinder on either bank); this was very beneficial to packaging requirements and allowed significantly tighter bodywork. The result was an engine that proved to be by far the smallest on the field, earning the nickname "The Size-Zero" from McLaren, as it gave them the freedom to be aggressive with the body aerodynamics, resulting in extremely tight packaging and efficient bodywork, with the goal of making significant gains in this area over the competition. After debuting publicly for the first time in the MP4-30 at the 2015 pre-season test at Jerez, rumours began circulating that the engine was extremely unreliable, heavy on fuel use and significantly down on power. This became evident from the very limited amount of circuit running McLaren Honda could do as various issues with the engine appeared. It was quickly discovered that although the compact nature of the turbocharger assembly was beneficial to packaging, it was vastly undersized and as such had poor air compression capability, which resulted in significantly less power and combustion efficiency potential. The intake system, while also compact, was compromised in its ideal shape, orientation and size, which likewise proved detrimental to performance and reliability. These all had further knock-on effects on the hybrid energy side of the power unit, which had poor regeneration capability as a result. Honda's lack of experience in programming and controlling such a complex power unit made identifying specific issues difficult and time consuming. Additionally, the compact nature of the entire concept seriously hindered the thermal management of the engine and worsened its vibration severity, often causing various components (specifically the MGU-H) to overheat, with uncontrolled resonances and vibration leading to constant hybrid system failures and terminal engine damage. Honda maintained these were early teething issues, only discovered after mounting the engine to a moving chassis under racing loads for the first time, and that they would quickly be sorted out. However, as the season progressed, reliability issues and the inability to even complete a race distance became such a concern that the engines, already down on power, needed to be run in a reduced power state to lower thermal load in an attempt to increase longevity; this severely compromised overall car performance in comparison to the competition. The restrictive "development token" system F1 used at the time slowed development significantly, limiting what developments could be brought forward in season at any given time, leading to longer setbacks.
Honda themselves cited their lack of experience and data with the new regulations, fundamental issues with the "Size Zero" engine concept, and a self-admittedly premature championship entry as the reasons for the lacklustre performance. RA616H The RA616H made its racing debut in the McLaren MP4-31 in the 2016 Formula One season. This engine, still largely following the "Size Zero" concept of the RA615H, had significant developments in an effort to increase both power and reliability. After conversations with McLaren, it was agreed to allocate more space for the power unit; as such, the engine grew considerably in size. The entire induction system was reworked and raised higher; this allowed the "within-vee" compressor assembly to be raised and enlarged, which provided more power and also improved heat energy recovery for the hybrid system, although it was still restricted to mounting within the vee of the engine. The new, taller inlet plenum had more space, so larger and more refined inlet runners were developed to better feed the engine, and the arrangement could be simplified slightly, so the reliability of the variable intake system increased. The exhaust manifold was completely reworked with equal-length individual runners which were larger, but far more efficient. The MGU-K had a newer, more efficient magnet assembly which improved battery regeneration. The MGU-H also had revised magnets which were much more heat resistant, to help improve reliability. Overall, Honda had made as many changes to the engine as were allowed by the "development token" regulation still imposed by Formula One. Honda made clear that they were aware of the fundamental design constraints of the current power unit concept, but that to properly address these issues, changes would need to be made to the token system to allow increased development speed in season. The engine proved to be much more reliable and McLaren had a far better season, suffering fewer engine-related retirements and, while still lagging behind, being more competitive in terms of outright power. However, fundamental issues were still encountered: MGU-H reliability, although improved, remained far from acceptable levels for both McLaren and Honda themselves. The engine was improved further over the season to extract as much potential from the current architecture as possible, and despite reaching a relatively stable and reliable point towards the end of the season, Honda believed the architecture was at its limit and would need drastic changes to move forward. Following the issues faced by newcomer Honda and some of the other manufacturers when compared to the class-leading Mercedes units, F1 announced the scrapping of the development token system from the 2017 season onwards, allowing for significantly faster power unit development in the hope that the competition would begin to level out. RA617H The RA617H made its racing debut in the 2017 Formula One season powering the McLaren MCL32. No longer restricted by the token system, Honda moved to make a radical change. The result was the RA617H, a completely new design whose most notable change was the reworked turbocharger/MGU-H setup. The new design split the compressor and turbine housings and mounted each half on either end of the cylinder block, protruding out of the vee.
The MGU-H remained in the centre of the vee and the entire assembly was connected via one shaft. This increased the length of the engine; however, it also allowed a significant lowering of the turbo/MGU-H setup and a greatly increased compressor and turbine size (no longer physically restricted by the bank angle within the vee). This in turn allowed the induction system above to be lowered dramatically, which resulted in a much lower engine with a vastly improved centre of gravity. The induction system's design was totally overhauled, revising the orientation of the variable runners to a longitudinal position, whereas previously they were horizontally placed, opposing each other. This greatly improved the extension range over which they could operate, which improved performance across a much broader RPM range while also simplifying the system, bringing improved reliability. In discussion with McLaren, the MGU-K had its geartrain position reversed with a new structure and was now mounted further forward on the PU to better accommodate the packaging and bodywork aero design McLaren were pursuing; this also provided a weight reduction and increased reliability. The MGU-H was significantly overhauled along with the new compressor system, now housing higher-performance magnets to improve the flux field, which improved battery regeneration performance. The combustion system was also completely overhauled, now utilising an experimental system known as Turbulent Jet Ignition or Pre-Chamber Ignition, which greatly increased power and efficiency potential, along with various other internal material changes. With all these new technologies implemented in one engine, Honda admitted the entire unit was so experimental that it was a "high risk" move and would take time to realise the full potential of several aspects, but one they believed would ultimately provide much greater performance. As pre-season testing began at Catalunya in Spain, fundamental issues with the power unit were found. The oil tank, which was previously mounted at the front of the engine, was now physically obstructed by the new compressor position and as such was re-designed with an unorthodox shape surrounding the compressor. After initial running in the new McLaren MCL32, exposed to such high-G loads for the first time, this tank was found to cause the oil flow to become unpredictable and restricted, with the engine often losing oil pressure in high-load situations. Honda quickly identified the issue and set to work on a new tank design to combat this; however, it would not be ready until the second week of testing. The reasoning put forward for the issue was incorrect fluid dynamic simulation estimates of the lateral loads the 2017 car regulations would produce, which were far higher than expected. In the meantime, the temporary countermeasure was to overfill the oil tank to ensure flow was always available. This came with its own issues, however, with oil often overflowing into the intake system and damaging other components. This compromised running significantly and made completing testing programs for both the new engine and new chassis difficult. In the second week of pre-season testing, with the revised oil tank fitted, other issues were observed.
Again, unprecedented forces from the new 2017-regulation car, transmitted through the power unit (which acts as a stressed member of the car), brought extreme drivetrain vibrations which caused various ancillary failures and, more distressingly, a persistent catastrophic turbo/MGU-H shaft failure. The design of the MGU-H housing for the RA617H saw it cast as part of the cylinder block in an effort to improve strength; however, upon malfunction this solution would often terminally damage the ICE as well. The power unit issues during testing were so numerous that McLaren Honda found themselves changing complete power units endlessly, rather than identifying and fixing issues, in an effort to gain as much mileage as possible on the car so chassis testing could be carried out. Over the season, the engine suffered a severe number of failures. Honda admitted it was taking longer than hoped to understand how to operate the new engine or come up with a viable long-term fix for the vibration and shaft balance issues destroying the MGU-H. Due to reliability concerns, the engine was now being run in a state of tune that produced less power than the RA616H it replaced from the year prior, and even with consistent upgrades being provided through the season, progress was initially slow. This tarnished the relationship between Honda and McLaren, with both parties showing public frustration with the other. The power unit eventually started to show signs of progress in the latter half of the season as Honda brought countless updates to the Pre-Chamber Ignition system, with varying jet orifice counts and sizes to gain a better understanding of its operation, along with various other improvements to related systems. Reliability finally began to improve visibly, and Honda was able to turn the power back up to where the unit produced more power than the RA616H it replaced. RA618H The RA618H was developed for the 2018 Formula One season and was the first Honda engine to power Scuderia Toro Rosso, in the STR13. It was a much more mature development of Honda's brand-new architecture introduced with the RA617H the year prior. The power unit benefitted from Toro Rosso being significantly more relaxed about the engine's dimensional requirements than McLaren, with the team asking Honda to build the engine as they saw fit, around which Toro Rosso would build the car. Immediately, the engine was significantly more reliable than the RA617H, suffering only three engine-related retirements over the entire season. The troublesome MGU-H was totally redesigned with input from Honda's aerospace division, HondaJet, which lent its significant experience in precision turbofan design. The MGU-H shaft and attached turbines spin at up to a regulation limit of 125,000 rpm, and with the unique design of the split turbo assembly, getting such a long shaft to be perfectly balanced at such high rotational speeds was paramount to the assembly's reliability. As such, the shaft had its size and shape completely altered and its bearings improved, and it received an entirely new structural support mechanism which largely eliminated the resonance/vibration issues that affected the previous iteration. The MGU-H rotor was totally redesigned, now housed in a pressurised rotor chamber which further enhanced performance and reliability, and received enhanced magnets to improve performance.
Now having more real estate available on the car to work with, the air intake funnel was enlarged and shaped more optimally for improved airflow into the engine, and the exhaust manifold was further optimised and routed differently to suit the STR13 bodywork while gaining a net performance improvement. The cooling system designed for use in the STR13 was substantially more effective. The engine received structural reinforcements and material changes to better handle the forces experienced by the new generation of cars, and as such the power unit increased in weight slightly, although it still remained the lightest of the field. The MGU-K had its bearing-supporting structure modified to improve reliability and also received higher-efficiency electromagnets. During the Canadian GP, Honda introduced the "Spec 2" version of the power unit, which consisted of changes to the ICE and combustion system solely in pursuit of performance. The engine proved to deliver a noticeable leap forward in power, and it was this power unit that is believed to have pushed Honda ahead of rival Renault in performance and reliability. This performance gain, steep development curve and trajectory, investment in facilities and drive to win were enough to convince Red Bull Racing to sign Honda as their power unit supplier from the 2019 season, replacing Renault. High Speed or Rapid Combustion During research and testing in 2018 at HRD headquarters in Sakura, Honda engineers, while running a prototype 2019 engine on the dyno, noticed a combustion phenomenon whereby cylinder pressures and power output would skyrocket momentarily before disappearing again. This occurred several times before it was discovered to be due to a mismatch of certain specification parts fitted on the test engines. This led to Honda implementing a vastly improved quality check process, but it also encouraged a smaller team of engineers to look deeper into the phenomenon. Red Bull Racing had also requested that Honda bring as many updates to its engines on track as physically possible, regardless of penalties incurred by Red Bull-owned Scuderia Toro Rosso, if it would speed up development in preparation for the 2019 season. The result was the "Spec 3" test engine, which made its debut at the 2018 Russian Grand Prix. This engine was a prototype for a new kind of combustion process discovered on the dynos in Sakura, after further research and development was undertaken to understand the phenomenon. Dubbed "rapid combustion" by Honda, it is a process in which a significantly more complete yet violent combustion event takes place under certain conditions, drastically improving the output and efficiency of the engine. In a conventional engine, "flame propagation combustion" is utilised, where the flame front within the combustion chamber begins at the spark plug electrode and propagates outward towards the cylinder walls. This propagation takes time to fully occur, however, so optimal ignition timing is dictated by the speed of this propagation, such that the peak of the combustion event occurs at the most optimal piston position during the power stroke for maximum force to be applied to the crankshaft. Because this event takes time to complete, not all of the energy released occurs at this optimal point, so maximum work potential is lost.
For example, the ignition timing is usually set for the spark to occur while the piston is still travelling upwards, as that allows the peak of the combustion event to occur at the optimal piston position (precisely as it begins its downward stroke) for maximum work/power to be exerted on the crankshaft. Advancing the spark this far to compensate for the burn time can, however, lead to combustion initially working against the piston's movement, or to pre-ignition (detonation before intended); this causes knock and stress on the system and can also reduce power. With this new combustion process ("rapid combustion"), conditions such as compression ratio and air/fuel mixture are optimised so that flames are generated evenly around the circumference of the combustion chamber, where self-ignition is triggered by the pressure shock from the jet flames fired from the pre-chamber orifices. While the jet flames spread out to the circumference from the centre of the combustion chamber, the other flames spread from the circumference toward the centre, so combustion proceeds significantly more rapidly. This massively increases the burn speed, so a much larger portion of the energy released occurs at the most optimal piston position during each power stroke, which increased maximum power and efficiency significantly. Due to this increase in burn speed, ignition timing could also be more precisely optimised, removing the small amount of combustion working against the piston on the upward stroke and further increasing the efficiency of the power unit. Combustion of this type allows a much leaner-than-typical air/fuel ratio to be used, which would otherwise be difficult to ignite in a conventional combustion engine. This process required a substantial increase in compression ratio and, although very beneficial to efficiency, was extremely difficult to control and brought significant increases in cylinder pressures, which affected the durability of the ICE. The Spec 3 engine was tested in a car for the first time during practice sessions at the Russian Grand Prix, with drivers reporting a substantial increase in power; however, there was a lot of vibration and unusual gearbox shift behaviour present. This was largely caused by the difficulty of stabilising the new combustion process during the sudden RPM drops experienced during gearshifts. After the practice sessions, Honda refitted the previous Spec 2 engines for qualifying and the race, as planned. The engine was initially found to have gained approximately 40-50 hp just with this change, which was substantially more than estimations suggested. This increase, and the method by which it was produced, pushed the engine past its structural design limits; further work was needed on the engine to withstand the increased power level and unique combustion process, along with further synchronisation work with the Red Bull-sourced gearbox to smooth out gear change behaviour. The new combustion process was in its infancy and initially very unstable, so more development time was required to understand and refine the power unit. RA619H The RA619H was the first Honda power unit to be supplied to Red Bull Racing, debuting in the RB15. It was also the first time Honda had supplied two teams simultaneously in the current hybrid era of Formula One, continuing their relationship with Scuderia Toro Rosso by powering the STR14. The engine was a further refinement of the initial RA617H engine concept introduced in 2017.
Following on from the RA618H, the new engine had a more complete version of the "rapid combustion" process. Honda had refined the calibration and hardware to better utilise the combustion method, and fuel partner ExxonMobil had developed a new type of fuel that stabilised the process in time for the race debut. The engine had improvements across the board, with refinements to the induction and exhaust systems for optimal packaging while bringing performance gains. The MGU-K had refined mechanical components to improve reliability under high-temperature operation; its electromagnetic internals were largely carried over from 2018. The MGU-H had a new stator designed to work at increased water temperatures with a smaller radiator, the MGU-H motor could now be pushed harder in certain high-performance engine modes, and the connecting shaft was increased in length to accommodate a new, larger compressor, with the dynamics of this shaft further improved for reliability. Throughout this season, Honda focused on improving energy management and calibration to improve driveability and extract optimal performance from the power unit. During the French GP, Honda introduced a new compressor that greatly increased compressor efficiency by improving its aerodynamic performance. At the same time, Honda discovered a calibration method to largely maintain targeted performance during high ambient temperatures and at altitude. These findings were implemented for the following race, the Austrian GP, which is held at over 750 m above sea level; there the engine performed faultlessly at high power in high temperatures, while rival manufacturers ran into cooling issues. Max Verstappen went on to win the race, a victory he attributed to Honda's constant efforts and willingness to keep pushing the envelope. By this point, the RA619H's power output was closely matched to the rival Mercedes engine, having made up significant ground since Honda's return in 2015. RA620H The RA620H was the fourth and final iteration of the engine architecture Honda introduced in 2017, powering the Red Bull Racing RB16 and the Scuderia AlphaTauri AT01 in the 2020 Formula One season. The new combustion process, discovered in 2018 and fully integrated into the 2019 RA619H, provided substantial power and efficiency gains; however, the massively increased chamber pressure and often unstable detonation under certain conditions came at the cost of durability for the internal combustion engine. For the RA620H, Honda developed a new type of surface plating named "Kumamoto Plating" or "K-Plating". This patented method was applied to the cylinder bore and other high-stress surfaces. This drastically reduced cylinder wear under high-speed combustion operation, allowing further steps to improve the efficiency and operation of this combustion process. The 2019 MGU-K and MGU-H underwent strenuous durability and performance testing over the winter break and their limits were identified. For 2020, the MGU-K was therefore run harder and more aggressively, giving more torque in driving and more regenerated electricity under engine-braking conditions. A more aggressive regeneration calibration was applied in a newly created operation mode known as "Extra Harvest". This mode allowed large energy regeneration to occur in a short period, which placed more stress on the unit, so it was used only when most effective.
The compressor was enlarged further to increase engine efficiency, and the turbine housing was now 3D-printed (additive manufacturing) in Inconel, allowing more complex shapes to be created. Engine ancillaries were refined further to reduce parasitic losses. With the ability to produce components by additive fabrication, the MGU-H housing was also produced with a 3D printer, increasing strength and reducing weight. The ICE's steel pistons were also 3D-printed, which allowed for ribs and indentations in places that were not possible when forged and machined, further increasing strength and again reducing weight. To find more efficiency gains, Honda developed a device named the "Charge Air Cooler 2" or CAC2. Early variants of this were introduced on the 2016 engine and are a major part of the signature "Honda sound" the power units make on deceleration and downshifts. The CAC2 routes compressed air that would otherwise be vented to the atmosphere into the combustion chambers of "non-firing" cylinders. This re-routed compressed air has multiple benefits: primarily, it allows additional drive force for the turbine to be stored in cylinders when they are not firing, which has a faux "anti-lag" effect. This can be used to spool the turbo up without using battery power, or to regenerate more energy with the MGU-H in regeneration mode. Secondly, this compressed air has a cooling effect within the chamber, increasing durability and knock resistance. The RA620H can be viewed as a mid-process example of Honda's usual conservative approach to power units: first ensuring reliability, then switching to an aggressive push to maximise the potential once reliability is established, and then returning to obtaining reliability. Honda believed they saw this switching point in the mid-2019 season, and work had begun on a brand-new, ground-up design (the RA621H) planned for introduction within two years. RA621H Honda had begun to match Mercedes' power unit performance in 2019, but the performance step Mercedes made in 2020 with the M11 EQ Performance power unit was substantial, even compared with Honda's own gains. This convinced Honda to accelerate development of its new engine, which was originally planned for debut in 2022 along with the new car regulations. The late decision meant they had just six months to complete the engine and have it race-ready for the 2021 Formula One season. Honda admitted this would be a monumental challenge and posed a lot of risks, but believed it was the only way to overtake Mercedes. Team partner Red Bull Racing, after seeing the proposed improvements the new power unit would bring, gave their blessing, and as such the RA621H was born, powering the Red Bull Racing RB16B and the AlphaTauri AT02. This engine was a drastic change from the previous iterations, with the entire core ICE architecture being redesigned for the first time. The camshaft layout was altered and was now significantly more compact, lower down and closer together in the cylinder head. This changed the valve angle and shape of the combustion chamber, increasing the compression ratio of the ICE, lowering its centre of gravity and altering the airflow characteristics. The cylinder block was now machined from a single piece of new billet alloy as opposed to being cast, drastically increasing the block's strength, which allowed the cylinder bore pitch to be reduced, placing the cylinders closer together.
These major structural changes created a significant reduction in the overall size of the engine in height, length and width. In discussion with Red Bull Racing and AlphaTauri, the bank offset was reversed, the induction manifold was further optimised for performance and packaging, and the exhaust manifolds were redesigned into an asymmetric setup, radically different from the left to the right bank, to improve exhaust pulse tuning and better optimise packaging within both cars. The MGU-H had vastly improved magnets and a new insulator, improving cooling performance while also increasing power and torque output. The MGU-K had its gear ratio revised, which further improved regeneration under coasting or braking and overall torque output under motoring, and received a brand-new housing which was more resistant to vibration. With the vastly improved thermal efficiency of the new ICE came a trade-off: a reduction in waste exhaust energy output, which negatively impacts turbocharger and hybrid regeneration performance. The key for Honda engineers was to balance increasing ICE efficiency and crank power output against losing too much exhaust heat energy, which would harm regeneration potential for the battery. To mitigate the losses, the turbocharger assembly was revised in an attempt to more efficiently harvest what was available; the compressor was increased in size again, with significant changes made to the compressor wheel's blades, and the turbine itself was also modified. Carbon Nano Tube (CNT) Energy Store Honda also introduced a brand-new energy store (ES) for the first time, which had been in the development phase since 2016. The aim was to create a battery that combined improvements in energy efficiency with significant reductions in weight. The technology utilised in the new ultra-high-energy battery cell was revealed to be an in-house Honda-developed ES utilising carbon nanotubes (CNTs); battery cell electrodes contain carbon particles, and electricity flows through these particles. By filling the spaces between carbon particles with the nanometre-sized, tube-shaped CNTs (with a diameter of one millionth of a millimetre), it is possible to achieve extremely low resistance and enable electricity to flow more freely. Demonstrating significant reductions in resistance and in deterioration over its lifespan, the new ES substantially improved the power unit's energy harvesting and deployment characteristics. The output density (watts per kilogram) of the new ES, introduced in the 12th round of the 2021 season, is 1.3 times that of the ES used in 2020 and early 2021 (a 30% improvement). This allowed the cars to have deployable hybrid energy available much more of the time, and much faster regeneration capability compared to the rest of the field. Compared to the 2015 energy store, even with the dramatic improvement in density and efficiency, the brand-new battery is 26% smaller and 15% lighter, which contributes greatly from the perspectives of energy management and optimising vehicle driving performance. All these changes added up to a power unit that was significantly more powerful and reliable, while becoming even smaller and lighter than the original RA615H "Size-Zero" engine of 2015. This provided massive gains to the teams, now able to create more aggressive body packaging, which aided aero development.
The power unit quickly became the front-runner of the field, with superior power and energy recovery ability together with outstanding reliability, and was a key factor in driving Max Verstappen to his first World Drivers' Championship in the 2021 season. RBPTH001 The RBPTH001 is a development of the RA621H designed for use in the 2022 Formula 1 World Championship (and subsequently the 2023 Formula 1 World Championship), powering the Red Bull Racing RB18 and AlphaTauri AT03 in 2022 and the Red Bull Racing RB19 and AlphaTauri AT04 in 2023. It represents the final permitted power unit design change before the engine freeze began on 1 March 2022. After Honda's formal F1 exit, the engines remain Honda-developed, produced, assembled, maintained and trackside-supported, and will remain so until the end of the 2025 season, when a new engine era will begin. Honda developed the 2022 RBPTH001 power unit at its research and development centre in Sakura City, Tochigi prefecture, run by Honda's racing subsidiary, HRC (Honda Racing Corporation). 10% Ethanol fuel (E10) introduced The main developments of the engine were to accommodate the use of the new E10 fuel and the challenges it brought. The structure of the ethanol molecule means it has a lower calorific value as a combustible component than an equivalent volume of petrol, making the combustion process less potent and therefore reducing power. This is usually compensated for by combusting a larger quantity of fuel (and a corresponding increase of air, usually in the form of increased boost pressure) to make up for the lower energy density; however, this is not an option, as F1 regulations restrict fuel flow to 100 kg/hour. To regain lost power potential, the engine would instead need to be pushed harder and made more thermally efficient to extract more of the available fuel energy content. There are, however, some exploitable beneficial characteristics of ethanol. The new E10 fuel blend is more resistant to detonation, allowing engines to be run in a higher-stress state if well controlled. For the RBPTH001, along with significant improvements to all the technologies introduced to the power unit over the past six years, the result was a major evolution that made even the regulation maximum compression ratio of 18:1 feel restrictive, despite it being an enormously high target when Honda first rejoined F1 in 2015. This, along with raising the nominal boost pressure, helped to drive combustion efficiency higher and mitigate the power losses brought by the lower-energy fuel, essentially harnessing more of the available fuel energy to compensate. This greatly increases the stress on the power unit, though, with cylinder pressures now the highest they have ever been. The combustion chamber and mechanism were therefore further developed to accommodate the new burn characteristics and lower-calorific fuel, the bottom-end internals were strengthened, and the ignition timing map was altered completely from the 2021 engine. The MGU-H and turbine were re-tuned to better cope with the E10 exhaust gas density change, and further unique internal changes were made to reduce crevice losses in combustion.
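As a rough illustration of the fuel-flow-limited power budget described above, the sketch below converts the 100 kg/hour limit into crank power (a minimal sketch; the calorific value and efficiency figures are illustrative assumptions, not published Honda data):

```python
# Fuel-flow-limited power budget; all figures are illustrative assumptions.
FUEL_FLOW_KG_PER_H = 100.0   # FIA maximum fuel mass flow rate
LHV_MJ_PER_KG = 43.0         # assumed lower heating value of an F1-spec petrol blend
THERMAL_EFFICIENCY = 0.50    # assumed overall thermal efficiency of the ICE

chemical_power_kw = FUEL_FLOW_KG_PER_H / 3600.0 * LHV_MJ_PER_KG * 1000.0
crank_power_kw = chemical_power_kw * THERMAL_EFFICIENCY

print(f"Chemical power in the fuel: {chemical_power_kw:.0f} kW")
print(f"Crank power at {THERMAL_EFFICIENCY:.0%} thermal efficiency: "
      f"{crank_power_kw:.0f} kW (~{crank_power_kw * 1.341:.0f} hp)")
```

At these assumed values the ICE alone yields roughly 600 kW (about 800 hp); adding the regulation 120 kW of MGU-K deployment brings the combined figure toward the 1,000+ hp quoted earlier, which is why thermal efficiency, rather than fuel quantity, is the lever manufacturers chase under this rule set.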
Ethanol also has a higher latent heat of vaporisation than petrol, so the increased ethanol content brought a charge-cooling effect, reducing combustion chamber temperatures. This benefit allowed the previously mentioned changes to be made and allowed Honda to increase the nominal running water temperature of the engine; this meant it required less cooling and provided a further aerodynamic benefit to the teams building the cars, which were able to reduce cooling inlet and outlet sizes. The crankshaft and cylinder block geometry were adjusted to ensure reliability under the new E10 combustion conditions, while a further development of Honda's Kumamoto plating was applied to the cylinder bores. The resulting power unit weighs slightly more than the RA621H, purely from strengthened internal components. Even challenged with the reduced energy content of the new fuel, the 2022 engine achieved a higher thermal efficiency value than the 2021 engine. With road car engines achieving between 20 and 35%, Honda had achieved a result well over 50%, making the RBPTH001 one of the most power-dense and efficient gasoline reciprocating engines ever developed. RBPTH002 The RBPTH002 was developed for use in the 2024 Formula One season, replacing the RBPTH001 that was used in the 2022 and 2023 seasons. It currently powers the Red Bull Racing RB20 and the Visa Cash App RB VCARB01. With the engine freeze in place since 2022, power unit development has been restricted to reliability improvements only. For 2024, Honda made further improvements to the RBPTH001 to increase the engine's durability; this increased the reliability of the power unit, extending the racing-distance thresholds before degradation began to reduce performance. The gains allowed teams to run the engine in a higher power state (more aggressive engine modes) for longer periods of time without negative effects, making for a more performant power unit during the season. Specific details of the changes have not yet been disclosed to the public, but they were drastic enough to warrant a new designation in the engine series, the RBPTH002. Season statistics for hybrid era Honda engines ** Historical record for most wins in a season - Red Bull Racing References Engines by model Gasoline engines by model V6 Hybrid Formula One Power Unit Formula One engines V6 engines Honda in motorsport Honda in Formula One
Honda V6 hybrid Formula One power unit
[ "Technology" ]
7,634
[ "Engines", "Engines by model" ]
70,914,136
https://en.wikipedia.org/wiki/Penitanzacid%20F
Penitanzacid F was found as one of twelve new tanzawaic acid derivatives, secondary metabolites of the fungus Penicillium sp. KWF32 isolated from the tissues of Bathymodiolus sp. collected in the cold seep area of the South China Sea in 2021. As a tanzawaic acid derivative, it may have anticoccidial, cytotoxic, lipid-lowering, superoxide anion production-inhibiting, bacterial conjugation-inhibiting, and NO production-inhibiting properties. Structure and biosynthesis The biosynthesis of penitanzacid F starts from one acetyl-CoA, two methylmalonyl-CoA and three malonyl-CoA molecules, assembled by a polyketide synthase (PKS). The product then undergoes Diels–Alder cyclization and chain elongation with two further malonyl-CoA units, and is oxidized to penitanzacid F. References Carboxylic acids Ketones Decalins Tertiary alcohols Dienes
Penitanzacid F
[ "Chemistry" ]
222
[ "Ketones", "Carboxylic acids", "Functional groups" ]
70,916,007
https://en.wikipedia.org/wiki/Karen%20Bakker
Karen Bakker (6 December 1971 – 14 August 2023) was a Canadian author, researcher, and entrepreneur known for her work on digital transformation, environmental governance, and sustainability. A Rhodes Scholar with a DPhil from Oxford, Bakker was a professor at the University of British Columbia. In 2022–2023 she was on sabbatical leave at Harvard, as a Harvard Radcliffe Institute Fellow. She was the recipient of numerous awards, including a Guggenheim Fellowship, Stanford University's Annenberg Fellowship in Communication, Canada's "Top 40 Under 40", and a Trudeau Foundation Fellowship. Bakker's research focused on the intersection of digital technologies and environmental governance, digital environmental humanities, digital geographies, political ecology, and political economy. In the early part of her career, she focused on water and climate issues. Later, she concentrated on digital technology and environmental futures studies as critical yet pragmatic projects aiming to advance regenerative sustainability and environmental justice. Career Bakker was born in Montreal and raised in Ottawa. She trained in both the natural and social sciences, completing a combined Bachelor of Arts and Science (with a minor in Physics) at McMaster University, followed by a DPhil in geography at the University of Oxford. She published over 100 academic publications, including seven sole-authored and edited scholarly books. Her work has been cited over 18,000 times. She also served as a policy advisor to organizations at the forefront of digital innovation on environmental issues, including the Digital Research Alliance of Canada, Future Earth, Sustainability in a Digital Age, and the International Institute for Sustainable Development. Her advisory roles have also included the IPCC, National Round Table on Environment and Economy, OECD, UNDP, UNEP, UNESCO, and OHCHR. Bakker was a member of the Decolonizing Water research collective and the Riverhood project team (funded by the EU), as well as the Coalition on Digital Environmental Sustainability, and the Policy Network on Environment of the Internet Governance Forum. She was also a board member of the National Research Council Canada, and a member of the editorial board of Global Environmental Change. Bakker delivered over 200 conference presentations and invited lectures over the course of her career, at academic institutions such as Berkeley, Harvard, Stanford, and UCLA. These span several disciplines including geography and environmental studies, computer science, urban studies, labour studies, political ecology, and political economy. Digital transformation and sustainability: The Smart Earth Project Bakker's Smart Earth project engaged with two of the most destabilizing, controversial trends of our time: digital transformation and global environmental change. Smart Earth brings together researchers, educators, and policymakers to study environmental knowledge and seeks to better understand the complex relationships between humans and nature. This project was launched with a meta-review of a smart technologies database in 2018. Bakker curated a website with learning tools regarding digital technologies and their application to environmental issues, and collaborated with the United Nations Environment Program to map out a roadmap for international action on digital transformation and sustainability.
Interspecies communication and bioacoustics: The Sounds of Life Bakker worked at the intersection of data and sustainability, exploring how technology can be leveraged to better protect, understand, cohabitate, and perhaps even communicate with our non-human counterparts. Bakker wrote critically about the potential pitfalls of the digital listening agenda, comparing it to an environmental variant of surveillance capitalism. In October 2022, Bakker published her book: The Sounds of Life: How Digital Technology is Bringing Us Closer to the Worlds of Animals and Plants (Princeton University Press). The book was chosen as the NPR Science Friday Book Club book of the month for November 2022, selected as one of Malcolm Gladwell's Next Big Idea Club nominees in October 2022, and received both popular and critical acclaim, including a review in Science, which described the book as "thoughtful and rigorous…meticulously researched and colorfully presented…in a way that is accessible to non-experts. A wonderful mix of animal ecology, narratives of science-doing, futurism, and accounts of Indigenous knowledge that is as interdisciplinary as the field itself." She was invited to present the book at Google Talks and the Aspen Ideas Festival, and was the opening keynote at the TED 2023 conference. Water governance Bakker also worked broadly on issues of water accessibility, governance, and policy. Her publications include Privatizing Water: Governance Failure and the World's Urban Water Crisis (Cornell University Press), An Uncooperative Commodity: Privatizing Water in England and Wales (Oxford University Press), "Neoliberalizing Nature? Market Environmentalism in Water Supply in England and Wales" (2005), and "Water security: Debating an emerging paradigm" (2012). The Privatizing Water book was awarded the Urban Affairs Association Book Award (2011; honourable mention) and the Rik Davidson/Studies in Political Economy Book Prize (2012). As Karen Le Billon Writing under her nom de plume, Karen Le Billon, Bakker wrote two popular science books on children, food, and families. French Kids Eat Everything (HarperCollins, 2012) was published in 15 countries and 12 languages, awarded the Taste Canada Food Writing Award in 2013, and widely featured in the press, including The New York Times, The Guardian, and The Sunday Times. The follow-up book Getting To Yum: The 7 Secrets Of Raising Eager Eaters (HarperCollins, 2014) was also well received by experts and the public. Death Bakker died after a brief illness on 14 August 2023. Notable works Books Gaia's Web: How Digital Environmentalism Can Combat Climate Change, Restore Biodiversity, Cultivate Empathy, and Regenerate the Earth (Penguin Random House Canada, 2024) The Sounds of Life: How Digital Technology is Bringing Us Closer to the Worlds of Animals and Plants (Princeton University Press, 2022) . Water Teachings, K. Bakker and C. Crane (eds) (2020) Privatizing Water: Governance Failure and the World's Urban Water Crisis (Cornell University Press, 2010) . Eau Canada: The future of Canada's water (UBC Press, 2007) . An Uncooperative Commodity: Privatizing Water in England and Wales (Oxford University Press, 2004) . Chapters and Articles (Peer-Reviewed) References External links Water Governance Project Decolonizing water curriculum Official UBC Profile UBC Dr. 
Karen Bakker Memorial Fund 1971 births 2023 deaths Sustainability scientists 21st-century Canadian women writers Academic staff of the University of British Columbia Canadian women academics McMaster University alumni Canadian Rhodes Scholars Canadian geographers Women geographers Scientists from Montreal
Karen Bakker
[ "Environmental_science" ]
1,368
[ "Sustainability scientists", "Environmental scientists" ]
70,922,122
https://en.wikipedia.org/wiki/GIS%20Arta
GIS Arta or GIS Art for Artillery is military software used to coordinate artillery strikes. It has been used by the Armed Forces of Ukraine in the 2022 Russian invasion of Ukraine. It offers fast targeting (about one minute), does not require reconnaissance units to use specialized devices (they use smartphones), and does not require artillery pieces to be clustered together. It has been compared to the German artillery software ESG Adler. It was developed by Ukrainian programmers, with involvement by British digital map companies. See also Delta (situational awareness system) References External links GIS Arta, Official website Location-based software Military equipment of the Russian invasion of Ukraine
GIS Arta
[ "Technology" ]
137
[ "Computing stubs", "Software stubs" ]
70,925,428
https://en.wikipedia.org/wiki/Tetracenomycin%20C
Tetracenomycin C is an antitumor anthracycline-like antibiotic produced by Streptomyces glaucescens GLA.0. The pale-yellow antibiotic is active against some gram-positive bacteria, especially against streptomycetes; gram-negative bacteria and fungi are not inhibited. Considering the differences in biological activity and the functional groups of the molecule, tetracenomycin C is not a member of the tetracycline or anthracyclinone group of antibiotics. Tetracenomycin C is notable for its broad activity against actinomycetes. Structure and properties The structure of tetracenomycin C was established by chemical and spectroscopic methods. The three hydroxy groups, at C-4, C-4a, and C-12a, are cis to each other. The two at C-4a and C-12a are involved in intramolecular hydrogen bonding to the carbonyl oxygen atoms at C-5 and C-1, respectively. The carboxymethyl group at C-9 is almost perpendicular to the planar rings C and D. The crystal packing is stabilized by intermolecular hydrogen bonds with participation of methanol molecules. Biosynthesis As in other anthracycline antibiotics, the framework is synthesized by a polyketide synthase and subsequently modified by other enzymes. Early studies of tetracenomycin C biosynthesis utilized mutants that were blocked in its production to describe many of the pathway's intermediates. Complementation of the mutations allowed the cloning of a large gene cluster that included all of the genes required for production, as well as resistance genes. Transformation of the cluster into heterologous streptomycete hosts like Streptomyces lividans resulted in the overproduction of several intermediates of the pathway. Sequence analysis of the polyketide synthase genes showed that they included two β-ketoacyl synthases (tcmK and tcmL), an acyl carrier protein (tcmM), and several cyclases. Streptomyces glaucescens protects itself from the deleterious effect of tetracenomycin C by the action of the tcmA and tcmR gene products. TcmA has several transmembrane loops and is believed to act as a tetracenomycin C exporter. Its expression is controlled by the TcmR repressor. TcmR binds to operator sites in the tcmA promoter. When tetracenomycin C is present, it binds to TcmR, releasing it from the DNA and initiating tcmA expression. References Naphthalenes Carboxylic acids Alcohols Triketones Methoxy compounds Antibiotics
Tetracenomycin C
[ "Chemistry", "Biology" ]
615
[ "Biotechnology products", "Carboxylic acids", "Functional groups", "Antibiotics", "Biocides" ]
70,926,536
https://en.wikipedia.org/wiki/Mean%20transverse%20energy
In accelerator physics, the mean transverse energy (MTE) is a quantity that describes the variance of the transverse momentum of a beam. While the quantity has a defined value for any particle beam, it is generally used in the context of photoinjectors for electron beams. Definition For a beam consisting of particles with momenta $\mathbf{p}_i$ and mass $m$ traveling predominantly in the direction $\hat{z}$, the mean transverse energy is given by $\mathrm{MTE} = \frac{1}{N}\sum_{i} \frac{|\mathbf{p}_{\perp,i}|^2}{2m}$, where $\mathbf{p}_{\perp}$ is the component of the momentum perpendicular to the beam axis $\hat{z}$. For a continuous, normalized distribution of particles $f(\mathbf{p})$ the MTE is $\mathrm{MTE} = \int \frac{p_{\perp}^2}{2m} f(\mathbf{p})\, \mathrm{d}^3 p$. Relation to Other Quantities Emittance is a common quantity in beam physics which describes the volume of a beam in phase space, and is normally conserved through typical linear beam transformations; for example, one may transition from a beam with a large spatial size and a small momentum spread to one with a small spatial size and a large momentum spread, both cases retaining the same emittance. Due to its conservation, the emittance at the species source (e.g., photocathode for electrons) is the lower limit on attainable emittance. For a beam born with a spatial size $\sigma_x$ and MTE, the minimum 2-D ($x$ and $p_x$) normalized emittance is $\epsilon_{nx} = \sigma_x \sqrt{\frac{\mathrm{MTE}}{mc^2}}$. The emittance of each dimension may be multiplied together to get the higher-dimensional emittance. For a photocathode the spatial size of the beam is typically equal to the spatial size of the ionizing laser beam, and the MTE may depend on several factors involving the cathode, the laser, and the extraction field. Due to the linear independence of the laser spot size and the MTE, the beam size is often factored out, formulating the 1-D thermal emittance $\frac{\epsilon_{nx}}{\sigma_x} = \sqrt{\frac{\mathrm{MTE}}{mc^2}}$. Likewise, the maximum brightness, or phase space density, is inversely proportional to the MTE. References Accelerator physics
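A minimal numerical sketch of the thermal emittance relation above (the MTE and spot-size values are illustrative assumptions, not measurements of any particular cathode):

```python
import math

MC2_EV = 0.511e6  # electron rest energy m*c^2 in eV

def thermal_emittance(mte_ev, sigma_x_mm):
    """Minimum normalized emittance (mm*mrad) for an rms spot size
    sigma_x_mm (mm) and mean transverse energy mte_ev (eV):
    eps_n = sigma_x * sqrt(MTE / (m c^2))."""
    return sigma_x_mm * math.sqrt(mte_ev / MC2_EV) * 1e3  # rad -> mrad

# Example: a photocathode with an assumed MTE of 500 meV,
# illuminated by a 1 mm rms laser spot.
print(f"{thermal_emittance(0.5, 1.0):.2f} mm*mrad")   # ~0.99
# Halving the MTE lowers the emittance floor by a factor of sqrt(2):
print(f"{thermal_emittance(0.25, 1.0):.2f} mm*mrad")  # ~0.70
```

Because the emittance floor scales with the square root of the MTE, reducing the cathode MTE is one of the main routes to brighter photoinjector beams, alongside shrinking the laser spot.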
Mean transverse energy
[ "Physics" ]
366
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
63,747,637
https://en.wikipedia.org/wiki/Enalapril/hydrochlorothiazide
Enalapril/hydrochlorothiazide, sold under the brand name Vaseretic among others, is a fixed-dose combination medication used for the treatment of hypertension (high blood pressure). It contains enalapril, an angiotensin-converting enzyme inhibitor, and hydrochlorothiazide, a diuretic. It is taken by mouth. The most frequent side effects include dizziness, headache, fatigue, and cough. History Enalapril/hydrochlorothiazide was approved for medical use in the United States in October 1986. References Further reading ACE inhibitors Combination antihypertensive drugs Diuretics Prodrugs
Enalapril/hydrochlorothiazide
[ "Chemistry" ]
142
[ "Chemicals in medicine", "Prodrugs" ]
63,750,068
https://en.wikipedia.org/wiki/Natalie%20Prystajecky
Natalie Anne Prystajecky is a Canadian biologist who leads the Environmental Microbiology program at the British Columbia Centre for Disease Control Public Health Laboratory. She holds a Clinical Assistant Professor position at the University of British Columbia. During the COVID-19 pandemic Prystajecky was involved with the development of COVID-19 testing capabilities. Early life and education Prystajecky studied environmental science and biology at the University of Calgary. She moved to British Columbia as a graduate student, where she first worked toward a certificate in watershed management. In 2010 Prystajecky earned her doctoral degree at the University of British Columbia. Her research considered epidemiological studies of Giardia spp. Research and career After completing her doctorate, Prystajecky joined the British Columbia Provincial Health Services Authority, where she guided British Columbians through outbreaks of norovirus and influenza. At the time, Prystajecky's advice was to "wash your hands all the time, and soap and water is the best". Prystajecky leads the Environmental Microbiology program at the British Columbia Centre for Disease Control Public Health Laboratory. She investigates the relationship between environmental exposures and clinical outcomes. To do this, Prystajecky developed technology for genome sequencing. She has used these genomic technologies to search for pathogens that might cause foodborne illnesses. Prystajecky has used metagenomics to test for bacteria and viruses in water in an effort to improve the health of people and ecosystems. In early 2020 Prystajecky was involved in two British Columbian oyster One Health studies, named UPCOAST-V for Vibrio parahaemolyticus and UPCOAST-N for Norovirus. Improved detection of these pathogens will help to reduce the spread of disease and support the Canadian oyster industry. During the COVID-19 pandemic Prystajecky was involved with the development of COVID-19 testing capabilities. The first quantitative PCR assay was shared by researchers in Wuhan with the World Health Organization, and forms the basis of many COVID-19 tests, including those developed by Prystajecky. In particular, Prystajecky looked to reduce the time taken between testing and obtaining results in an effort to understand transmission and protect vulnerable members of the population. The British Columbia Centre for Disease Control program that conducts the testing is known as Responding to Emerging Serious Pathogen Outbreaks using Next-gen Data (RESPOND), and makes use of genome sequencing to identify which patients have been infected by the disease. Selected publications Personal life Prystajecky has two children. References Living people Year of birth missing (living people) Canadian women biologists Pathogen genomics University of Calgary alumni University of British Columbia alumni Public health researchers 21st-century Canadian biologists 21st-century Canadian women scientists
Natalie Prystajecky
[ "Biology" ]
564
[ "Molecular genetics", "DNA sequencing", "Pathogen genomics" ]
63,754,763
https://en.wikipedia.org/wiki/COVID-19%20Immunity%20Task%20Force
The COVID-19 Immunity Task Force (CITF) is one of the Government of Canada's early efforts to track the 2020 coronavirus pandemic. An external, dedicated secretariat was established in order to maximize the efficiency of the CITF's work. Purpose The CITF was to use serology "to survey representative samples of the population for the presence of antibodies to the virus". Trudeau's press release of 23 April 2020 on the initiation of the CITF listed several goals that the task force would help to achieve. A Vaccine Surveillance Reference Group (VSRG) was also established within the CITF to monitor the safety and effectiveness of COVID-19 vaccines made available in Canada. Task Force membership The CITF Board is composed of doctors, infectious disease experts, and policy makers. Leadership Group Executive Committee David Naylor, Co-chair Catherine Hankins, Co-chair Timothy Evans, Executive Director Heather Hannah Mona Nemer Howard Njoo Gina Ogilvie Jutta Preiksaitis Gail Tomblin Murphy Paul Van Caeseele Government of Canada representatives Theresa Tam, Chief Public Health Officer of Canada Mona Nemer, Chief Science Advisor of Canada Stephen Lucas, Deputy Minister of Health of Canada Members The CITF leadership group expanded on 2 May 2020. Its additional members as of March 2022 are: Provincial & Territorial representatives Shelly Bolotin, Ontario Marguerite Cameron, Prince Edward Island Catherine Elliott, Yukon Richard Garceau, New Brunswick Heather Hannah, Northwest Territories Mel Krajden, British Columbia Christie Lutsiak, Alberta Richard Massé, Quebec Jessica Minion, Saskatchewan Michael Patterson, Nunavut Gail Tomblin Murphy, Nova Scotia Paul Van Caeseele, Manitoba References External links Covid-19 Test Kits Serology Blood tests Epidemiology Immunologic tests Funding bodies of Canada COVID-19 pandemic in Canada National responses to the COVID-19 pandemic Health Canada Clinical pathology Scientific organizations based in Canada Scientific organizations established in 2020 2020 establishments in Canada Task forces established for the COVID-19 pandemic
COVID-19 Immunity Task Force
[ "Chemistry", "Biology", "Environmental_science" ]
425
[ "Blood tests", "Immunologic tests", "Epidemiology", "Chemical pathology", "Environmental social science" ]
75,217,515
https://en.wikipedia.org/wiki/Ocedurenone
Ocedurenone, formerly known as KBP-5074, is a nonsteroidal, selective mineralocorticoid receptor antagonist that is being developed to treat hypertension in patients with chronic kidney disease, with less risk of hyperkalemia than existing treatments. In 2023, KBP Biosciences entered into talks to sell the drug to Novo Nordisk for US$1.3 billion. It is a small-molecule drug administered orally and is in a Phase III trial that is scheduled to complete in 2024. References Antimineralocorticoids Antihypertensive agents Piperidines Benzonitriles Chloroarenes
Ocedurenone
[ "Chemistry" ]
139
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,220,295
https://en.wikipedia.org/wiki/Sotatercept
Sotatercept, sold under the brand name Winrevair, is a medication used for the treatment of pulmonary arterial hypertension. It is an activin signaling inhibitor, based on the extracellular domain of the activin type 2 receptor expressed as a recombinant fusion protein with an immunoglobulin Fc domain (ACTRIIA-Fc). It is given by subcutaneous injection. The most common side effects include headache, epistaxis (nosebleed), rash, telangiectasia (spider veins), diarrhea, dizziness, and erythema (redness of the skin). Sotatercept was approved for medical use in the United States in March 2024, and in the European Union in August 2024. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. Medical uses In the United States, sotatercept is indicated for the treatment of adults with pulmonary arterial hypertension (PAH, WHO Group 1). In the European Union, sotatercept, in combination with other pulmonary arterial hypertension therapies, is indicated for the treatment of pulmonary arterial hypertension in adults with WHO Functional Class (FC) II to III, to improve exercise capacity. Side effects The most common adverse reactions include headache, epistaxis, rash, telangiectasia, diarrhea, dizziness, and erythema. Sotatercept causes increases in hemoglobin, the oxygen-carrying protein of red blood cells. High concentrations of red blood cells in blood may increase the risk of blood clots. Sotatercept causes decreases in platelet count, which can result in bleeding problems. Based on findings in animal studies, sotatercept may impair female and male fertility and cause fetal harm when administered during pregnancy. History The US Food and Drug Administration (FDA) approved sotatercept based on evidence of safety and effectiveness from a clinical trial of 323 participants with PAH (WHO Group 1, Functional Class II or III). The trial was conducted at 126 sites in 21 countries: Argentina, Australia, Belgium, Brazil, Canada, the Czech Republic, France, Germany, Israel, Italy, Mexico, the Netherlands, New Zealand, Poland, Serbia, South Korea, Spain, Sweden, Switzerland, the United Kingdom, and the United States. The study included 88 participants inside the United States (43 in the sotatercept group and 45 in the placebo group). Society and culture Legal status Sotatercept was approved for medical use in the United States in March 2024. The FDA granted the application breakthrough therapy designation. In June 2024, the Committee for Medicinal Products for Human Use of the European Medicines Agency adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product Winrevair, intended for the treatment of pulmonary arterial hypertension. The applicant for this medicinal product is Merck Sharp & Dohme B.V. Sotatercept was approved for medical use in the European Union in August 2024. Economics Following its approval in 2024, the list price of Winrevair, sold as single-vial and double-vial kits, was announced with an estimated annual cost of $240,000. Names Sotatercept is the international nonproprietary name. Sotatercept is sold under the brand name Winrevair. Research It was initially developed to increase bone density but during its early development was found to increase hemoglobin and red blood cell counts, and was subsequently studied for use in anemia associated with multiple conditions including beta thalassemia and multiple myeloma.
Development of this drug was superseded by the development of luspatercept (Reblozyl), a modified activin receptor type 2B (ACTRIIB-Fc) based ligand trap with improved properties for anemia. Based on the hypothesis that the drug might block the effects of activin in promoting pulmonary vascular disease, sotatercept was found to inhibit vascular obliteration in multiple models of experimental pulmonary hypertension, providing the rationale for repositioning it for PAH in the PULSAR and STELLAR clinical trials. References Further reading External links Antihypertensive agents Drugs developed by Merck & Co. Peptides Orphan drugs
Sotatercept
[ "Chemistry" ]
897
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
75,221,052
https://en.wikipedia.org/wiki/Spinach%20%28software%29
Spinach is an open-source magnetic resonance simulation package initially released in 2011 and continuously updated since. The package is written in Matlab and makes use of the built-in parallel computing and GPU interfaces of Matlab. The name of the package whimsically refers to the physical concept of spin and to Popeye the Sailor who, in the eponymous comic books, becomes stronger after consuming spinach. Overview Spinach implements magnetic resonance spectroscopy and imaging simulations by solving the equation of motion for the density matrix in the time domain: $\frac{\partial \rho}{\partial t} = -i \hat{\hat{L}} \rho$, where the Liouvillian superoperator $\hat{\hat{L}}$ is a sum of the Hamiltonian commutation superoperator $\hat{\hat{H}}$, relaxation superoperator $\hat{\hat{R}}$, kinetics superoperator $\hat{\hat{K}}$, and potentially other terms that govern spatial dynamics and coupling to other degrees of freedom: $\hat{\hat{L}} = \hat{\hat{H}} + i\hat{\hat{R}} + i\hat{\hat{K}} + \cdots$. Computational efficiency is achieved through the use of reduced state spaces, sparse matrix arithmetic, on-the-fly trajectory analysis, and dynamic parallelization; a minimal numerical illustration of this propagation scheme is sketched below. Standard functionality As of 2023, Spinach is cited in over 300 academic publications. According to the documentation and academic papers citing its features, the most recent version 2.8 of the package performs: Time-domain nuclear magnetic resonance (NMR) simulations of: Standard NMR experiments (DEPT, COSY, NOESY, HSQC, TOCSY, etc.). Protein and nucleic acid NMR experiments (HNCA, HNCOCA, HNCO, etc.). Magic angle spinning NMR experiments (CP-MAS, MQMAS, WISE, etc.). Experiments involving residual dipolar coupling and other residual order effects. Zero- and ultra-low field experiments, including Earth's field NMR. Nuclear quadrupole resonance, including overtone NMR. Time-domain magnetic resonance imaging (MRI) simulations, including: Standard and user-specified imaging sequences. Diffusion coefficient and diffusion tensor imaging. Three-dimensional point-resolved NMR spectroscopy. Ultrafast spatially encoded NMR spectroscopy. Time-domain electron spin resonance (ESR) simulations of: Standard pulsed ESR experiments (HYSCORE, ENDOR, ESEEM, etc.). Pulsed dipolar spectroscopy (DEER, RIDME, etc.). Dynamic nuclear polarization for static and spinning samples. Common models of spin relaxation (Redfield theory, stochastic Liouville equation, Lindblad theory) and chemical kinetics are supported, and a library of powder averaging grids is included with the package. Optimal control module Spinach contains an implementation of the gradient ascent pulse engineering (GRAPE) algorithm for quantum optimal control. The documentation and the book describing the optimal control module of the package list the following features: L-BFGS quasi-Newton and Newton-Raphson GRAPE optimizers. Spin system trajectory analysis by coherence and correlation order. Spectrogram analysis of the pulse waveform. Prefixes, suffixes, keyholes, and freeze masks. Optimization of cooperative pulses and phase cycles. Waveform penalty functionals and instrument response. Dissipative background evolution generators and control operators are supported, as well as ensemble control over distributions in common instrument calibration parameters, such as control channel power and offset. References Computational chemistry software Physics software
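To make the propagation scheme above concrete, the following is a minimal sketch in Python (Spinach itself is a Matlab package; this is not its code). It propagates the vectorized density matrix of a single spin-1/2 under a Liouvillian assembled from a Hamiltonian commutation superoperator and a crude isotropic relaxation term; the frequency, relaxation rate, and time step are illustrative assumptions.

```python
# Toy time-domain density-matrix propagation: d(rho)/dt = -i[H, rho] + R(rho),
# for one spin-1/2. Illustrative only; not Spinach (which is written in Matlab).
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators (Pauli matrices divided by two)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
eye = np.eye(2, dtype=complex)

omega = 2 * np.pi * 100.0  # Zeeman frequency in rad/s (assumed value)
H = omega * sz

# Commutation superoperator acting on the row-major vec(rho):
# vec(H @ rho - rho @ H) = (kron(H, I) - kron(I, H.T)) @ vec(rho)
H_comm = np.kron(H, eye) - np.kron(eye, H.T)

# Crude isotropic relaxation toward zero (a stand-in for Redfield/Lindblad)
R = -10.0 * np.eye(4, dtype=complex)

L = -1j * H_comm + R       # total Liouvillian: Hamiltonian plus relaxation parts
P = expm(L * 1e-4)         # propagator for one 0.1 ms time step

rho = sx.reshape(-1)       # initial state: transverse magnetization
fid = []                   # free-induction decay of <Sx>
for _ in range(200):
    fid.append(np.trace(sx @ rho.reshape(2, 2)).real)
    rho = P @ rho

print([round(v, 4) for v in fid[:5]])  # a decaying oscillation
```

Real Spinach simulations differ in the essentials the article lists: operators live in reduced state spaces, superoperators are kept sparse, and relaxation is generated from physical theories rather than a single rate, but the vectorize-build-propagate structure is the same.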
Spinach (software)
[ "Physics", "Chemistry" ]
652
[ "Computational chemistry software", "Chemistry software", "Computational physics", "Computational chemistry", "Physics software" ]
75,223,644
https://en.wikipedia.org/wiki/Levilimab
Levilimab is an anti-IL-6 monoclonal antibody initially developed to treat rheumatoid arthritis. In 2020, it was approved as a treatment for COVID-19 in Russia. References Monoclonal antibodies COVID-19 drug development
Levilimab
[ "Chemistry" ]
55
[ "Pharmacology", "Drug discovery", "Medicinal chemistry stubs", "COVID-19 drug development", "Pharmacology stubs" ]
75,224,036
https://en.wikipedia.org/wiki/Osoresnontrine
Osoresnontrine (BI-409306) is a phosphodiesterase 9 inhibitor in development for schizophrenia, attenuated psychosis syndrome, and Alzheimer's disease. A preclinical study suggested that it improves memory in rodents. References PDE9 inhibitors
Osoresnontrine
[ "Chemistry" ]
62
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
75,224,127
https://en.wikipedia.org/wiki/Phosphodiesterase%209%20inhibitor
Phosphodiesterase 9 inhibitors or PDE9 inhibitors are a class of drugs that work by inhibiting the activity of the enzyme PDE9. The first compound with this effect, BAY 73-6691, was reported in 2004. PDE9 inhibitors are under investigation for the treatment of obesity, hepatic fibrosis, Alzheimer's disease, schizophrenia, other psychotic disorders, heart failure, and sickle cell anemia. Drug candidates include CRD-733, osoresnontrine, tovinontrine, and PF-04447943. Cannabidiol acts as a PDE9 inhibitor in vitro. As of 2023, no PDE9 inhibitor has been approved. References PDE9 inhibitors
Phosphodiesterase 9 inhibitor
[ "Chemistry" ]
157
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
65,291,864
https://en.wikipedia.org/wiki/Slater%E2%80%93Pauling%20rule
In condensed matter physics, the Slater–Pauling rule states that adding an element to a metal alloy will reduce the alloy's saturation magnetization by an amount proportional to the number of valence electrons outside of the added element's d shell. Conversely, elements with a partially filled d shell will increase the magnetic moment by an amount proportional to the number of missing electrons. Investigated by the physicists John C. Slater and Linus Pauling in the 1930s, the rule is a useful approximation for the magnetic properties of many transition metals. Application The use of the rule depends on carefully defining what it means for an electron to lie outside of the d shell. The electrons outside a d shell are the electrons which have higher energy than the electrons within the d shell. The Madelung rule (incorrectly) suggests that the s shell is filled before the d shell. For example, it predicts that zinc has a configuration of [Ar] 4s2 3d10. However, zinc's 4s electrons actually have more energy than the 3d electrons, putting them outside the d shell. Ordered in terms of energy, the electron configuration of zinc is [Ar] 3d10 4s2 (see: the n+ℓ energy ordering rule). The basic rule given above makes several approximations. One simplification is rounding to the nearest integer. Because the number of electrons in a band is described by an average value, the s and d shells can be filled to non-integer numbers of electrons, allowing the Slater–Pauling rule to give more accurate predictions. While the Slater–Pauling rule has many exceptions, it is often useful as an approximation to more accurate, but more complicated, physical models. Building on further theoretical developments by physicists such as Jacques Friedel, a more widely applicable version of the rule, known as the generalized Slater–Pauling rule, was developed; a worked numerical illustration of the basic rule is sketched below. See also Spin states (d electrons) Ferromagnetism Metallic bonding References Electric and magnetic fields in matter Magnetism Electronic band structures
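As a rough numerical illustration of the rule as stated above, the sketch below (in Python) applies the linear dilution picture to nickel alloys: each solute valence electron outside the d shell fills one of the host's d holes and removes one Bohr magneton from the mean moment. The nickel moment of 0.6 Bohr magnetons per atom and the electron counts are standard textbook values, but the one-line model itself is only the approximation described in this article, not a general-purpose calculator.

```python
# Back-of-the-envelope Slater-Pauling estimate for nickel-based alloys.
# Each solute valence electron outside the d shell fills a host d hole,
# lowering the mean moment by one Bohr magneton per electron.

MU_NI = 0.6  # saturation moment of pure nickel, Bohr magnetons per atom

# Valence electrons outside the filled d shell for a few solutes
ELECTRONS_OUTSIDE_D = {"Cu": 1, "Zn": 2, "Al": 3}

def mean_moment(solute: str, x: float) -> float:
    """Mean moment (Bohr magnetons/atom) of a Ni(1-x)X(x) alloy, linear rule."""
    return max(MU_NI - x * ELECTRONS_OUTSIDE_D[solute], 0.0)

for solute in ELECTRONS_OUTSIDE_D:
    x_crit = MU_NI / ELECTRONS_OUTSIDE_D[solute]  # moment vanishes here
    print(f"Ni-{solute}: {mean_moment(solute, 0.10):.2f} mu_B/atom at x=0.10; "
          f"moment vanishes near x = {x_crit:.2f}")
```

Under these assumptions the moment of the Ni–Cu system vanishes near 60% copper, which is roughly where the experimental Slater–Pauling curve for that alloy crosses zero.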
Slater–Pauling rule
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
409
[ "Electron", "Materials science stubs", "Electric and magnetic fields in matter", "Materials science", "Electronic band structures", "Condensed matter physics", "Condensed matter stubs" ]
65,293,114
https://en.wikipedia.org/wiki/A%20History%20of%20the%20Theories%20of%20Aether%20and%20Electricity
A History of the Theories of Aether and Electricity is any of three books written by British mathematician Sir Edmund Taylor Whittaker FRS FRSE on the history of electromagnetic theory, covering the development of classical electromagnetism, optics, and aether theories. The book's first edition, subtitled from the Age of Descartes to the Close of the Nineteenth Century, was published in 1910 by Longmans, Green. The book covers the history of aether theories and the development of electromagnetic theory up to the 20th century. A second, extended and revised, edition consisting of two volumes was released in the early 1950s by Thomas Nelson, expanding the book's scope to include the first quarter of the 20th century. The first volume, subtitled The Classical Theories, was published in 1951 and served as a revised and updated edition of the first book. The second volume, subtitled The Modern Theories (1900–1926), was published two years later, in 1953, and extended the work to cover the years 1900 to 1926. Notwithstanding a notorious controversy over Whittaker's views on the history of special relativity, covered in volume two of the second edition, the books are considered authoritative references on the history of electricity and magnetism as well as classics in the history of physics. The original book was well received, but it ran out of print by the early 1920s. Whittaker believed that a new edition should include the developments in physics that took place at the turn of the twentieth century and declined to have it reprinted. He wrote the second edition of the book after his retirement and published The Classical Theories in 1951, which also received critical acclaim. In the 1953 second volume, The Modern Theories (1900–1926), Whittaker argued that Henri Poincaré and Hendrik Lorentz developed the theory of special relativity before Albert Einstein, a claim that has been rejected by most historians of science. Though overall reviews of the book were generally positive, due to its role in this relativity priority dispute, it receives far fewer citations than the other volumes, outside of references to the controversy. Background The book was originally written in the period immediately following the publication of Einstein's Annus Mirabilis papers and several years after the early work of Max Planck; it was a transitional period for physics, in which special relativity and old quantum theory were gaining traction. The book serves to document the developments of electricity and magnetism before the quantum revolution and the birth of quantum mechanics. Whittaker was an established mathematician by the time of the book's publication, and he brought unique qualifications to its authorship. As a teacher at Trinity College, beginning after his election as a fellow in 1896, Whittaker gave advanced lectures in spectroscopy, astrophysics, and electro-optics. His first book, Modern Analysis, was initially published in 1902 and remained a standard reference for applied mathematicians. His second major release, Analytical Dynamics, a mathematical physics textbook, was published in 1906 and was, according to Victor Lenzen in 1952, "still the best exposition of the subject on the highest possible level." Whittaker wrote the first edition in his spare time while he was thirty-seven years old, during which time he was serving as Royal Astronomer of Ireland from 1906 onwards. The post's relative ease allowed him to devote time to reading for the project, which he worked on until its release in 1910.
During this same period, Whittaker also published the book The theory of optical instruments in 1907, as well as eight papers, six of them in astronomy. He also continued performing fundamental research in analytical dynamics at Trinity College in Dublin throughout this period. The original version of the book was universally praised and was considered an authoritative reference work in the history of physics, despite being difficult to obtain after the 1920s. When the first edition of the book ran out of print, there was a long delay before the publication of the revised edition in 1951 and 1953. The delay was due, in Whittaker's own words, to his view that "any new issue should describe the origins of relativity and quantum theory, and their development since 1900". The task required more time than his career as a mathematician allowed for, so the project was put on hold until he retired from his professorship at the University of Edinburgh in 1946. From the age of Descartes to the Close of the Nineteenth Century The first edition of the book, written in 1910, gives a detailed account of the aether theories and their development from René Descartes to Hendrik Lorentz and Albert Einstein, including the contributions of Hermann Minkowski. The volume focuses heavily on aether theories, Michael Faraday, and James Clerk Maxwell, devoting one or more chapters to each. It was well received and established Whittaker as a respected historian of science. The book ran out of print and was unavailable for many years before the publication of the second edition, as Whittaker declined to reprint it. Published in the United States prior to 1925, the book is now in the public domain in the United States and has been reprinted by several publishers. Summary The book consists of twelve chapters that begin with a discussion of the theories of aether in the 17th century, focusing heavily on René Descartes, and end with a discussion of electronics and the theories of aether at the close of the 19th century, extensively covering contributions from Isaac Newton, René Descartes, Michael Faraday, James Clerk Maxwell, and J. J. Thomson. The book follows logical sequences of development, so the chapters are somewhat independent; the book is not fully chronological. The book uses vector analysis throughout, and there is an explanatory table at the beginning of the book for those unfamiliar with vector notation. The first chapter covers the 17th-century development of the theory of aether. Beginning with Descartes' conjectures, the chapter focuses on contributions from Christiaan Huygens and Isaac Newton while it highlights the work of Petrus Peregrinus, William Gilbert, Pierre de Fermat, Robert Hooke, Galileo, and Ole Rømer. Chapter 2 covers the initial mathematical development of the magnetic field before the introduction of the vector potential and scalar potential, covering action at a distance. The third chapter covers galvanism, beginning with Luigi Galvani and extending through Georg Ohm's theory of the circuit. Chapter 4 covers the early development of the luminiferous aether theories, stretching from James Bradley to Augustin-Jean Fresnel. The fifth chapter covers developments that mostly took place over the first half of the nineteenth century, with some contributions by Joseph Valentin Boussinesq and Lord Kelvin. Here the luminiferous aether is modelled as an elastic solid. Chapter 6 focuses almost exclusively on the experiments of Michael Faraday.
Chapter seven discusses the mathematicians who worked after Faraday but before James Clerk Maxwell and who adopted views of action at a distance over Faraday's lines of force. The chapter includes a discussion of the contributions made by Franz Neumann, Wilhelm Eduard Weber, Bernhard Riemann, James Prescott Joule, Hermann von Helmholtz, Lord Kelvin, Gustav Kirchhoff, and Jean Peltier. Chapter 8 focuses on Maxwell's contributions to electromagnetism, and Chapter 9 details further developments to the models of aether made after Maxwell's publications, covering contributions by Lord Kelvin, Carl Anton Bjerknes, James MacCullagh, Bernhard Riemann, George Francis FitzGerald, and William Mitchinson Hicks. The tenth chapter covers physicists following in Maxwell's tracks in the late nineteenth century, with contributions from Helmholtz, FitzGerald, Weber, Hendrik Lorentz, H. A. Rowland, J. J. Thomson, Oliver Heaviside, John Henry Poynting, Heinrich Hertz, and John Kerr. Chapter 11 covers conduction in solids and gases, extending from Faraday's work, covered in chapter six, to that of J. J. Thomson, while the final chapter gives an account of the theories of aether in the late 1800s, ending with Owen Willans Richardson's work at the turn of the century. Reviews The book received several reviews in 1911, including one by the physicist C. M. Sparrow. Sparrow wrote that the book lives up to the legacy left by Whittaker's A Course in Modern Analysis and A Treatise on the Analytical Dynamics of Particles and Rigid Bodies. He then noted several expandable areas of the book before going on to state: "That some slight errors or inaccuracies should creep into a book of this nature is to be expected, but the one or two we have observed are of too trivial a character to deserve mention, and affect in no way the general excellence of the work. The book is attractively printed and remarkably free from misprints." Another 1911 review of the book deemed it an "excellent volume" and predicted that it "will be welcomed by all physicists as a valuable contribution." A third 1911 review of the book praised it for its careful depiction of the developments, asserting "the treatment of the more important advances, without being [exhaustive], is sufficiently adequate to define them clearly in their historical setting". Among other reviewers, E. B. Wilson, in a 1913 review, noted one theory that Whittaker overlooked before going on to say: "To go into further detail with regard[s] to the contents of this History, which should and will be widely read, is needless. Suffice it to say that a careful study of all of the work twice, and of many portions of it several times, leaves but one resolution, namely, to continue the study indefinitely; for there is always something new to learn where so much material is so well presented." A second 1913 review, by Herbert Hall Turner, stated that the "book is probably the greatest act of piety towards the past which has been produced in this generation" and that it "would seem advisable to keep the book on one of the easily accessible shelves of the study, where it may be referred to constantly." The book also received a positive review in Italian in 1914. Several reviewers of the first volume of the second edition praised the original edition in their reviews. A. M. Tyndall wrote in 1951 that he remembered how pleasurable and enlightening reading this edition was forty-one years prior.
Carl Eckart wrote in 1952 that the book "has been the authoritative reference work for the historical aspects of the theories of optics, electromagnetism, and the [a]ether." In 1952, Victor Lenzen wrote that the book was "without rival in its field." In his 1952 review, W. H. McCrea wrote that it "gave a superbly well-knit account of its subject". Extended and revised edition In 1951 (Vol. 1) and 1953 (Vol. 2), Whittaker published an extended and revised edition of his book in two volumes. The first volume is a revision of the original 1910 book, while the second volume, published two years later, contains an extension of the history into the twentieth century, covering the years 1900 through 1926. The books are considered authoritative texts on the developments of classical electromagnetism and continue to be cited by widely adopted textbooks on the subject. A third volume, covering the years 1925 to 1950, was promised in the second edition but was never published, as Whittaker died in 1956. The two volumes provide an account of the historical development of the fundamental theories of physics, and they are said to "contain the distilled essence of their author's reading and study over a period of more than half a century." The Classical Theories The first volume, subtitled The Classical Theories, was initially published in 1951 by Thomas Nelson and Sons. The book is a revision of the original 1910 book, with an added chapter on classical radiation theory and some new material, but it remains focused on pre-1900 physics. The book has a similar scope to the first edition, though occasionally modified toward the beginning, with more extensive edits towards the end. A reviewer noted that about 80 per cent of the book is a reproduction of the original edition, with revisions throughout accounting for developments over the first forty years of the 20th century. The work covers the development of optics, electricity, and magnetism, with some side-plots in the history of thermodynamics and gravitation, over three centuries, through the close of the nineteenth century. Overview (vol. 1) Chapter one of the first volume was renamed the theory of the aether to the death of Newton after being mostly rewritten, though it still focuses on René Descartes, Isaac Newton, Pierre de Fermat, Robert Hooke, and Christiaan Huygens, among others. The chapter begins with a discussion of physics from the initial formulations of space by René Descartes, which evolved into the aether theories, through the death of Newton, witnessing the first attempts at a wave theory of light by Hooke and Huygens. The new volume traces the early development of the aether theories back to the time of Aristotle. While there are many new paragraphs, references, and expanded footnotes throughout chapters two through eleven, much of the content remains the same as the first edition. Chapters two and three, as in the first edition, initiate the subject of electricity and magnetism, including galvanism. Chapter two traces the history of electrostatics and magnetostatics from early developments through George Green's work on potential theory and his introduction of the vector potential and scalar potential. Chapter three, on galvanism, discusses the history of electric current, centering on Galvani, Ohm, and Ampère. The fourth chapter, on the luminiferous medium, includes the discoveries of optical aberrations, polarization, and interference.
This is the period of transition from when Newton's corpuscular theory of light was widely held to the establishment of the wave theory following the experiments of Fresnel and Young. The fifth chapter records the development of theories modeling the aether as an elastic solid. Chapters six through eight present the development of electromagnetism as a line from Faraday to Maxwell, including the development of theories of electricity and magnetism modelled on Newtonian mechanics. Chapter six was largely expanded from its 1910 counterpart. Chapters seven and eight were extensively rewritten with new material throughout. Chapter nine, on models of the aether, discusses, among others, contributions of Maxwell, William Thomson, James MacCullagh, Riemann, George Francis FitzGerald, and Hermann von Helmholtz, the preeminent physicists of the nineteenth century. The final three chapters pave the way for twentieth-century developments, to be described in the second volume. In the new edition, chapter eleven was renamed to conduction in solutions and gases, from Faraday to the discovery of the electron. Chapter twelve, titled classical radiation-theory, is completely new and focuses on the empirical development of spectral series as well as the historical development of black body radiation physics. The final chapter, chapter thirteen, was renamed to classical theory in the age of Lorentz and contains new material, while omitting several details, saving them for the second volume. The chapter largely focuses on electric and thermal conduction and the Lorentz theory of electrons. The table of contents has been praised as being "extremely useful" for breaking down the chapters into sections that highlight the key developments. Reception (vol. 1) Arthur Mannering Tyndall, William Hunter McCrea, and Julius Miller reviewed the book upon its release in 1951. Arthur Tyndall noted his preference for the setup of the new edition and wrote that "if there are any mistakes or omissions in it, the reviewer was too immersed in the atmosphere of the book to notice them". Tyndall recommended the book for teachers who are looking to develop students' interest in the historical background of optics and electricity, as he believes a lot of the content can be directly incorporated into lectures and that students can be advised to read parts of the book in their undergraduate studies. In a second 1951 review, William McCrea stated that Whittaker had succeeded, "possibly more than any other historian of science", in imparting "a comprehensive and authentic impression of that wherein the great pioneers were truly great", allowing the reader to "see their work, with its lack of precedence, against the background of strangely assorted experimental data and of contemporary conflicting general physical concepts" and "to see how they yet contributed each his share to what we are bound to recognize as permanent progress". McCrea praised the book by saying "[n]o better factual account exists to show how hardly won this progress has been." In a second review, published in 1952, McCrea stated "[o]ut of the riches of his mathematical and historical scholarship, Sir Edmund Whittaker has given us a very great book." In his review, Julius Miller claimed that the book was beyond review, saying it sufficed to note that "it is the work of a foremost scholar of this century and the last—a physicist, philosopher, mathematician."
Miller noted that while it is primarily a history book, it is also "philosophy, physics, and mathematics of the first temper" and that it gives an "elegant penetrating examination of The Classical Theories". He also noted that although it is "heavy reading", the work is "delightfully clear" and that the "documentation is astonishing". Among others, Carl Eckart, Victor Lenzen, John Synge, Stephen Toulmin, Edwin C. Kemble, and I. Bernard Cohen reviewed the book in 1952. Carl Eckart opened his review by praising the first edition of the book and writing: "This second edition will almost certainly continue to occupy the same position for many years to come." Eckart noted that the book was ambitious, but that it was carried out with "unusual success", using the same clarity and elegance which had made Whittaker famous. He went on to say that the book is a "true history of ideas" which has been and will continue as a "most influential book". In his review, Victor Lenzen stated that he "knows of no work on physical theories which is comparable to the present one in the analytical and critical discussion of the mathematical formulation of the theories." His review closes by stating that the book is a testament to the "boundless intellectual curiosity" which drives humankind to understand the universe in which we live. In a third 1952 review, John Synge noted that the book is "backed by a vast erudition", but is not overpowering and that "the style is sprightly and the author is singularly successful in putting himself and the reader in the place of each physicist". Synge goes on to say that Whittaker, with great skill, was able to "mingle the atmosphere of contemporary confusion which always accompanies scientific progress with an appreciation of what is actually going on, as viewed in light of later knowledge." Stephen Toulmin, in his review, refers to Whittaker's original edition as a standard reference, but noted that a supplement was almost immediately required to cover later developments. Toulmin went on to state that physicists in the first half of the twentieth century had a difficult time "keeping afloat on the tide of new theories and discoveries" and that Whittaker's earlier history had been "quite inaccessible", and so "we are lucky in having Professor Whittaker once more as our guide." Edwin Kemble, in a fifth 1952 review, stated that the book was "in a class by itself" and summarized it as a "high-level account" of the steps in the development of the classical theory of electromagnetism that is "well documented and extraordinarily comprehensive." In his review, I. Bernard Cohen wrote that he knew "of no other history of electricity which is as sound as Whittaker's", though he noted several improvements that he wished Whittaker had made in updating the 1910 classic. Analysis (vol. 1) Arthur Tyndall, in his 1951 review, stated the book is "rich in experimental fact", with comparatively few mathematical sections, notable exceptions being those on Lorentz and Maxwell, saying that "this new volume is not a heavy treatise in theoretical physics, as perhaps its name might suggest". William McCrea noted that the book is "a history of theories", but also provides "very clear statements of the experimental discoveries at all stages."
He goes on to note that the book focuses on the developments of the aether theories and electricity, which McCrea states are the most fundamental parts of physics, but is also informative in other relevant areas of physics, such as elasticity and thermodynamics. Some reviewers commented on the new chapter on classical radiation theory, including Tyndall, who notes that the material was barely covered in the first edition and was a natural addition that helps pave the way for the second volume, and Carl Eckart, who says that the history of spectra and thermal radiation is "given its proper place in the historical perspective." Several reviewers criticized the book for certain omissions, including Eckart, who criticized Whittaker for leaving out Euclid and Lobatchewsky and points to this, and to the fact that Whittaker continued to write about the aether from a nineteenth-century perspective, as defects he would have ignored in a lesser volume. Victor Lenzen states that he disagrees with Whittaker on a point of emphasis, especially as it relates to not mentioning Joseph Henry outside a single footnote. He also mentions Whittaker's distinction between Platonic and Aristotelian philosophies, where he says Whittaker sides with Aristotle's empirical methods, while he believes that Plato was more prophetic of the future of mathematical methods in science. The Modern Theories (1900–1926) The second volume, subtitled The Modern Theories (1900–1926), was originally published in 1953 by Thomas Nelson and Sons. The book is the continuation of Whittaker's survey of the history of physics into the period 1900–1926 and describes the revolution in physics over the first quarter of the 20th century. The major historical developments covered in the book include the special theory of relativity, old quantum theory, matrix mechanics, and Schrödinger's equation and its use in quantum mechanics, referred to as "wave mechanics". Chapter two of the book is highly controversial and constitutes Whittaker's major role in the relativity priority dispute. Whittaker's view on the history of special relativity is that Lorentz and Poincaré had successfully developed the theory before Einstein and that priority belonged to them. Despite Whittaker's objection, scientific consensus remains strongly in favor of Einstein's priority on the theory, with authors noting that while the theories of Poincaré and Lorentz are mathematically and experimentally equivalent to Einstein's theory, they are not based on the relativistic postulates and do not constitute what is now known as Einstein's relativity. While parts of the book have received notable praise, due to its role in the historical controversy the book overall has been said to fall short of the standards of the others, and it has historically received many fewer citations. Overview (vol. 2) The first chapter, the age of Rutherford, discusses the state of empirical physics at the turn of the twentieth century. Chapter two, on the origins of special relativity, is highly controversial and forms the basis of Whittaker's role in the relativity priority dispute. In this chapter, as the title suggests, Whittaker gives priority for special relativity to Hendrik Lorentz and Henri Poincaré as opposed to the generally accepted crediting of Albert Einstein, a point for which Whittaker has been rebuked by many scholars. Chapters three and four detail the developments of old quantum theory and deal mostly with "complicated experimental facts and their preliminary explanations".
Chapter three covers early developments in old quantum theory, discussing Max Planck's contributions to physics and touching on Einstein and Arnold Sommerfeld. Chapter four, on spectroscopy in old quantum theory, discusses many of Niels Bohr's precursors, including Arthur W. Conway, Penry Vaughan Bevan, John William Nicholson, and Niels Bjerrum. Chapter five switches to gravitation, discussing the history of cosmology and the general theory of relativity. Chapter six returns to quantum theory and describes the connection between older and more modern concepts in physics, discussing phenomena and theories such as Louis de Broglie's matter waves, Bose statistics, and Fermi statistics. The final two chapters give an account of the birth of quantum mechanics. Matrix mechanics is discussed in chapter eight, including the Heisenberg picture and the introduction of physical operators. Erwin Schrödinger, the Schrödinger picture, and Schrödinger's equation are all discussed in the final chapter. Reception (vol. 2) In a 1954 book review of the second volume, Max Born praised both volumes of the expanded and revised second edition, saying "[t]his second volume is a magnificent work, excellent not only through a brilliant style and clarity of expression, but also through an incredible scholarship and erudition" and that "this work makes us look forward keenly to the promised third volume". Born believes that a book like this one is a "most essential contribution to our literature and should be read by every student of physics and of all sciences connected with physics, including scientific history and philosophy." Born singles out chapters three and four on the development of old quantum theory, calling them "the most amazing feats of learning, insight, and discriminations". He also singles out chapter five, on gravitation, as being "perfect" due to Whittaker's own scholarship in the field, going on to say it is "the most readable and elucidating short presentation of general relativity and cosmology". In his 1956 book Physics in My Generation, Born goes on to call it an "excellent book" and talks about using the first edition as a reference when he was a student. Freeman Dyson, in a 1954 review, said the second volume is "more limited and professional in its scope" than the first volume, giving a "clear, logical account of the sequence of events in the intellectual struggles which led up to relativity and quantum mechanics." He calls the volume a "mathematical textbook" on the theory of relativity and quantum mechanics, emphasizing a historical approach, as it explains all the necessary mathematics. He states that "Whittaker's two volumes reflect faithfully the different climates of science in the two periods they cover" and goes on to say that although he is unable to comment on the book's historical accuracy, he thinks "it is likely that this is the most scholarly and generally authoritative history of its period that we shall ever get." In the opening remarks of his 30 November 1954 address to the Royal Society, president Edgar Adrian stated that Whittaker is perhaps the most well-known British mathematician of the time, due to his "numerous, varied, and important contributions" and the offices which he had held, but that of all his works, this History is probably the most important; he noted that Whittaker's books on analytical dynamics and modern analysis have been widely influential both in the UK and internationally.
He singles out the then-recently published second volume as a "great work" which gives "a critical appreciation of the development of physical theory up to the year 1925." He goes on to say that all of Whittaker's writings showcase his "powers of arrangement and exposition", which are of "a most unusual order". He closes by saying that the "astonishing quantity and quality of his work is probably unparalleled in modern mathematics and it is most appropriate that the Royal Society should confer on Whittaker its most distinguished award", referring to Whittaker's receipt of the Copley Medal in 1954. In a 1954 review, Rolf Hagedorn states that "One need read only a few pages of the book to sense the thoroughness and conscientiousness of the whole work". He states the book is an invaluable reference and that it is "essential for any library". He goes on to say that Whittaker "brings the reader to real understanding by a coherent mathematical description enabling him to follow the development step by step" and that the "clarity and didactic construction make it a pleasure to follow". In another review, William Fuller Brown Jr. notes that the book is a history of published papers rather than a history of the scientists who published them, but goes on to say that the book is illuminating and the reader "will get from it a better appreciation of the process of scientific discovery". Among others, Science posted a review of the book that opened with: "The present volume is not, as the title would suggest, merely a 26-year extension of the work originally written by Sir Edmund Whittaker under the same title in 1910. It is, rather, a thorough and authoritative chronicle of the development of theoretical physics in the period 1900–1926, including atomic structure, special relativity, [old] quantum theory, general relativity, matrix mechanics, and wave mechanics". A review by P. W. Bridgman in 1956 says "The reader's first impression at this formidable treatise, I believe, will almost invariably be one of stupefaction at the industry and versatility of the author, who has been able to assimilate and critically review so much." He goes on to say that older physicists would also "find it an epitome" of their "own experience", and that it would recount for them "many critical situations". Analysis (vol. 2) In a September 1953 letter to Albert Einstein published in 1971, Max Born writes that, other than the relativity priority issues, it was "particularly unpleasant" for him that Whittaker "had woven all sorts of personal information into his account of quantum mechanics" while Born's role in the development was "extolled". He states in the 1971 commentary, however, that the book is "a brilliant and historic philosophical work" which he found "extremely useful" in his earlier years. In a 1954 book review, Born praises the book for its "extremely careful" record of "obscure or forgotten papers which contain some essential new idea though perhaps in an imperfect form", and points out that the last two chapters of the book give a "detailed and lively account of the birth of quantum mechanics in both of its forms, matrix mechanics and wave mechanics." He also praises Whittaker for setting aside his philosophical interests, saying "Whittaker the conscientious historian of science, has the upper hand over Whittaker the metaphysician, and it is just this feature which makes the book a safe guide through the tangle of events".
Born states that the title of the second chapter, or "the historical view expressed by it", is the only point where Born does not share Whittaker's opinion. Born also points out that the book goes beyond what ordinary textbooks can do, which he believes offer students "the shortest and simplest way to knowledge and understanding", and "are in cases not only unhistorical but a distortion of history". Freeman Dyson, in his 1954 review, remarks that the second volume has, by necessity, a "very different style from the first" due to the rapid mathematical development in the early 1900s. He summarizes the first volume as a description of "historical accidents", which resulted in changes in the way scientists thought about the problems, with discussions of the connections between physics and the more general philosophical climate of the times, while saying the second volume covers the history of physics when the progress was determined by the "speed with which observations could be understood and expressed in exact mathematical terms". In his 1954 Nature review, Rolf Hagedorn notes that readers should be familiar with differential and integral calculus and linear algebra, saying the book "is not written for the layman interested in the history of science, and certainly does not belong to the category of popular science books." He praises the book for justifying each statement with "at least one quotation", stating he estimates the total to be greater than one thousand. He goes on to say that "it is inconceivable that an author with such a profound knowledge of his sources could have overlooked any important fact." He also acknowledges that the book is sometimes hard to read due to the "condensed style" as well as "the fact that he often employs the nomenclature used in original work instead of that which would be used to-day." In his 1956 book review, P. W. Bridgman states that it is "doubtless" that the most controversial part of the book is in giving priority to Lorentz and Poincaré for special relativity, but chooses not to defend the priority of Einstein, referring readers to Max Born's responses. He does state that it "is to be remembered, however, that Whittaker was in the thick of things during the development of the theory, and there is much forgotten history". He praises Whittaker for highlighting the "little known pre-history" of the mass-energy relation. Bridgman also notes that the volume does not discuss whether the "aether" should be considered superfluous in light of the special and general theories of relativity, but notes that the preface to the original edition argues for keeping the word aether to describe the quantum vacuum. In relation to the early development of general relativity and the equivalence principle, Roberto Torretti, in his 1983 book, criticized Whittaker for attributing to Max Planck the implication that "all energy must gravitate" even though Planck's 1907 paper was "saying the opposite", according to Torretti. Special relativity priority dispute In the second volume, a chapter titled "The Relativity Theory of Poincaré and Lorentz" credits Henri Poincaré and Hendrik Lorentz for developing special relativity, and especially alluded to Lorentz's 1904 paper (dated by Whittaker as 1903), Poincaré's St. Louis speech (The Principles of Mathematical Physics) of September 1904, and Poincaré's June 1905 paper.
He attributed only little importance to Einstein's special relativity paper, which he said "set forth the relativity theory of Poincaré and Lorentz with some amplifications, and which attracted much attention". Roberto Torretti states, in his 1983 book Relativity and Geometry, "Whittaker's views on the origin of special relativity have been rejected by the great majority of scholars", citing Max Born, Gerald Holton, Charles Scribner, Stanley Goldberg, Elie Zahar, Tetu Hirosige, Kenneth F. Schaffner, and Arthur I. Miller. He notes that G. H. Keswani sides with Whittaker, though "he somewhat tempers the latter's view". Miller, in his 1981 book, writes that the "lack of historic credibility" of the second chapter had been "demonstrated effectively" by Holton's 1960 article on the origins of special relativity. Max Born rebuttals Born wrote a letter to Einstein in September 1953 where he explained to Einstein that Whittaker, a friend of his, was publishing the second volume, which is "peculiar in that Lorentz and Poincare are credited" with the development of special relativity while Einstein's papers are treated as "less important". He goes on to tell Einstein that he had done all he could over the previous three years to "dissuade Whittaker from carrying out his plan", mentioning that Whittaker "cherished" the idea and "loved to talk" about it. He told Einstein that Whittaker insists that all the important features were developed by Poincaré while Lorentz "quite plainly had the physical interpretation". Born said this annoyed him, as Whittaker is a "great authority in the English speaking countries", and he was worried that "many people are going to believe him". In an October response, Einstein reassures Born that there is nothing to worry about, saying "Don't lose any sleep over your friend's book. Everybody does what he considers right or, in deterministic terms, what he has to do. If he manages to convince others, that is their own affair." He states that he does not find it sensible to defend the results of his research as somehow belonging to him. In the 1971 commentary on this response, Born says that Einstein's response simply proves his "utter indifference to fame and glory". In his 1954 book review, Born states that "there is much to be said in favour of Whittaker’s judgment. From the mathematical standpoint the Lorentz transformations contain the whole of special relativity, and there seems to be no doubt that Poincare was, perhaps a little ahead of Einstein, aware of most of the important physical consequences". He goes on, however, to side with the "general use in naming relativity after Einstein", though "without disregarding the great contributions of Lorentz and Poincare." Born expands on these thoughts in his 1956 book, where he points out a response from Einstein to Carl Seelig in which Einstein was asked about the scientific literature which most influenced his special theory of relativity. Einstein points out that he knew only the work by Lorentz from the 1890s. Born says this "makes the situation perfectly clear." He points out that the 1905 papers on relativity and the light quantum were connected, and the research was independent of Lorentz's and Poincaré's later work. He goes on to highlight Einstein's "audacity" in "challenging Isaac Newton’s established philosophy, the traditional concepts of space and time."
This, for Born, "distinguishes Einstein’s work from his predecessors and gives us the right to speak of Einstein’s theory of relativity, in spite of Whittaker’s different opinion." Gerald Holton rebuttal In his explicit rebuttal of 1960, Holton notes that Einstein's paper "was indeed one of a number of contributions by many different authors", but goes on to point out that Whittaker's assessment was lacking and plainly wrong in places. He notes that crediting Lorentz with a 1903 rather than a 1904 paper was "not merely a mistake", but rather is at least a "symbolic mistake" that is "symbolic of the way a biographer's preconceptions interact with his material." He goes on to say that Whittaker insinuated that Einstein's work was based on Lorentz's despite the statements by Einstein and his colleagues to the contrary, and that there were multiple pieces of evidence in the 1905 paper that imply Einstein did not know of Lorentz's later work, including the fact that Einstein derived the Lorentz transformation while Lorentz assumed it, and that Einstein was acute in giving credit to others whose work influenced his own. He also points out a key difference between the papers, in which Einstein argues that the "laws of electrodynamics and optics" were "valid in all frames of reference" to the order of v/c, whereas Lorentz claimed, as a "key point" in his 1904 paper, "to have extended the theory to the second order in v/c". He further notes that Planck had pointed out in 1906 that Einstein's expression for the mass of charged particles was "far less suitable than Lorentz's". Holton goes on to note the "equally significant fact" that Lorentz's paper was "not on the special relativity as we understand the term since Einstein", as his "fundamental assumptions are not relativistic". He goes on to say that Lorentz never claimed credit for relativity and in fact referred to it as Einstein's relativity. He notes finally that Lorentz's formulation was valid only for small v/c, but the point of Einstein's theory was general validity. Holton has written other works on the history of special relativity as well, defending Einstein's priority. Rebuttals from other notable scholars Roberto Torretti, in his 1983 book, notes the theory set out by Poincaré and Lorentz was both "experimentally indistinguishable from and mathematically equivalent to" Einstein's On the Electrodynamics of Moving Bodies, but their philosophy is very different from the special relativity of Einstein. Torretti notes that their theory, in stark contrast to Einstein's, relies on the assumption of an aether which interacted with systems moving across it, affecting clocks and shrinking bodies. He goes on to note that it is doubtless that Einstein could have drawn inspiration from the works of Poincaré. He points out that Poincaré's theory was not universally applicable like Einstein's and that it does not rest on a modification of the notions of space and time. He also mentions that Lorentz regularly referred to the theory as Einstein's, but that Poincaré, who referred to the theory as Lorentz's, never truly became a relativist. Torretti attributes the failure of Poincaré's approach to catch on to his notorious conventionalism, and to the fact that he may have been a little too proud to admit that "he had lost the glory of founding 20th-century physics to a young Swiss patent clerk."
Charles Scribner, in his 1984 article Henri Poincaré and the Principle of Relativity, stated his belief that Whittaker's view on the matter "fails to do justice to the available historical evidence" and notes that it may also "create obstacles for students". He continues, saying "Einstein played a unique role in establishing the universal validity of the principle of relativity and in revealing and capitalizing on its radical implications." He notes several of the points raised by Holton in his 1960 rebuttal, including the discrepancy in powers of v/c and the fact that Poincaré never truly accepted the theory in the manner Einstein had put forward. The controversy is mentioned in other books on the history of science as well. In his book Subtle is the Lord, Abraham Pais wrote a scathing review of Whittaker, writing that the treatment of special relativity "shows how well the author's lack of physical insight matches his ignorance of the literature", phrasing that was rebuked by at least one notable reviewer as "scurrilous" and "lamentable". Somewhat paradoxically, he also states that both he and his colleagues believe Whittaker's original edition "is a masterpiece". He further notes that he would not have felt the need to comment if the book had not "raised questions in many minds about the priorities in the discovery of this theory". A more sympathetic review came from Clifford Truesdell, who, in his 1984 book An Idiot's Fugitive Essays on Science, wrote that Whittaker "aroused colossal antagonism by trying to set the record straight on the basis of print and record rather than recollection and folklore and professional propaganda,…" Long-term impact In one of Whittaker's 1958 obituaries, William McCrea remarked that the books are achievements so remarkable that "as time passes, the risk will be of all Whittaker's other great achievements tending to be overlooked in comparison." He predicts that future readers would "have difficulty" in acknowledging it was only the result of "a few years at both ends of a career of the highest distinction in other pursuits." In a 1956 obituary, Alexander Aitken calls the book series Whittaker's "magnum opus", amid a career of distinction, and expresses regret that Whittaker was unable to complete the promised third volume. Other obituaries include one that claims that the two volumes of the second edition "form Whittaker's magnum opus", amid many other distinctions, including four standard works other than the History. In a fourth obituary, the work is said to be "brilliant" and a "colossal undertaking involving wide reading and accurate understanding". The book was included in a curated 1958 list of "important books on science" in a Science article by Ivy Kellerman Reed and Alexander Gode, where the volumes are said to be the "first exhaustive history of the classical and modern theories of aether and electricity". In 1968, John L. Heilbron stated that the "great value" of Whittaker's second volume on quantum mechanics lies in its ability to connect developments in quantum mechanics with those in other fields as well as its "rich citations", going on to recommend it and several other books on the history of science to readers. John David Jackson recommends both volumes to his readers in the preface of the first edition of the famous graduate textbook Classical Electrodynamics (1962), which has been reprinted in all later editions, including the standard third edition of 1999.
Jackson gives a brief account of the history of the mathematical development of electrodynamics and says the "story of the development of our understanding of electricity and magnetism is, of course, much longer and richer than the mention of a few names from one century would indicate." He goes on to tell his readers to consult both "authoritative" volumes for a "detailed account of the fascinating history". In a 1988 Isis review of a combined reprint of the second edition, including both the first and second volumes bound together, published in New York by the American Institute of Physics and Tomash Publishers in 1981, science historian Bruce J. Hunt says that the books stand up "remarkably well" to time and that it is unlikely that others would try to write such books in modern times, as the "encyclopedic sweep is too broad" and the "purely internalist focus too narrow" for recent trends, though he says "we can be glad that someone did write it" and that it is, perhaps, fortunate that Whittaker did so such a long time ago. He goes on to state his appreciation for the new reprint. In contrast to the first volume on The Classical Theories, Hunt notes that the second volume, The Modern Theories, is "rarely cited today, except in connection with this controversy" and that it has had "relatively little influence" on later publications in the history of modern physics. He goes on to say the first volume "continues to be a standard reference". He says that the book's greatest weakness is that it lacks a "real historical sense", that it misses wider contexts and is therefore incomplete, as it focuses on theories rather than people. Hunt closes by noting that the book is, in many ways, a "relic of a past age", but remains "very useful" when "approached critically", and praises Whittaker as "one of the last and most thoughtful of the great Victorian mathematical physicists." In a 2003 review of a book by the French science historian Olivier Darrigol, L. Pearce Williams compares the newer book with Whittaker's second edition, which he calls "old but still valuable". In 2007 Stephen G. Brush included the second volume of the second edition in a curated list of books on the history of light-quantum developments, such as black body radiation. Other scholars have singled out the original volume, including Darrigol, who in a 2010 article highlighted the work as an authoritative reference, and Abraham Pais, who states in his 1982 book on Einstein that both he and his colleagues believe the book to be a "masterpiece". Release details First edition The book was originally published in 1910 by Longmans, Green, and Co. in London, New York, Bombay, and Calcutta, and by Hodges, Figgis, and Co. in Dublin. It was out of print by the 1920s and was notoriously difficult to obtain thereafter. It was part of the Dublin University Press and Landmarks of Science series of books. As it was registered with the U.S. copyright office prior to 1925, the book is now in the public domain in the United States; it can be found on the Internet Archive free of charge and is free to be reprinted. Second edition Original printing of the first volume:— Original printing of the second volume:— First reprinting of the edition, combining both volumes as one:— Reprint by the American Institute of Physics and Tomash Publishing:— Reprint by Dover Publications:— See also The Maxwellians:—Book by Bruce J. 
Hunt detailing the development of electromagnetism in the years after the publication of Maxwell's Treatise Timeline of electromagnetism and classical optics:—Dynamic list of major developments in the history of electromagnetism and history of optics History of chemistry:—The history of chemistry from ancient through modern times History of electrical engineering:—Details the development of electrical engineering :–Succinct expression of the principle of relativity with a classical geometric notion References Works cited Relativity priority Notable reviews Further reading External links 1910 non-fiction books 1951 non-fiction books 1953 non-fiction books Aether theories Books about the history of physics History of electrical engineering History of optics History of thermodynamics Longman books Thomas Nelson (publisher) books Reference works Books by E. T. Whittaker
A History of the Theories of Aether and Electricity
[ "Physics", "Chemistry", "Engineering" ]
10,217
[ "Electrical engineering", "History of thermodynamics", "Thermodynamics", "History of electrical engineering" ]
65,297,669
https://en.wikipedia.org/wiki/Plant%20cryopreservation
Plant cryopreservation is a genetic resource conservation strategy that allows plant material, such as seeds, pollen, shoot tips or dormant buds, to be stored indefinitely in liquid nitrogen. After thawing, these genetic resources can be regenerated into plants and used in the field. While this cryopreservation conservation strategy can be used on all plants, it is often only used under certain circumstances: 1) crops with recalcitrant seeds, e.g. avocado, coconut; 2) seedless crops such as cultivated banana and plantains; or 3) crops that are clonally propagated, such as cassava, potato, garlic and sweet potato. History The history of plant cryopreservation started in 1965, when Hirai was studying the biological activity that occurred when biological samples were frozen. Three years later came the first successful attempt at cryopreserving callus cells. In the following years, new methods of cryopreservation were developed, such as direct immersion, slow freezing and vitrification, and they were applied to more and more plant species and plant tissues. Methods Direct immersion. This is the immersion of plant material directly in liquid nitrogen, either immediately or after desiccation. This is often done with (orthodox) seeds, which already have a low moisture content, or with pollen. This method cannot be used with tissues that contain a lot of water or are sensitive to dehydration. Slow freezing. This method relies on the mechanism of freeze dehydration to draw water out of the cells and thus prevent ice formation inside the cell. Vitrification. By freezing at an ultra-fast rate and using osmotic dehydration, the water that is still present in the cell is unable to form crystals and becomes part of a glass-like, or vitrified, solution. This method can be further split into different variants, e.g. droplet vitrification, encapsulation dehydration and plate vitrification. These techniques have been used successfully with several economically important crops, such as chrysanthemum or bleeding heart. Hurdles and limitations Aside from the challenges involved with cryopreservation in general, an important hurdle in developing cryopreservation protocols for the storage of plant germplasm is that plants within a species can differ in their tolerance to cryopreservation. This difference seems to be linked to the drought resistance of the different cultivars within the species. Even within a single plant there can be noticeable differences depending on the tissue that is used, as both structure and composition play an important role during cryopreservation. Organizations relying on plant cryopreservation Alliance of Bioversity International and CIAT International Potato Center Huntington Garden Rural Development Administration of Korea Leibniz Institute of Plant Genetics and Crop Plant Research References Cryopreservation Plant conservation
Plant cryopreservation
[ "Chemistry" ]
584
[ "Cryopreservation", "Cryobiology" ]
66,531,933
https://en.wikipedia.org/wiki/Infrastructure%20Transitions%20Research%20Consortium
The UK Infrastructure Transitions Research Consortium (ITRC) was established in January 2011. The ITRC provides data and modelling to help governments, policymakers and other stakeholders in infrastructure make more sustainable and resilient infrastructure decisions. It is a collaboration between seven universities and more than 55 partners from infrastructure policy and practice. During its first research programme, running from 2011 to 2016, ITRC developed the world's first national infrastructure system-of-systems model, known as NISMOD (National Infrastructure Systems Model), which has been used to analyse long-term investment strategies for energy, transport, digital communications, water, waste water and solid waste. This work is described in the book The Future of National Infrastructure, an introduction to the NISMOD models and tools describing their application to inform infrastructure planning in Britain. The second phase of this programme (2016–2021) is called ITRC-MISTRAL, where MISTRAL stands for Multi-Scale Infrastructure Systems Analytics. MISTRAL allowed ITRC to extend its national-scale modelling to simulate infrastructure at city, regional and global scales. Based in the University of Oxford's Environmental Change Institute, ITRC is led by Director Jim Hall, who is also Professor of Environmental Risks at the University of Oxford. Funding: The ITRC is funded by two programme grants from the UK Engineering and Physical Sciences Research Council (EPSRC). The 2011–2016 ITRC programme grant was £4.7m and the 2016–2021 grant, for ITRC-MISTRAL, is £5.4m. Consortium: The seven universities making up the ITRC consortium are: University of Southampton, University of Oxford, Newcastle University, Cardiff University, University of Cambridge, University of Leeds and University of Sussex. Partners: ITRC's partners are drawn from across the infrastructure sector. They include infrastructure investors such as the World Bank, consultancies including Ordnance Survey and KPMG, providers such as Siemens, High Speed 2 (HS2), Network Rail and National Grid, policy-makers (e.g. the Environment Agency) and regulatory bodies (e.g. Ofcom). References External links Official website Climate action plans Engineering and Physical Sciences Research Council Infrastructure Infrastructure by country Sustainability Technology consortia
Infrastructure Transitions Research Consortium
[ "Engineering" ]
448
[ "Construction", "Infrastructure" ]
66,540,523
https://en.wikipedia.org/wiki/Schamel%20equation
The Schamel equation (S-equation) is a nonlinear partial differential equation of first order in time and third order in space. Similar to a Korteweg–De Vries equation (KdV), it describes the development of a localized, coherent wave structure that propagates in a nonlinear dispersive medium. It was first derived in 1973 by Hans Schamel to describe the effects of electron trapping in the trough of the potential of a solitary electrostatic wave structure travelling with ion acoustic speed in a two-component plasma. It now applies to various localized pulse dynamics such as: electron and ion holes or phase space vortices in collision-free plasmas such as space plasmas, axisymmetric pulse propagation in physically stiffened nonlinear cylindrical shells, "soliton" propagation in nonlinear transmission lines or in fiber optics and laser physics. The equation The Schamel equation is φ_t + (1 + b√φ)φ_x + φ_xxx = 0, where φ_t stands for ∂φ/∂t. In the case of ion-acoustic solitary waves, the parameter b reflects the effect of electrons trapped in the trough of the electrostatic potential φ. It is given by b = (1 − β)/√π, where β, the trapping parameter, reflects the status of the trapped electrons: β = 0 represents a flat-topped stationary trapped electron distribution, β < 0 a dip or depression. It holds 0 ≤ φ ≤ ψ ≪ 1, where ψ is the wave amplitude. All quantities are normalized: the potential energy by the electron thermal energy, the velocity by the ion sound speed, time by the inverse ion plasma frequency and space by the electron Debye length. Note that for a KdV equation b√φ is replaced by φ, such that the nonlinearity becomes bilinear (see later). Solitary wave solution The steady state solitary wave solution, φ(x − v_0 t), is given in the comoving frame by: φ(x) = ψ sech^4(√(b√ψ/30) x). The speed of the structure, v_0 = 1 + (8b/15)√ψ, is supersonic, v_0 > 1, since b has to be positive, 0 < b, which corresponds in the ion acoustic case to a depressed trapped electron distribution, β < 1. Proof by pseudo-potential method The proof of this solution uses the analogy to classical mechanics via φ_xx = −V′(φ), with V(φ) being the corresponding pseudo-potential. From this we get by an integration: (1/2)φ_x^2 + V(φ) = 0, which represents the pseudo-energy, and from the Schamel equation: −V(φ) = (φ^2/2)[(v_0 − 1) − (8b/15)√φ]. Through the obvious demand, namely that at the potential maximum, φ = ψ, the slope of φ vanishes, we get: v_0 = 1 + (8b/15)√ψ. This is a nonlinear dispersion relation (NDR) because it determines the phase velocity v_0. The canonical form of −V(φ) is obtained by replacing v_0 with the NDR. It becomes: −V(φ) = (4b/15)φ^2(√ψ − √φ). The use of this expression in x(φ) = ∫ dφ′/√(−2V(φ′)), which follows from the pseudo-energy law, yields by integration: x(φ) = √(30/(b√ψ)) arsech[(φ/ψ)^(1/4)]. This is the inverse function of φ(x) as given in the first equation. Note that the integral exists and can be expressed by known mathematical functions. Hence φ(x) is a mathematically disclosed function. However, the structure often remains mathematically undisclosed, i.e. it cannot be expressed by known functions (see for instance Sect. Logarithmic Schamel equation). This generally happens if more than one trapping scenario is involved, as e.g. in driven intermittent plasma turbulence. Non-integrability In contrast to the KdV equation, the Schamel equation is an example of a non-integrable evolution equation. It only has a finite number of (polynomial) constants of motion and does not pass a Painlevé test. Since a so-called Lax pair (L,P) does not exist, it is not integrable by the inverse scattering transform. Generalizations Schamel–Korteweg–de Vries equation Taking into account the next order in the expression for the expanded electron density, a term quadratic in φ is added, from which we obtain the extended pseudo-potential −V(φ) = (φ^2/2)[(v_0 − 1) − (8b/15)√φ − φ/3]. 
The corresponding evolution equation then becomes: φ_t + (1 + b√φ + φ)φ_x + φ_xxx = 0, which is the Schamel–Korteweg–de Vries equation. Its solitary wave solution is again a localized hump, governed by a parameter Q that weighs the two nonlinearities against each other. Depending on Q it has two limiting solitary wave solutions: when the square root (trapping) nonlinearity dominates we find φ(x) = ψ sech^4(√(b√ψ/30) x), the Schamel solitary wave; for b → 0 we get φ(x) = ψ sech^2(√(ψ/12) x), which represents the ordinary ion acoustic soliton. The latter is fluid-like and is achieved for β = 1, or equivalently b = 0, representing an isothermal electron equation of state. Note that the absence of a trapping effect (b = 0) does not imply the absence of trapping, a statement that is usually misrepresented in the literature, especially in textbooks. As long as the wave amplitude ψ is nonzero, there is always a nonzero trapping width in velocity space for the electron distribution function. Logarithmic Schamel equation Another generalization of the S-equation is obtained in the case of ion acoustic waves by admitting a second trapping channel. By considering an additional, non-perturbative trapping scenario, Schamel obtained a generalization called the logarithmic S-equation, in which the square root nonlinearity is supplemented by a logarithmic one. In the absence of the square root nonlinearity, b = 0, it is solved by a Gaussian-shaped hole solution with a supersonic phase velocity. The corresponding pseudo-potential and the inverse function x(φ) of the Gaussian can again be given in closed form. For a non-zero b, keeping the logarithmic nonlinearity, the integral for x(φ) can no longer be solved analytically, i.e. by known mathematical functions. A solitary wave structure still exists, but cannot be reached in a disclosed form. Schamel equation with random coefficients The fact that electrostatic trapping involves stochastic processes at resonance caused by chaotic particle trajectories has led to considering b in the S-equation as a stochastic quantity. This results in a Wick-type stochastic S-equation. Time-fractional Schamel equation A further generalization is obtained by replacing the first time derivative by a Riesz fractional derivative, yielding a time-fractional S-equation. It has applications e.g. for the broadband electrostatic noise observed by the Viking satellite. Schamel–Schrödinger equation A connection between the Schamel equation and the nonlinear Schrödinger equation can be made within the context of a Madelung fluid. It results in the Schamel–Schrödinger equation, which has applications in fiber optics and laser physics. References External links www.hans-schamel.de : further information by Hans Schamel Partial differential equations Plasma physics equations Ionosphere Space plasmas Waves in plasmas
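Because the solitary-wave formulas above had to be reconstructed from context, an independent numerical check is useful. The following minimal Python sketch (parameter values are arbitrary illustrations, not from the source) verifies that the sech^4 profile together with the NDR satisfies the twice-integrated traveling-wave form of the S-equation, φ_xx = (v_0 − 1)φ − (2b/3)φ^(3/2):

```python
import numpy as np

# Check: phi(x) = psi * sech^4(k*x), with k = sqrt(b*sqrt(psi)/30)
# and v0 = 1 + (8*b/15)*sqrt(psi), should satisfy
#     phi_xx = (v0 - 1)*phi - (2*b/3)*phi**1.5.
b, psi = 1.0, 0.1                      # illustrative trapping strength and amplitude
v0 = 1.0 + (8.0 * b / 15.0) * np.sqrt(psi)

x = np.linspace(-40.0, 40.0, 8001)
dx = x[1] - x[0]
k = np.sqrt(b * np.sqrt(psi) / 30.0)
phi = psi / np.cosh(k * x) ** 4        # solitary wave profile

phi_xx = np.gradient(np.gradient(phi, dx), dx)            # central differences
residual = phi_xx - ((v0 - 1.0) * phi - (2.0 * b / 3.0) * phi ** 1.5)
print("max |residual|:", np.abs(residual[10:-10]).max())  # ~0 up to discretization error
```

The residual vanishes to discretization accuracy for any positive b and ψ, consistent with the supersonic speed v_0 > 1 noted above.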
Schamel equation
[ "Physics" ]
1,263
[ "Waves in plasmas", "Space plasmas", "Physical phenomena", "Equations of physics", "Plasma phenomena", "Astrophysics", "Waves", "Plasma physics equations" ]
68,046,884
https://en.wikipedia.org/wiki/Minimally%20manipulated%20cells
Minimally manipulated cells are non-cultured (non-expanded) cells isolated from biological material by grinding, homogenization or selective collection of cells, and which undergo only minimal manipulation. Minimally manipulated cells are usually used for the treatment of skin ulceration, alopecia, and arthritis. They can also be used for the intraoperative creation of tissue-engineered grafts in situ. International regulation Minimally manipulated cells are allowed to be an object of manufacture and homologous transplantation in the USA and European countries. The criteria for "minimal manipulation" vary between countries. European regulations, according to the Reflection Paper on the classification of advanced therapy medicinal products of the European Medicines Agency, define "minimal manipulation" as a procedure that does not change the biological characteristics and functions of cells. In particular, enzymatic digestion of biomaterial, in which cell-to-cell contacts are dissociated, is prohibited. According to the US regulations (US 21 Code of Federal Regulations § 1271.3(f)(1), Section 361) on human cells and tissues and tissue-based products (section 361 HCT/Ps), "minimal manipulation" is processing that does not alter the original relevant characteristics of the structural tissue relating to the tissue's utility for reconstruction, repair, or replacement. Russian regulations provide no specific definition for "minimally manipulated" cells; however, one follows from the content of the Order of the Russian Ministry of Health No. 1158n "On amending the list of transplantation objects". According to the Order, cells obtained from biomaterial by grinding, homogenization, enzymatic treatment, removal of unwanted components or selective collection of cells can be considered "minimally manipulated". Minimally manipulated cells are allowed to be an object of transplantation when they do not contain any substances other than water, crystalloids, and sterilizing, storage, and (or) specific preserving agents. See also Advanced Therapy Medicinal Product References Biomedicine Regenerative biomedicine
Minimally manipulated cells
[ "Biology" ]
430
[ "Biomedicine" ]
68,047,828
https://en.wikipedia.org/wiki/Sodium%20oligomannate
Sodium oligomannate (development code GV-971) is a mixture of oligosaccharides isolated from the marine alga Ecklonia kurome that is used in China as a treatment for Alzheimer's disease (AD). It was conditionally approved in China by the National Medical Products Administration in 2019 for mild to moderate AD to improve cognitive function. However, the clinical data supporting its potential benefits have been received skeptically elsewhere and are considered insufficient for approval in other countries, where further Phase III clinical trials would be necessary for regulatory approval. In 2022, Green Valley Pharmaceuticals, the company conducting Phase III clinical trials for the purpose of obtaining FDA approval in the United States, ended the trials early and suspended further development of the drug. The mechanism by which sodium oligomannate may function is unclear and several possibilities have been proposed, including amyloid beta disaggregation, mediation of inflammatory responses to amyloid plaques, protein binding inside neurons, and alteration of intestinal bacteria. References Polysaccharides Treatment of Alzheimer's disease
Sodium oligomannate
[ "Chemistry" ]
228
[ "Carbohydrates", "Polysaccharides" ]
68,049,282
https://en.wikipedia.org/wiki/Hydroxetamine
Hydroxetamine (3'-hydroxy-2-oxo-PCE, O-desmethylmethoxetamine, HXE) is a recreational designer drug from the arylcyclohexylamine family, with dissociative effects. It is known as an active metabolite of the dissociative designer drug methoxetamine, but has also been sold in its own right since late 2019. See also 3-HO-PCP 4-Keto-PCP Desmetramadol Deoxymethoxetamine Fluorexetamine References Arylcyclohexylamines Designer drugs Dissociative drugs 3-Hydroxyphenyl compounds Ketones Secondary amines
Hydroxetamine
[ "Chemistry" ]
158
[ "Ketones", "Functional groups" ]
68,051,036
https://en.wikipedia.org/wiki/Deoxymethoxetamine
Deoxymethoxetamine (3'-methyl-2-oxo-PCE, DMXE, 3D-MXE) is a recreational designer drug from the arylcyclohexylamine family, with dissociative effects. It is an analogue of methoxetamine where the 3-methoxy group has been replaced by methyl. It has been sold online since around October 2020, and was first definitively identified by a forensic laboratory in Denmark in February 2021. See also 3-Methyl-PCP 3-Methyl-PCPy Hydroxetamine Fluorexetamine Methoxetamine Methoxieticyclidine MXiPr References Arylcyclohexylamines Designer drugs Dissociative drugs Secondary amines Ketones
Deoxymethoxetamine
[ "Chemistry" ]
167
[ "Ketones", "Functional groups" ]
68,057,532
https://en.wikipedia.org/wiki/Indium%28III%29%20iodide
Indium(III) iodide or indium triiodide is a chemical compound of indium and iodine with the formula InI3. Preparation Indium(III) iodide can be obtained by reacting indium with iodine vapor: 2 In + 3 I2 → 2 InI3. Indium(III) iodide can also be obtained by evaporation of a solution of indium in HI. Properties Indium(III) iodide is a pale yellow, very hygroscopic monoclinic solid (space group P21/c, no. 14; a = 9.837 Å, b = 6.102 Å, c = 12.195 Å, β = 107.69°), which melts at 210 °C to form a dark brown liquid and is highly soluble in water. Its crystals consist of dimeric molecules. The yellow β form slowly converts to the red α form. In the presence of water vapor, the compound reacts with oxygen at 245 °C to form indium(III) oxide iodide. Distinct yellow and red forms are known; the red form undergoes a transition to the yellow one at 57 °C. The structure of the red form has not been determined by X-ray crystallography; however, spectroscopic evidence indicates that indium may be six-coordinate. The yellow form consists of In2I6 molecules with four-coordinate indium centres. References Iodides Indium compounds Metal halides
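As a quick arithmetic check of the crystallographic data quoted above, the monoclinic unit-cell volume follows from V = a·b·c·sin β. The short script below is purely illustrative and not part of the source:

```python
import math

# Monoclinic unit-cell volume V = a*b*c*sin(beta),
# using the lattice parameters quoted in the text.
a, b, c = 9.837, 6.102, 12.195   # lattice constants in angstroms
beta = math.radians(107.69)      # monoclinic angle

volume = a * b * c * math.sin(beta)
print(f"unit-cell volume = {volume:.1f} cubic angstroms")  # about 697
```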
Indium(III) iodide
[ "Chemistry" ]
296
[ "Inorganic compounds", "Metal halides", "Salts" ]
78,220,408
https://en.wikipedia.org/wiki/Metabolic%20theory%20of%20cancer
The metabolic theory of cancer is the hypothesis that the primary cause of cancer is changes in cellular metabolism. The theory is strongly linked to the idea that diet can be used to prevent or treat many or most types of cancer. It is widely accepted that changes in cellular metabolism—specifically, an increased reliance on glucose for energy, and up-regulation of anabolic processes—do occur in many types of cancer cells. However, the idea that cancer can be controlled mostly or entirely by diet does not have broad acceptance in the medical field. Key principles of the metabolic theory of cancer The metabolic theory of cancer is built upon several well-known biochemical differences between normal and rapidly-proliferating cells, including: The Warburg effect: In the 1920s, Otto Warburg observed that cancer cells prefer to generate energy through anaerobic fermentation, even when oxygen is plentiful. This is different from normal cells, which use oxygen-dependent oxidative phosphorylation for more efficient energy production. As a result, cancer cells rely heavily on glucose (sugar) for energy, and produce significant amounts of lactate. Mitochondrial dysfunction: Proponents of the metabolic theory argue that mitochondrial dysfunction plays a crucial role in cancer. Mitochondria are the cell’s energy factories and play a role in regulating cell death (apoptosis). Alterations in mitochondrial morphology are evident in several types of cancer cells. Metabolic flexibility and glutamine dependence: Cancer cells often rely on glutamine as an alternative energy source and as a building block for growth, especially when glucose is limited. This metabolic flexibility allows cancer cells to adapt and thrive under varying conditions, making them more resistant to standard treatments like chemotherapy. Therapeutic implications Dietary approaches: Low-carbohydrate or ketogenic diets are suggested as potentially helpful because they limit glucose availability to cancer cells, thereby "starving" them. These diets force the body to produce ketones, which cancer cells cannot use. Targeting metabolism in treatment: Drugs that disrupt cancer cells' metabolic processes, particularly those targeting glucose or glutamine metabolism, are being explored as cancer treatments. Controversy and research The metabolic theory is still under investigation and remains controversial. While there is evidence supporting the role of altered metabolism in cancer, the genetic mutation model also has significant support. Many researchers are now examining cancer as a complex interplay of genetic mutations and metabolic dysregulation, rather than a purely genetic or purely metabolic disease. See also Cancer Tumor metabolome References Aging-associated diseases Cancer
Metabolic theory of cancer
[ "Biology" ]
515
[ "Senescence", "Aging-associated diseases" ]
78,220,652
https://en.wikipedia.org/wiki/RG7713
RG7713, or RG-7713, also known as RO5028442, is a small-molecule vasopressin V1A receptor antagonist which is or was under development for the treatment of pervasive developmental disorders or autism. It is administered by intravenous injection. The drug is centrally penetrant and is devoid of antagonism of the vasopressin V2 and oxytocin receptors. Clinical studies found subtle improvements, but also subtle deficits, in surrogate measures of social communication with RG7713 in adult men with high-functioning autism. A 2024 meta-analysis of vasopressin V1A receptor antagonists including RG7713 for autism found that they may not be effective in the treatment of the core symptoms of autism. As of December 2018, no recent development had been reported. It reached phase 1 clinical trials. Balovaptan (RG7314) has been described as a follow-up compound of RG7713; it reached later-stage clinical trials but was found to be ineffective. See also ABT-436 Brezivaptan LIT-001 Nelivaptan SSR-149415 SRX246 References Abandoned drugs Benzofurans Chloroarenes Dimethylamino compounds Indoles Ketones Piperidines Spiro compounds Vasopressin receptor antagonists
RG7713
[ "Chemistry" ]
291
[ "Ketones", "Functional groups", "Drug safety", "Organic compounds", "Abandoned drugs", "Spiro compounds" ]
78,221,725
https://en.wikipedia.org/wiki/Rongying%20Jin
Rongying Jin is a Chinese and American physicist and materials scientist, the SmartState Endowed Chair for Experimental Nanoscale Physics and John M. Palms Bicentennial Chair in the Department of Physics and Astronomy at the University of South Carolina. Her research concerns the condensed matter physics of nanoscale materials, including superconductors and other quantum materials. Education and career After working as a research assistant in the Institute of Physics of the Chinese Academy of Sciences from 1988 to 1991, and visiting the University of Sussex and University of Cambridge in the UK, Jin completed a Ph.D. at ETH Zurich in 1997. She was a postdoctoral researcher at the Pennsylvania State University from 1997 to 2000, and a staff research scientist at the Oak Ridge National Laboratory from 2000 to 2009. She joined Louisiana State University as an associate professor of physics in 2009, and was promoted to full professor in 2012, before moving to her present position at the University of South Carolina. Recognition Jin was named as a Fellow of the American Physical Society (APS) in 2010, "for her significant contributions to materials physics, including science-driven materials development and pioneering studies of their underlying physics". She was named as a Fellow of the American Association for the Advancement of Science in 2012, "for her significant contributions to materials physics, including science-driven materials development and pioneering studies of their underlying physics". References External links SmartState Center for Experimental Nanoscale Physics Year of birth missing (living people) Living people Chinese physicists Chinese women physicists American physicists American women physicists Condensed matter physicists American materials scientists Women materials scientists and engineers ETH Zurich alumni Oak Ridge National Laboratory people Louisiana State University faculty University of South Carolina faculty Fellows of the American Physical Society Fellows of the American Association for the Advancement of Science
Rongying Jin
[ "Physics", "Materials_science", "Technology" ]
362
[ "Condensed matter physicists", "Materials scientists and engineers", "Condensed matter physics", "Women materials scientists and engineers", "Women in science and technology" ]
78,225,813
https://en.wikipedia.org/wiki/Eventually%20stable%20polynomial
A non-constant polynomial with coefficients in a field is said to be eventually stable if the number of irreducible factors of the n-fold iteration of the polynomial is eventually constant as a function of n. The terminology is due to R. Jones and A. Levy, who generalized the seminal notion of stability first introduced by R. Odoni. Definition Let K be a field and f ∈ K[x] be a non-constant polynomial. The polynomial f is called stable or dynamically irreducible if, for every natural number n, the n-fold composition f^n = f ∘ ⋯ ∘ f is irreducible over K. A non-constant polynomial g is called f-stable if, for every natural number n, the composition g ∘ f^n is irreducible over K. The polynomial f is called eventually stable if there exists a natural number N such that f^N is a product of f-stable factors. Equivalently, f is eventually stable if there exist natural numbers N, k such that for every n ≥ N the polynomial f^n decomposes in K[x] as a product of k irreducible factors. Examples If f = x^2 + c ∈ K[x], with K of characteristic different from 2, is such that −f(0) and f^n(0) are all non-squares in K for every n ≥ 2, then f is stable. If K is a finite field, the two conditions are equivalent, i.e. the criterion is also necessary. Let f = x^d + c ∈ K[x], where K is a field of characteristic not dividing d. If there exists a discrete non-archimedean absolute value |·| on K such that 0 < |c| < 1, then f is eventually stable. In particular, if K = Q and c is not zero and not the reciprocal of an integer, then f is eventually stable. Generalization to rational functions and arbitrary basepoints Let K be a field and φ ∈ K(x) be a rational function of degree at least 2. Let a ∈ K. For every natural number n, write φ^n(x) = f_n(x)/g_n(x) for coprime f_n, g_n ∈ K[x]. We say that the pair (φ, a) is eventually stable if there exist natural numbers N, k such that for every n ≥ N the polynomial f_n(x) − a·g_n(x) decomposes in K[x] as a product of k irreducible factors. If, in particular, every f_n(x) − a·g_n(x) is irreducible, we say that the pair (φ, a) is stable. R. Jones and A. Levy proposed the following conjecture in 2017. Conjecture: Let K be a field and φ ∈ K(x) be a rational function of degree at least 2. Let a ∈ K be a point that is not periodic for φ. If K is a number field, then the pair (φ, a) is eventually stable. If K is a function field and φ is not isotrivial, then (φ, a) is eventually stable. Several cases of the above conjecture have been proved by Jones and Levy, Hamblen et al., and DeMark et al. References Arithmetic dynamics Polynomials
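The definition can be explored computationally by factoring successive iterates. The SymPy sketch below is illustrative only; it uses f = x^2 + 1, a standard example of a stable polynomial over Q, so the count of irreducible factors should stay at 1 for every iterate computed:

```python
from sympy import factor_list, degree
from sympy.abc import x

# Count the irreducible factors of the n-fold iterates of f over Q.
# Degrees grow like deg(f)**n, so only a few iterates are practical.
f = x**2 + 1

g = f
for n in range(1, 5):
    factors = factor_list(g)[1]          # [(irreducible factor, multiplicity), ...]
    count = sum(mult for _, mult in factors)
    print(f"n = {n}: degree {degree(g, x)}, {count} irreducible factor(s)")
    g = f.subs(x, g)                     # next iterate: f^(n+1) = f(f^n)
```

For an eventually stable but not stable polynomial, the printed count would first grow and then freeze at some k for all n ≥ N, matching the equivalent definition given above.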
Eventually stable polynomial
[ "Mathematics" ]
454
[ "Recreational mathematics", "Polynomials", "Arithmetic dynamics", "Number theory", "Algebra", "Dynamical systems" ]
69,432,773
https://en.wikipedia.org/wiki/Photoacoustic%20flow%20cytometry
Photoacoustic flow cytometry or PAFC is a biomedical imaging modality that utilizes photoacoustic imaging to perform flow cytometry. A flow of cells passes a photoacoustic system, producing individual signal responses. Each signal is counted to produce a quantitative evaluation of the input sample. Description Traditional flow cytometry uses cells in a laminar, single-file stream which then passes through a light source. Using various quantifications of the light scattered from the cells, the system can characterize cellular size and complexity, which can ultimately be returned as a quantification of the cell composition within a sample. Photoacoustic flow cytometry operates on similar principles, but utilizes a photoacoustic signal to differentiate cellular patterns. Flow cytometry provides excellent ex-vivo analysis, but owing to its purely optical source its penetration depth is limited, making in-vivo analysis difficult. Photoacoustics may provide an advantage over conventional flow cytometry here, as it receives an acoustic signal rather than an optical one and can therefore probe greater depths, as discussed further under operating principles and mathematics. The photoacoustic (PA) effect, discovered by Alexander Graham Bell in 1880, occurs when a photon source is absorbed by an optically receptive substance, producing an ultrasonic wave. The strength of the ultrasonic wave produced is a function of the intensity of the photons absorbed and of the innate properties of the substance illuminated. Each substance of interest absorbs photons at specific wavelengths; as a result, only certain substances will innately produce a PA signal at a given wavelength. For example, hemoglobin and melanin are two common biological substances that produce strong PA signals in response to laser pulses around the 680 nm wavelength range. The absorption spectra relevant to the PA effect lie within the visible electromagnetic spectrum, making PA imaging non-ionizing in nature. The specific absorption spectrum can be both a limitation and an exploitation of PA imaging (see more in applications). Systems commonly use an Nd:YAG (neodymium-doped yttrium aluminum garnet) or LED laser system that is pulsed to penetrate the biological tissue of interest. With each pulse that comes in contact with tissue, a PA signal in the form of an ultrasound wave is produced. This ultrasound wave propagates through the tissue until it reaches an ultrasound transducer, producing an a-line. The maximum amplitude of each a-line is extracted and its value is plotted on a time-versus-amplitude graph, producing a cytometry graphic. Operating principles and mathematics Heat production Photoacoustic flow cytometry operates on the principle of the photoacoustic effect, whereby a laser in the visible spectrum produces a temperature rise and thus a thermal expansion. The heating produced by a pulsed laser can be written as H = μ_a I e^(iωt), where μ_a is the absorption coefficient of the illuminated substance, I is the intensity of the laser, ω is the frequency of the laser pulse, and t is time; e^(iωt) is the exponential expression of a sinusoidal function determined by Euler's formula. It is important to note that the penetration depth of the laser is limited by the diffusive regime, which depends on the attenuation through the tissue in front of the biological target to be irradiated. Photoacoustic wave relationship The following establishes the heat-pressure relationship for a photoacoustic signal. 
The photoacoustic wave equation reads (∇^2 − (1/v_s^2) ∂^2/∂t^2) p(r, t) = −(β/C_p) ∂H/∂t, where ∇^2 is the Laplacian acting on the spatial coordinates, v_s is the speed of sound in the substance of interest, t is time, p(r, t) is the pressure as a function of both time and space, β is the thermal expansion coefficient, C_p is the specific heat capacity, and ∂H/∂t is the partial time derivative of the heat function described above. The left side of the equation is the standard wave equation used for modeling an ultrasonic pressure wave; the right side expresses the relationship of heat production to thermal expansion, which results in a pressure wave. Pressure wave solution While in reality a three-dimensional wave propagates through the tissue, for the purposes of PAFC the information needed pertains only to a one-dimensional analysis. For a short laser pulse, the one-dimensional solution has the initial (peak) pressure p_0 = (β v_s^2/C_p) μ_a F, where μ_a is the absorption coefficient, β is the thermal expansion coefficient, C_p is the specific heat, F is the fluence of the laser (the total energy E of the laser pulse delivered per unit area), and v_s is the speed of sound in the given material; the prefactor β v_s^2/C_p is known as the Grüneisen parameter. It is important to note that for long durations of laser exposure, the resulting wave becomes largely a function of the laser intensity alone. For the purposes of analyzing the PA signal, the laser pulse must therefore be short in time, so that the signal's value depends on the properties of the irradiated substance and the targets of interest can be differentiated. The differences in the pressure waves produced are the basis for signal separation in PAFC. Signal detection The pressure wave created is in the form of an ultrasound wave. The wave propagates through the material and is detected by an ultrasound transducer. The pressure is sensed via piezoelectric crystals, which convert the pressure into a voltage change; i.e., the amplitude of the signal is proportional to the value of the pressure at any given time. This voltage is plotted as a function of time and results in the formation of an a-line, as previously described. The temporal data is important for other types of photoacoustic imaging, but for the purposes of PAFC the maximum amplitude within an a-line is extracted as the data point. For each laser pulse this maximum amplitude value is plotted versus time, producing a flow cytometry signal tracing. Each line represents a laser pulse and its amplitude reflects the target irradiated. By selecting an amplitude range that is representative of a particular cell type, the signals can be counted and the cell types within a given sample thus quantified; a toy sketch of this per-pulse counting chain is given at the end of this article. Figure 1 shows an animation of cells flowing and its representative PAFC signal tracing. Applications Bacteria Over two million bacterial infections occur annually in the United States. With antibiotic resistance increasing, treatment of these infections is becoming increasingly difficult, making correct antibiotic selection ever more important. Optimal antibiotic selection hinges on the ability to determine the offending bacteria. Traditionally, bacterial speciation is determined by culturing and PCR technologies. These technologies take at least 48 hours and sometimes more. Due to the prolonged timeframe for speciation, providers must select broad-spectrum antibiotics. PAFC can be used to detect bacteria in the blood for more timely antibiotic selection. The first step in detection with PAFC is marking the bacteria so that they have a PA signal to detect. Typically, this involves a dye and a method to attach the dye to the bacteria of interest. 
Although antibodies have been used in the past, bacteriophages have proven cheaper and more stable to produce. Multiple studies have shown the specificity of bacteriophage selection for a bacterium of interest, particularly MRSA, E. coli, and Salmonella. Dyes vary, but the most commonly utilized are gold nanoparticles, indocyanine green (ICG), and red dye 81. The dyes produce an enhanced signal to enable more sensitive detection; the detection limit found in one study was approximately 1 bacterial cell per 0.6 μm³. Specific dyes have been tested on animals for toxicity and have not resulted in any clear damage. Although human studies for the detection of bacteria in the blood have yet to be attempted, PAFC may play a role in future applications of bacterial detection. Malaria Malaria causes the deaths of 0.4 million people yearly worldwide. With current medications, early detection is key to preventing these deaths. Current methods include microscopic detection on blood film, serology, or PCR. Lab technicians may lack the experience, or the technologies may be too expensive for certain facilities, and diagnoses are inevitably missed. Furthermore, current methods generally cannot detect malaria at densities below about 50 parasites per microliter, and 3–4 days post-infection are needed before detection can occur. Thus, there is a need for a more automated and sensitive detection method to improve patient outcomes. PAFC has demonstrated detection limits much lower than those of current methods. One study demonstrated a sensitivity of one parasite in 0.16 mL of circulating blood, allowing detection on day 1–2 post-inoculation. Furthermore, studies have demonstrated the feasibility of in vivo detection, removing the possibility of a missed diagnosis due to cells damaged during blood extraction and in-vitro analysis. PAFC detects malaria via the surrogate marker hemozoin, a breakdown product produced by malaria in the merozoite stage. Hemozoin is an excellent photoacoustic target and responds strongly at wavelengths in the 671 nm and 820 nm range. Although background signals are produced by hemoglobin within RBCs, infected RBCs (iRBCs) containing hemozoin produce a strong signal above hemoglobin at these wavelengths. In vitro methods utilize 50 micrometer capillary tubes with a flow of 1 cm/s for detection. Conversely, Menyaev et al. demonstrated the detection of malaria in vivo. Detection was performed on superficial and deep vessels of mice. The superficial vessels provide a higher signal-to-noise ratio (SNR), but are less comparable to human vessels. Mouse jugular veins and carotid arteries are similar in size to small human vessels; these showed greater artifacts due to blood pulsation and respiratory variation, but the artifacts could be accounted for. Although PAFC provides a more sensitive detection limit, the method does come with some limitations. First, as mentioned previously, the detection of hemozoin only occurs when the parasites are in the merozoite stage. This limits the detection time frame compared with detection in the trophozoite stage, but still provides earlier detection than current methods. Second, the vessel sizes tested thus far have only been in mice. Artifacts from deeper vessel analysis in humans may decrease the sensitivity of PAFC, making the detection limit less useful than currently suggested. Although challenges still exist, PAFC may play a role in improving the diagnosis of malaria in humans. 
Circulating Tumor Cells (CTCs) Circulating tumor cells or CTCs are tumor cells that have broken off from their primary tumor and travel in the blood. These CTCs then seed distant sites, resulting in metastases. Metastases cause 90% of cancer-related deaths, and as such, detection of CTCs is critical to the prevention of metastasis. Studies have shown that earlier detection of CTCs improves treatment and thus lengthens survival times. Current detection methods include RT-PCR, flow cytometry, optical sensing, and cell size filtration, among others. These methods are limited by the sampling size of extracted blood (~5–10 mL), which results in a CTC detection limit of ~10 CTC/mL. These methods also take hours to days to return results, which can delay the initiation of treatment. PAFC may play a role in the future detection of CTCs. To avoid the limitation of small-volume sampling through the extraction of blood from the patient, PAFC utilizes an in vivo method to monitor a larger volume of blood (i.e. the entire volume). In one study monitoring a mouse aorta, the entire mouse blood volume could be visualized within 1 minute of detection. CTCs such as melanoma cells contain an intrinsic chromophore and do not require labeling for detection above the hemoglobin background. Other tumor cells (such as cancerous squamous cells) can be tagged with nanoparticles to produce a larger PA signal than RBCs for their detection. These methods resulted in an improved detection limit for CTCs. De la Zerda et al. detected CTCs only 4 days after inoculation with the cancer cells. Their detection limit was determined to be 1 CTC/mL, a 10-fold improvement in sensitivity. Furthermore, the nanoparticle labeling was found to be non-toxic and took only 10 minutes to optimally tag the CTCs. This CTC detection can be used for metastatic screening, but also has therapeutic implications. It has been determined that tumor resection or manipulation releases CTCs; PAFC can be used to monitor for the release of these CTCs, which may then require systemic treatment. Due to the non-linear thermoelastic effect of the laser on CTCs and nanoparticles, a higher laser fluence can cause a CTC to rupture without damaging the local RBCs. By reducing CTCs in this way, PAFC could improve treatment with systemic methods or remove the need for them altogether. Although there is large potential for application, there are still areas for improvement. First, PAFC is depth-limited and has only been tested in superficial human skin, which may pose a difficulty for more centrally located tumors such as those of the lung or bowel. Second, although initial mouse models have shown efficacy with nanoparticle labeling, cancer-type-specific labeling and dye side effects need to be studied more deeply to ensure the safety of this imaging modality. References Biomedical engineering
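As a concrete illustration of the per-pulse processing described in the Signal detection section above, the toy Python sketch below simulates one a-line per laser pulse, extracts each a-line's maximum amplitude, and counts the pulses whose maxima rise above the blood background. All signal shapes, noise levels and the threshold rule are assumptions made for illustration, not values taken from the PAFC literature:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pulses, n_samples = 2000, 256
t = np.arange(n_samples)
arrival = np.exp(-((t - 80) / 6.0) ** 2)       # toy ultrasound arrival shape

# Pulses during which a strongly absorbing target cell sits in the beam
cell_pulses = set(rng.choice(n_pulses, size=15, replace=False).tolist())

maxima = np.empty(n_pulses)
for i in range(n_pulses):
    a_line = rng.normal(0.0, 0.05, n_samples)  # transducer noise
    a_line += 0.3 * arrival                    # blood (hemoglobin) background
    if i in cell_pulses:
        a_line += 1.0 * arrival                # extra PA signal from the target cell
    maxima[i] = a_line.max()                   # one data point per laser pulse

# Assumed robust threshold, set well above the spread of background maxima
med = np.median(maxima)
mad = np.median(np.abs(maxima - med))
threshold = med + 10.0 * 1.4826 * mad

detected = np.count_nonzero(maxima > threshold)
print(f"detected {detected} candidate cells out of {len(cell_pulses)} seeded")
```

In a real system the threshold would be calibrated against measured background statistics, and amplitude windows rather than a single cutoff can be used to separate several cell types, as described above.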
Photoacoustic flow cytometry
[ "Engineering", "Biology" ]
2,660
[ "Biological engineering", "Medical technology", "Biomedical engineering" ]
69,440,192
https://en.wikipedia.org/wiki/Weather%20of%202018
The following is a list of weather events that occurred in 2018. Summary by weather type Winter storms and cold waves A cold wave from late December 2017 persisted into early January 2018. Between both years, 39 people died. Several records were broken as a result of the cold, and it fueled the January 2018 North American blizzard. The blizzard resulted in 22 deaths and $1.1 billion in damage, and was dubbed a historic bomb cyclone. Following a tranquil February, winter weather resumed in March. The March 1–3, 2018 nor'easter was the most destructive of the month's storms: over 1.9 million people lost power, with 9 dead and $2.25 billion in damage. Another nor'easter a few days later caused 2 deaths and $525 million in damage. A final nor'easter rode up the East Coast two weeks later, causing a tornado outbreak, which included an EF3 tornado that heavily damaged Jacksonville State University in Alabama, and bringing near-record spring snowfall to the Northeastern United States. Then, in April, a cold wave gave Iowa and Wisconsin their coldest April on record. In mid-November, a winter storm across the United States caused 11 deaths, one of the worst traffic jams in New York City, and 555 car crashes in New Jersey. A few weeks later, another blizzard killed 4 more people, and after that, another winter storm caused 3 more deaths in North Carolina. Floods In late February 2018, the Ohio River had its highest crest since 1997; six people died in the flooding. Droughts, heat waves, and wildfires Tornadoes 2018 was relatively quiet in terms of tornadoes, and for the first time in recorded history, no EF4 or EF5 tornadoes touched down in the United States. However, the state of Connecticut saw a record number of tornadoes. The first major tornado outbreak of 2018 came on February 24, causing two deaths and 20 injuries from 30 tornadoes. The first of these was the first tornado-related death in the United States in 284 days, ending a record-long streak. A few days later, a tornado outbreak struck the United States from March 20 to 22; an EF3 tornado hit Jacksonville State University, causing $42 million in damage and forcing 9,000 people to go without power. A month later, another tornado outbreak affected the United States in mid-April. An EF1 tornado in Louisiana in April caused one death, and an EF2 tornado in North Carolina caused an indirect fatality. A month after that, another tornado outbreak produced an EF0 tornado in Newburgh, New York that resulted in a death; the storm itself caused 5 more deaths as a result of straight-line winds. Before that, on May 14, the storm also produced tornadoes across Kansas. On June 12, an F4 tornado touched down in Brazil, killing two. On July 10 another fatal EF2 touched down, this time in Minot, North Dakota; one newborn baby was killed and 28 others were injured. Nine days later, destructive tornadoes tore across Iowa, causing $320 million in damage and 37 injuries; the same storm system contributed to the Table Rock Lake duck boat accident, which killed 17 and injured 7 in Missouri. On August 3, an EF4 touched down in Manitoba, becoming North America's only violent tornado of the year and killing one person. The remnants of Hurricane Florence spawned a fatal tornado in Virginia; the associated tornado outbreak spawned 37 other tornadoes. Just a few days later, the 2018 United States–Canada tornado outbreak caused damage in the Midwestern United States and especially in Ontario and Quebec. 
Before the outbreak crossed into Canada, Minnesota had its third most prolific tornado day on record. 300,000 customers in the Ottawa area lost power, and the tornadoes caused $295 million in damage and injured 31 people. In late October and early November, another tornado outbreak occurred, spawning 61 tornadoes; there was one indirect death due to an EF1 in Mississippi, and two direct deaths due to an EF1 tornado in Maryland. Just a few days later, another fatal tornado touched down in Tennessee. A tornado outbreak then started at the end of November and continued into December, spawning 49 tornadoes, including an EF3 in Illinois that injured 22; another tornado death occurred due to an EF1 in Missouri. Two weeks later, the 2018 Port Orchard tornado touched down in Port Orchard, Washington, causing $1.81 million in damage. Finally, on December 31, a child died in a tornado in Indonesia. Tropical cyclones As the year began, a tropical depression was moving across the Philippines, and Cyclone Ava was developing northeast of Madagascar. Ava caused at least 51 deaths and US$195 million in damage, and was followed by 13 additional tropical cyclones in the south-west Indian Ocean. In the Australian region, there were 27 tropical cyclones, including Cyclone Marcus, a powerful cyclone that caused US$75 million in damage in Western Australia. In the South Pacific Ocean, there were 15 tropical cyclones during the year, including Cyclone Gita, the most intense tropical cyclone to impact Tonga since reliable records began. In the northern hemisphere, the western Pacific Ocean was active, with 44 tropical cyclones. The strongest typhoons were Kong-rey and Yutu, which both had 10-minute sustained winds of 215 km/h (130 mph) and a minimum pressure of 900 hPa. In October, Yutu struck Tinian in the Northern Mariana Islands at peak intensity, making it the strongest storm on record to hit the island chain. When Typhoon Jebi struck Japan in September, insured damage totaled around US$15 billion, making it the country's costliest typhoon ever. In July, Tropical Storm Son-Tinh killed more than 200 people when it moved through the Philippines, China, Vietnam, and Laos, with most of the deaths related to a dam collapse in Laos. In December, Tropical Depression Usman moved through the Philippines, killing 156 people and leaving ₱5.41 billion (US$103 million) in damage. In the north Indian Ocean, there were 14 tropical cyclones, several of which affected land. In May, Cyclone Sagar killed 79 people when it struck Somaliland in the Horn of Africa. Cyclone Mekunu caused US$1.5 billion in damage and 31 deaths when it struck Oman. Cyclone Titli killed 85 people when it struck southeastern India in October. The north-east Pacific Ocean was active, with three Category 5 hurricanes on the Saffir–Simpson scale – Lane, Walaka, and Willa. Lane in August was the wettest on record in Hawaii, with peak rainfall accumulations of 58 inches (1,473 mm), causing US$250 million in damage. In October, Walaka affected the Northwestern Hawaiian Islands, and Willa struck southwestern Mexico, causing nine deaths and US$825 million in damage. The Atlantic Ocean featured 16 tropical cyclones, including Hurricane Michael in October, one of only four Category 5 hurricanes to hit the United States at that intensity. Michael struck the Florida Panhandle and caused US$25.5 billion in damage as well as 74 deaths. 
In September, Hurricane Florence caused widespread flooding after setting state precipitation records in North and South Carolina, resulting in US$24 billion in damage and 52 fatalities. In addition to the officially tracked storms, a Mediterranean tropical-like cyclone, Cyclone Zorbas, struck Greece. Timeline This is a timeline of weather events during 2018. Note that entries might cross between months; however, all entries are listed by the month in which they started, except for the December 2017–January 2018 North American cold wave, which was ongoing when 2018 began. January December 23, 2017 – January 19, 2018 – A cold wave caused damaging low temperatures across eastern North America. The cold wave also caused Tallahassee, Florida to receive trace amounts of frozen precipitation for the first time in more than 30 years. December 29, 2017 – January 4, 2018 – Tropical Storm Bolaven formed east of Palau. The storm struck the southern portion of the Philippines, where it killed 4 people in total and left ₱554.7 million of damage, before dissipating off the South Central Coast of Vietnam. January 2–6 – A blizzard associated with a powerful bomb cyclone killed 22 people, caused at least 300,000 power outages, and caused $1.1 billion (2018 USD) in damage across Cuba, The Bahamas, Bermuda, the Southeastern United States, the Northeastern United States, New England, and Atlantic Canada. The storm received various unofficial names, such as Winter Storm Grayson, Blizzard of 2018 and Storm Brody, and was also dubbed a "historic bomb cyclone". January 8 – A severe cold wave hit Bangladesh, which registered the lowest temperature ever recorded in independent Bangladesh's history: about 2.6 °C at 8:38 am local time in Tetulia Upazila, Panchagarh District, Rangpur Division. January 9 – A series of mudflows in Southern California killed 23 people, injured 163 others, and caused $207 million (2018 USD) in damage. January 11–24 – Cyclone Berguitta killed two people, with one missing, and caused over US$107 million in damage across Mauritius and Réunion. January 14–16 – Tropical Depression 04 killed 11 people and caused $5.1 million (2018 USD) in damage across Madagascar and Mozambique. January 28–30 – Cyclone Fehi caused extensive damage as an extratropical cyclone in western New Zealand; insured losses amounted to NZ$38.5 million (US$28.5 million). February February 3–22 – Cyclone Gita killed 3 people (one presumed) and caused at least $252.8 million (2018 USD) in damage across Vanuatu, Fiji, Wallis and Futuna, Samoa, American Samoa, the Cook Islands, Niue, Tonga, New Caledonia, Queensland, and New Zealand. Cyclone Gita was the most intense tropical cyclone to impact Tonga since reliable records began. February 8–16 – Tropical Storm Sanba, known in the Philippines as Tropical Storm Basyang, killed 15 people and caused $3.23 million (2018 USD) in damage across the Caroline Islands and the Philippines. February 11–21 – Cyclone Kelvin caused $25 million (2018 USD) in damage across the Northern Territory, Western Australia and South Australia. February 24 – A tornado outbreak caused 2 deaths and 20 injuries from 30 tornadoes. March March 1–5 – A nor'easter bomb cyclone and winter storm, unofficially named Winter Storm Riley by The Weather Channel, killed nine people, caused over 1.9 million power outages, and caused $2.25 billion (2018 USD) in damage across the Northeastern United States and Canada. 
March 2–9 – A nor'easter and blizzard, unofficially named Winter Storm Quinn by The Weather Channel, killed two people, caused over 1 million power outages, and caused $525 million (2018 USD) in damage across the Northeastern United States and Canada. March 2–10 – Cyclone Dumazile killed two people and caused damage across Madagascar and Réunion. March 3–13 – Cyclone Hola killed three people and caused damage across Vanuatu, New Caledonia, and New Zealand. March 14–22 – Tropical Storm Eliakim killed 21 people and caused $3.21 million (2018 USD) in damage across Madagascar, Réunion, Mayotte, Tromelin Island, Mauritius, and Kenya. March 14–27 – Cyclone Marcus caused $75 million (2018 USD) in damage across Western Australia and Australia's Northern Territory, and was considered the worst cyclone to hit Darwin since 1974. March 18–24 – A nor'easter winter storm and tornado outbreak, dubbed by the media as the Four'easter and unofficially named Winter Storm Toby and Nor'easter 4 by The Weather Channel, killed four people, caused over 100,000 power outages, and caused $900 million (2018 USD) in damage across the United States. The storm spawned 20 tornadoes, one of them an EF3 that impacted Jacksonville State University; the tornadoes injured 7 people. March 29 – April 2 – Cyclone Josie killed six people and caused $10 million (2018 USD) in damage across Vanuatu, Fiji, and Tonga. April April 13–15 – A tornado outbreak combined with a blizzard (Winter Storm Xanto) killed four people (one tornadic and three winter storm-related, plus one indirect death), injured 29 others, and caused $925 million (2018 USD) in damage across the United States and Eastern Canada, with a total of 73 confirmed tornadoes. April 22–26 – Cyclone Fakir killed two people and caused over $17.7 million (2018 USD) in damage across Mauritius and Réunion. May May 14–15 – A tornado outbreak in the Great Plains and Northeastern United States killed six people (1 tornadic and 5 from straight-line winds) from 24 tornadoes. May 16–20 – Cyclone Sagar killed 79 people and caused $30 million (2018 USD) in damage across Yemen, Somalia, Somaliland, Djibouti, and Ethiopia. Cyclone Sagar was the strongest tropical cyclone to make landfall in Somalia and Somaliland in recorded history until 2020. May 21–27 – Cyclone Mekunu killed 31 people and caused $1.5 billion (2018 USD) in damage across Yemen, Oman, and Saudi Arabia. Cyclone Mekunu was the strongest storm to strike Oman's Dhofar Governorate since 1959. May 25 – June 1 – Tropical Storm Alberto killed 18 people and caused $125 million (2018 USD) in damage across the Yucatán Peninsula, Cuba, the Eastern United States, and Canada. May 27 – A flood in Maryland killed one person, prompting Governor Larry Hogan to declare a state of emergency. June June 2–11 – Tropical Storm Ewiniar killed 14 people and caused $749 million (2018 USD) in damage across the Philippines, Vietnam, South China, Taiwan, and the Ryukyu Islands. June 3–13 – Tropical Storm Maliksi killed two people and caused damage across the Philippines and Japan. June 9–16 – Hurricane Bud killed two people and caused $167,000 (2018 USD) in damage across the Baja California Peninsula, Northwestern Mexico, the Southwestern United States, and Wyoming. June 12 – A violent F4 tornado killed two people and caused damage across Rio Grande do Sul, Brazil. June 14–19 – Tropical Storm Carlotta killed three people and caused $7.6 million (2018 USD) in damage across Central and Southern Mexico. June 28 – An EF0 anticyclonic tornado touched down in Montana. 
June 28 – July 5 – Typhoon Prapiroon killed four people and caused $10.1 million (2018 USD) in damage across Japan and the Korean Peninsula. July July 3–12 – Typhoon Maria, known in the Philippines as Typhoon Gardo, killed two people and caused $628 million (2018 USD) in damage across the Mariana Islands, the Ryukyu Islands, Taiwan, and China. July 4–17 – Hurricane Beryl caused over $1 million (2018 USD) in damage and 47,000 power outages across the Caribbean, the United States Virgin Islands, Puerto Rico, Hispaniola, the Lucayan Archipelago, Bermuda, and Atlantic Canada. July 6–17 – Hurricane Chris killed one person and caused damage across the East Coast of the United States, Bermuda, Atlantic Canada, and Iceland. July 10 – An EF2 tornado in North Dakota killed one person and injured 28 others. July 15–24 – Tropical Storm Son-Tinh, known in the Philippines as Tropical Storm Henry, killed 173 people, left over 1,100 missing, and caused $323 million (2018 USD) in damage across the Philippines, South China, Vietnam, Laos, Thailand, and Myanmar. July 17–26 – Tropical Storm Ampil, known in the Philippines as Severe Tropical Storm Inday, killed one person and caused $241 million (2018 USD) in damage across the Ryukyu Islands and East and Northeast China. July 20–23 – Tropical Depression Josie killed 16 people and caused $87.4 million (2018 USD) in damage across the Philippines and Taiwan. July 19–20 – A tornado outbreak, mainly in Iowa, caused 37 injuries and $320 million (2018 USD) in damage from 31 tornadoes. The associated storm system also sank a duck boat on Table Rock Lake, killing 17 people and injuring seven others. July 23 – August 4 – Typhoon Jongdari caused $1.46 billion (2018 USD) in damage across Japan and East China after becoming the fourth tropical cyclone since 1951 to approach Honshu on a westward trajectory. July 31 – August 16 – Hurricane Hector caused damage across Hawaii and Johnston Atoll. August August 2–10 – Typhoon Shanshan caused $866,000 (2018 USD) in damage across the Mariana Islands and Japan. August 3 – A violent EF4 tornado in Manitoba, Canada, killed one person and injured two others. The tornado, the first (and only) violent tornado in North America during 2018, destroyed parts of Alonsa, Manitoba, and caused $2 million in damage. August 4–7 – Tropical Storm Ileana killed eight people and caused damage across Western Mexico. August 6–16 – Tropical Storm Yagi killed seven people and caused $365 million (2018 USD) in damage across China, Taiwan, the Philippines, and Korea. August 31 – September 18 – Hurricane Florence killed 54 people (24 direct and 30 indirect) and caused $24.23 billion (2018 USD) in damage across West Africa, Cape Verde, Bermuda, the East Coast of the United States (especially the Carolinas), and Atlantic Canada. September September 3–8 – Tropical Storm Gordon killed four people (three direct and one indirect) and caused over $200 million (2018 USD) in damage across Hispaniola, Cuba, The Bahamas, South Florida, the Florida Keys, the Gulf Coast of the United States, Arkansas, Missouri, the United States East Coast, and Southern Ontario. September 17 – An EF2 tornado in Virginia associated with Hurricane Florence killed one person, injured 16 others, and caused significant damage in the Midlothian, Virginia, area. September 20–21 – A tornado outbreak in the United States and Canada killed one person (non-tornadic) and injured 31 others from 37 tornadoes across Ohio, Iowa, Minnesota, Wisconsin, eastern Ontario, and southern Quebec.
The storm caused over 300,000 power outages in Ontario, Canada. October October 7–16 – Hurricane Michael killed 74 people (31 direct and 43 indirect) and caused $25.5 billion (2018 USD) in damage across Central America, the Yucatán Peninsula, the Cayman Islands, Cuba, the Southeastern United States (especially the Florida Panhandle and Georgia), the Eastern United States, Eastern Canada, and the Iberian Peninsula. October 31 – November 2 – A tornado outbreak caused two direct deaths and one indirect death from 61 tornadoes. November November 6 – An EF2 tornado in Tennessee killed one person and injured two others. November 8–25 – The Camp Fire, across northern California, killed 85 people with one missing, injured 17 others, and caused $16.65 billion (2018 USD) in damage, becoming the costliest wildfire on record. November 23 – Monthly low temperature records were set in Syracuse and Binghamton, New York, and Bridgeport, Connecticut, set a record for its coldest November day; several other cities set daily record lows. November 30 – A high-end F1 tornado in Brazil killed two people. November 30 – December 2 – A tornado outbreak across the United States killed one person and injured 32 others from 49 tornadoes. This outbreak was the largest December tornado event on record in Illinois history. December December 18 – A rare EF2 tornado hit Port Orchard, Washington, causing $1.81 million (2018 USD) in damage; it was the strongest tornado in Washington since 1986. December 31 – A tornado in Indonesia killed one person. See also Weather of 2020 References Weather by year 2018 meteorology 2018-related lists
Weather of 2018
[ "Physics" ]
4,095
[ "Weather", "Physical phenomena", "Weather by year" ]
69,441,969
https://en.wikipedia.org/wiki/Xenon%20gas%20MRI
Hyperpolarized 129Xe gas magnetic resonance imaging (MRI) is a medical imaging technique used to visualize the anatomy and physiology of body regions that are difficult to image with standard proton MRI. In particular, the lung, which lacks a substantial density of protons, is well suited to visualization with 129Xe gas MRI. This technique has promise as an early-detection technology for chronic lung diseases and as an imaging technique for processes and structures reliant on dissolved gases. 129Xe is a stable, naturally occurring isotope of xenon with 26.44% isotope abundance. It is one of two Xe isotopes, along with 131Xe, that has non-zero spin, which allows for magnetic resonance. 129Xe is used for MRI because its large electron cloud permits hyperpolarization and a wide range of chemical shifts. The hyperpolarization creates a large signal intensity, and the wide range of chemical shifts allows for identifying when the 129Xe associates with molecules like hemoglobin. 129Xe is preferred over 131Xe for MRI because 129Xe has spin 1/2 (compared to 3/2 for 131Xe), a longer T1, and a 3.4 times larger gyromagnetic ratio (11.78 MHz/T). Uses Medical uses Xenon Xe 129 hyperpolarized, sold under the brand name Xenoview, is a hyperpolarized contrast agent indicated for use with magnetic resonance imaging (MRI) for evaluation of lung ventilation, and approved for people aged twelve years and older. It was approved for medical use in the US in December 2022. The most common side effects include mouth and throat pain, headache, and dizziness. The US Food and Drug Administration (FDA) considers it to be a first-in-class medication. The FDA approved Xenoview based on evidence from two clinical trials in 83 adult participants with various lung disorders who were being evaluated for possible lung resection or lung transplantation. The trials were conducted at five sites in the United States and assessed both the efficacy and safety of Xenoview; in each trial, participants underwent sequential lung ventilation imaging with Xenoview MRI and an approved comparator, Xe 133 scintigraphy. In study 1, participants were imaged to help plan possible lung resection; to determine the benefit of Xenoview, estimates of the percentage of lung ventilation predicted to remain after surgery made with Xenoview MRI and with comparator imaging were evaluated for equivalence. In study 2, participants were imaged to help plan possible lung transplantation; to determine the benefit of Xenoview, estimates of the percentage of lung ventilation contributed by the right lung made with Xenoview MRI and with comparator imaging were evaluated for equivalence. History Hyperpolarized 129Xe is achieved through spin-exchange optical pumping, a technique developed by Grover et al. in 1978 and improved by Happer et al. in 1984. Quantification of 129Xe polarization was first described in 1982 by Bhaskar et al. The use of hyperpolarized 129Xe gas in MRI ex vivo was first described by Albert et al. in 1994 using excised rat lungs. The first in vivo human studies with 129Xe MRI were published by Mugler et al. in 1997. 129Xe MRI has largely begun to replace 3He gas MRI, a very similar technology that uses hyperpolarized 3He instead of 129Xe. Grossman et al. began human clinical trials for 3He MRI in 1996. 3He was originally touted as the better gas for hyperpolarized gas MRI because it is more polarizable and has no effects on the body.
However, 3He is mostly produced by the beta decay of tritium (3H), which is a product of nuclear warhead production. Additionally, 3He is widely used by the U.S. military to detect smuggled plutonium. This combination of increasing scarcity and increasing demand has made 3He highly expensive, at times more than $1,000 per liter. Safety 129Xe is an inert, non-radioactive, non-toxic, and non-teratogenic molecule that has shown no significant adverse health effects when inhaled for MR imaging. One potential area of concern is 129Xe's anesthetic properties when a large volume is inhaled. Xenon shows blood and tissue solubility that allows it to diffuse through the lung membrane and affect the nervous system. The minimum alveolar concentration (MAC), at which motor response is prevented in 50% of subjects, is 0.71, which is not reached during imaging. Further studies have shown that it provides good circulatory stability when dissolved in blood and does not affect body temperature. Hyperpolarization When an external magnetic field is applied to a gas, the nuclear spins of the gas atoms point either towards the direction of the magnetic field or in the opposite direction. Alignment with the magnetic field is slightly more energetically favorable, meaning that one of the spin states is in slight excess of the other. This excess means that the two spin states do not completely cancel each other out, creating a magnetic signal which can be observed with MRI. However, for traditional 1H MRI, only about 4 ppm of the spin states do not cancel, so the signal is not particularly strong. This means that only regions with high densities of protons, like muscle tissue, can be seen. Hyperpolarization is a means of flipping more of the atoms to have the same spin state so that fewer of the spin states cancel each other. In the case of 129Xe, this leads to a 10^4–10^5-fold improvement in signal strength. Hyperpolarization of 129Xe is usually performed using spin-exchange optical pumping (SEOP), which uses circularly polarized light to add angular momentum to the atoms. However, the polarized light cannot directly transfer angular momentum to the gas nuclei; thus, an alkali metal atom is used as an intermediary. Rubidium is often used for this purpose, with the polarized light tuned to provide exactly the energy necessary to excite rubidium's valence electron. This process is called optical pumping. In the next step, spin exchange, gas nuclei are introduced to the system and collide with the rubidium. They receive angular momentum in the collisions with rubidium valence electrons, which, by conservation of angular momentum, is in the same direction as the rubidium's. Therefore, 129Xe becomes hyperpolarized because there is a large excess of one spin state compared to the other. After this, the 129Xe is extracted, the rubidium is polarized again, and the cycle continues. Required modifications to conventional MRI Traditional MR scanners need to be modified to detect 129Xe, as 129Xe has a lower gyromagnetic ratio of 11.78 MHz/T compared to that of protons, 42.58 MHz/T. Thus, the Larmor frequency of 129Xe is much lower, which is difficult to detect with conventional narrow-band RF amplifiers set to the proton Larmor frequency. Therefore, a broad-band RF amplifier, for both excitation and receiving, is required. Additionally, the pulse sequence must also accommodate the difference between thermally-polarized protons and hyperpolarized 129Xe.
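To put the hyperpolarization gain in perspective, the following Python sketch (an illustrative calculation only; the 20% hyperpolarized level is an assumed representative value, and the physical constants are rounded) compares the thermal-equilibrium polarization of 1H and 129Xe at a clinical field strength with a typical hyperpolarized level:

```python
import math

HBAR = 1.0546e-34  # reduced Planck constant, J*s
K_B = 1.381e-23    # Boltzmann constant, J/K

def thermal_polarization(gamma_mhz_per_t, b0_tesla, temp_kelvin):
    """Boltzmann polarization of a spin-1/2 nucleus: P = tanh(gamma*hbar*B0 / (2*kB*T))."""
    gamma_rad = 2 * math.pi * gamma_mhz_per_t * 1e6  # MHz/T -> rad/(s*T)
    return math.tanh(gamma_rad * HBAR * b0_tesla / (2 * K_B * temp_kelvin))

B0, T = 1.5, 310.0                         # clinical field (T) and body temperature (K)
p_h = thermal_polarization(42.58, B0, T)   # proton
p_xe = thermal_polarization(11.78, B0, T)  # 129Xe, gyromagnetic ratio quoted above
p_hyper = 0.20                             # assumed hyperpolarized 129Xe level (illustrative)

print(f"thermal 1H polarization:    {p_h:.2e}")             # ~5e-6, i.e. a few ppm
print(f"thermal 129Xe polarization: {p_xe:.2e}")
print(f"hyperpolarization gain:     {p_hyper / p_xe:.1e}")  # ~1e5, within the 10^4-10^5 range above
```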
In proton MRI, a typical pulse sequence would involve a 90° flip followed by T1 longitudinal relaxation back toward alignment with the external magnetic field. T1 relaxation in hyperpolarized gas instead involves the decay of magnetization, not a return toward thermal equilibrium with the external magnetic field as in thermally-polarized protons. Therefore, after a 90° flip, the longitudinal recovery of hyperpolarized gas nuclei is negligible, and the longitudinal magnetization remains zero after the flip. As a result, traditional 90° and 180° RF pulses are not desirable. A low-angle RF pulse is therefore used to remove only a portion of the total available magnetization of the hyperpolarized 129Xe gas. This produces longitudinal magnetization comparable between protons and 129Xe gas. Furthermore, as an image needs to be acquired within a breath-hold, fast pulse sequences, such as fast gradient echoes, are used to adequately sample the k-space. Applications Ventilation MRI After a patient inhales the hyperpolarized gas, the gas passes through the airways within the lungs. In a healthy lung, the gas is able to travel throughout the lungs. However, in a disease that obstructs airways, such as chronic obstructive pulmonary disease (COPD), asthma, and cystic fibrosis, the hyperpolarized gas is unable to reach certain regions within the lung. Thus, a spin-density weighted image will produce high signals from normal areas and low signals from diseased regions. 3He was originally used for this type of image, but recently there has been a shift towards 129Xe due to its greater availability and lower price. Hyperpolarized 3He has historically produced superior images because it is easier to hyperpolarize, but current technology has improved gas polarization of 129Xe to the point where the image quality is similar. Furthermore, 129Xe is more sensitive to obstructions as it is a larger atom than 3He. In addition, a larger inhaled volume of 129Xe (up to 1 liter, versus 0.1–0.3 liters for 3He) results in a comparable SNR. Diffusion MRI Diffusion MRI involves calculating the apparent diffusion coefficient (ADC) of the hyperpolarized gas. Diffusion-sensitizing gradients are applied to induce diffusion-based attenuation, from which the ADC is calculated. These gradients have an associated b-value, which represents the strength and duration of the gradients. At least two different b-value gradients are used to calculate the ADC (a numerical sketch follows this section). The ADC provides information regarding how the structure of the lung restricts the hyperpolarized gas diffusion. The value of the ADC increases in regions of increased space. For example, in healthy lungs, the ADC using 129Xe might be around 0.04 cm2/s, whereas the ADC for 129Xe in an open space may be around 0.14 cm2/s. In emphysema, where alveolar structures enlarge, the gas is able to diffuse more freely, resulting in a higher ADC compared to normal regions and thereby providing information about diseased areas. Ultimately, this is a novel imaging modality enabled by 129Xe MRI, and its use is being investigated for chronic obstructive pulmonary disease, asthma, cystic fibrosis, long COVID, and other diseases. Partial pressure of oxygen The longitudinal relaxation (T1) of the hyperpolarized gas is inversely proportional to the concentration of oxygen in the lung. Interaction with paramagnetic oxygen significantly decreases the relaxation time, which offers insights into the partial pressure of oxygen (pO2) within regions of the lung. Additionally, the ventilation-to-perfusion ratio can be calculated from these images.
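As a worked illustration of the two-b-value ADC estimate used in the diffusion MRI section above, this short sketch inverts the standard attenuation relation S(b) = S(0)·exp(−b·ADC); the signal intensities and b-values here are invented for illustration, not measured data:

```python
import math

def adc_two_point(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient from two b-values, assuming S(b) = S(0)*exp(-b*ADC)."""
    return math.log(s_low / s_high) / (b_high - b_low)

# Hypothetical signal intensities (arbitrary units) at two diffusion weightings:
s0, s1 = 1000.0, 850.0   # invented example values
b0, b1 = 0.0, 4.0        # b-values in s/cm^2, so the ADC comes out in cm^2/s

adc = adc_two_point(s0, s1, b0, b1)
print(f"ADC = {adc:.3f} cm^2/s")  # ~0.041, near the healthy-lung 129Xe value (~0.04) quoted above
```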
For pO2 imaging, most research has employed 3He, but improved technology has allowed for comparable results when using 129Xe. However, due to the uptake of 129Xe by the body, its relaxation is much quicker than that of 3He, resulting in a higher apparent pO2 if left unaccounted for. Research 129Xe gas MRI is being researched as a diagnostic test for respiratory diseases, such as COPD, asthma, and emphysema. Spirometry pulmonary function tests are used to determine the condition of lung function. However, this is a fairly basic, global assessment of lung function that does not provide specific information about lung structure and physiology. For structural information, X-ray CT is most commonly used, but it exposes the patient to high doses of ionizing radiation and provides no functional information. Conventional 1H MRI is not effective in the lung airspace because of the minimal proton density. 129Xe gas MRI provides detailed, specific information about lung structure and function that is not safely or efficiently obtainable by existing technologies. Visualizing non-lung tissues 129Xe gas is most commonly used to visualize the lung because it is a gas. However, small amounts of xenon gas are capable of dissolving into the bloodstream at the alveoli. As the dissolved xenon travels around the body, it can be used to gain insight into other regions of the body. 129Xe gas is capable of crossing the blood–brain barrier, allowing novel study of brain perfusion. Improving amount of hyperpolarization Using hyperpolarized gas to image the lungs is not particularly novel, as the use of 3He was established in the early 2000s. 3He was originally chosen because it was easily hyperpolarized to a very large degree and therefore generated a very strong signal. Recently, improvements in hyperpolarization techniques have made it possible to generate more highly polarized 129Xe, enabling images comparable to those obtained with 3He. References Magnetic resonance imaging MRI contrast agents
Xenon gas MRI
[ "Chemistry" ]
2,714
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
69,442,178
https://en.wikipedia.org/wiki/Cubic%20equations%20of%20state
Cubic equations of state are a specific class of thermodynamic models that describe the pressure of a gas as a function of temperature and density, and that can be rewritten as a cubic function of the molar volume. Equations of state are generally applied in the fields of physical chemistry and chemical engineering, particularly in the modeling of vapor–liquid equilibrium and chemical engineering process design. Van der Waals equation of state The van der Waals equation of state may be written as $$p = \frac{RT}{V_m - b} - \frac{a}{V_m^2},$$ where $T$ is the absolute temperature, $p$ is the pressure, $V_m$ is the molar volume and $R$ is the universal gas constant. Note that $V_m = V/n$, where $V$ is the volume, and $n = N/N_A$, where $n$ is the number of moles, $N$ is the number of particles, and $N_A$ is the Avogadro constant. These definitions apply to all equations of state below as well. Proposed in 1873, the van der Waals equation of state was one of the first to perform markedly better than the ideal gas law. In this equation, $a$ is usually called the attraction parameter and $b$ the repulsion parameter (or the effective molecular volume). While the equation is definitely superior to the ideal gas law and does predict the formation of a liquid phase, the agreement with experimental data for vapor–liquid equilibria is limited. The van der Waals equation is commonly referenced in textbooks and papers for historical and other reasons, but since its development other equations of only slightly greater complexity have been developed, many of which are far more accurate. The van der Waals equation may be considered as an ideal gas law which has been "improved" by the inclusion of two non-ideal contributions to the equation. Consider the van der Waals equation in the form $$p = \frac{RT}{V_m - b} - \frac{a}{V_m^2}$$ as compared to the ideal gas equation $$p = \frac{RT}{V_m}.$$ The form of the van der Waals equation can be motivated as follows: Molecules are thought of as particles which occupy a finite volume. Thus the physical volume is not accessible to all molecules at any given moment, raising the pressure slightly compared to what would be expected for point particles. Thus ($V_m - b$), an "effective" molar volume, is used instead of $V_m$ in the first term. While ideal gas molecules do not interact, real molecules will exhibit attractive van der Waals forces if they are sufficiently close together. The attractive forces, which are proportional to the density $\rho$, tend to retard the collisions that molecules have with the container walls and lower the pressure. The number of collisions that are so affected is also proportional to the density. Thus, the pressure is lowered by an amount proportional to $\rho^2$, or inversely proportional to the squared molar volume. The substance-specific constants $a$ and $b$ can be calculated from the critical properties $V_c$ and $p_c$ (noting that $V_c$ is the molar volume at the critical point and $p_c$ is the critical pressure) as: $$a = 3 p_c V_c^2, \qquad b = \frac{V_c}{3}.$$ Expressions for $a$ and $b$ written as functions of the critical temperature $T_c$ and $p_c$ may also be obtained and are often used to parameterize the equation, because the critical temperature and pressure are readily accessible to experiment. They are $$a = \frac{27 R^2 T_c^2}{64 p_c}, \qquad b = \frac{R T_c}{8 p_c}.$$ With the reduced state variables, i.e. $V_r = V_m/V_c$, $p_r = p/p_c$ and $T_r = T/T_c$, the reduced form of the van der Waals equation can be formulated: $$\left(p_r + \frac{3}{V_r^2}\right)\left(3 V_r - 1\right) = 8 T_r.$$ The benefit of this form is that for given $T_r$ and $p_r$, the reduced volume of the liquid and gas can be calculated directly using Cardano's method for the reduced cubic form: $$V_r^3 - \frac{1}{3}\left(1 + \frac{8 T_r}{p_r}\right)V_r^2 + \frac{3}{p_r}V_r - \frac{1}{p_r} = 0.$$ For $p_r < 1$ and $T_r < 1$, the system is in a state of vapor–liquid equilibrium. In that situation, the reduced cubic equation of state yields 3 solutions. The largest and the lowest solution are the gas and liquid reduced volume.
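As a quick numerical check of the reduced cubic above, the following sketch (assuming NumPy is available; the chosen reduced temperature and pressure are arbitrary illustrative inputs) finds the three reduced-volume roots in the two-phase region:

```python
import numpy as np

def vdw_reduced_volumes(t_r, p_r, tol=1e-9):
    """Real roots of the reduced van der Waals cubic:
    Vr^3 - (1/3)(1 + 8*Tr/pr)*Vr^2 + (3/pr)*Vr - 1/pr = 0."""
    coeffs = [1.0, -(1.0 + 8.0 * t_r / p_r) / 3.0, 3.0 / p_r, -1.0 / p_r]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < tol].real)

# Below the critical point (Tr < 1, pr < 1) three real roots appear; the smallest is
# the liquid reduced volume, the largest the gas volume (the middle root is unphysical):
print(vdw_reduced_volumes(t_r=0.9, p_r=0.5))  # approximately [0.64, 0.85, 3.64]
```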
In this vapor–liquid equilibrium situation, the Maxwell construction is sometimes used to model the pressure as a function of molar volume. The compressibility factor $Z = pV_m/(RT)$ is often used to characterize non-ideal behavior. For the van der Waals equation in reduced form, this becomes $$Z = \frac{3 V_r}{3 V_r - 1} - \frac{9}{8 V_r T_r}.$$ At the critical point, $Z_c = 3/8 = 0.375$. Redlich–Kwong equation of state Introduced in 1949, the Redlich–Kwong equation of state was considered to be a notable improvement to the van der Waals equation. It is still of interest primarily due to its relatively simple form. While superior to the van der Waals equation in some respects, it performs poorly with respect to the liquid phase and thus cannot be used for accurately calculating vapor–liquid equilibria. However, it can be used in conjunction with separate liquid-phase correlations for this purpose. The equation is given below, as are relationships between its parameters and the critical constants: $$p = \frac{RT}{V_m - b} - \frac{a}{\sqrt{T}\,V_m\left(V_m + b\right)}, \qquad a = \frac{0.42748\, R^2 T_c^{5/2}}{p_c}, \qquad b = \frac{0.08664\, R T_c}{p_c}.$$ Another, equivalent form of the Redlich–Kwong equation is the expression of the model's compressibility factor: $$Z = \frac{1}{1 - h} - \frac{a}{b R T^{3/2}} \cdot \frac{h}{1 + h}, \qquad h = \frac{b}{V_m}.$$ The Redlich–Kwong equation is adequate for calculation of gas phase properties when the reduced pressure (defined in the previous section) is less than about one-half of the reduced temperature: $$p_r < \frac{T_r}{2}.$$ The Redlich–Kwong equation is consistent with the theorem of corresponding states. When the equation is expressed in reduced form, an identical equation is obtained for all gases: $$p_r = \frac{3 T_r}{V_r - b'} - \frac{1}{b' \sqrt{T_r}\, V_r \left(V_r + b'\right)},$$ where $b'$ is: $$b' = 2^{1/3} - 1 \approx 0.25992.$$ In addition, the compressibility factor at the critical point is the same for every substance: $$Z_c = \frac{1}{3} \approx 0.333.$$ This is an improvement over the van der Waals equation prediction of the critical compressibility factor, which is $Z_c = 3/8 = 0.375$. Typical experimental values are $Z_c = 0.274$ (carbon dioxide), $Z_c = 0.229$ (water), and $Z_c = 0.290$ (nitrogen). Soave modification of Redlich–Kwong A modified form of the Redlich–Kwong equation was proposed by Soave. It takes the form $$p = \frac{RT}{V_m - b} - \frac{a\,\alpha}{V_m\left(V_m + b\right)},$$ with $$a = \frac{0.42748\, R^2 T_c^2}{p_c}, \qquad b = \frac{0.08664\, R T_c}{p_c}, \qquad \alpha = \left(1 + \left(0.48508 + 1.55171\,\omega - 0.15613\,\omega^2\right)\left(1 - T_r^{1/2}\right)\right)^2,$$ where ω is the acentric factor for the species. The formulation for $\alpha$ above is actually due to Graboski and Daubert. The original formulation from Soave is: $$\alpha = \left(1 + \left(0.480 + 1.574\,\omega - 0.176\,\omega^2\right)\left(1 - T_r^{1/2}\right)\right)^2,$$ and for hydrogen: $$\alpha = 1.202\, e^{-0.30288\, T_r}.$$ By substituting the reduced state variables and the compressibility factor at the critical point, $Z_c = 1/3$, we obtain the reduced form $$p_r = \frac{3 T_r}{V_r - b'} - \frac{\alpha}{b'\, V_r\left(V_r + b'\right)}, \qquad b' = 2^{1/3} - 1.$$ Thus, the Soave–Redlich–Kwong equation in reduced form depends on the ω of the substance, contrary to both the VdW and RK equations, which are consistent with the theorem of corresponding states and whose reduced form is one for all substances. We can also write it in the polynomial form, with: $$A = \frac{a \alpha p}{R^2 T^2}, \qquad B = \frac{b p}{R T}.$$ In terms of the compressibility factor, we have: $$Z^3 - Z^2 + \left(A - B - B^2\right) Z - A B = 0.$$ This equation may have up to three roots. The maximal root of the cubic equation generally corresponds to a vapor state, while the minimal root is for a liquid state. This should be kept in mind when using cubic equations in calculations, e.g., of vapor–liquid equilibrium. In 1972 G. Soave replaced the $1/\sqrt{T}$ term of the Redlich–Kwong equation with a function α(T,ω) involving the temperature and the acentric factor (the resulting equation is also known as the Soave–Redlich–Kwong equation of state; SRK EOS). The α function was devised to fit the vapor pressure data of hydrocarbons, and the equation does fairly well for these materials. Note especially that this replacement changes the definition of $a$ slightly, as $T_c$ is now raised to the second power. Volume translation of Peneloux et al. (1982) The SRK EOS may be written as $$p = \frac{RT}{V_m - b} - \frac{a}{V_m\left(V_m + b\right)},$$ where $a = a_c\,\alpha(T)$ and the other parts of the SRK EOS are defined in the SRK EOS section. A downside of the SRK EOS, and of other cubic EOS, is that the liquid molar volume is significantly less accurate than the gas molar volume.
Peneloux et al. (1982) proposed a simple correction for this by introducing a volume translation $$V_m^{\mathrm{SRK}} = V_m + c,$$ where $c$ is an additional fluid component parameter that translates the molar volume slightly. On the liquid branch of the EOS, a small change in molar volume corresponds to a large change in pressure. On the gas branch of the EOS, a small change in molar volume corresponds to a much smaller change in pressure than for the liquid branch. Thus, the perturbation of the molar gas volume is small. Unfortunately, there are two versions that occur in science and industry. In the first version only $V_m$ is translated, and the EOS becomes $$p = \frac{RT}{V_m + c - b} - \frac{a}{\left(V_m + c\right)\left(V_m + c + b\right)}.$$ In the second version both $V_m$ and $b$ are translated, or the translation of $V_m$ is followed by a renaming of the composite parameter $b - c$. This gives $$b' = b - c, \qquad p = \frac{RT}{V_m - b'} - \frac{a}{\left(V_m + c\right)\left(V_m + b' + 2c\right)}.$$ The c-parameter of a fluid mixture is calculated by $$c = \sum_{i=1}^{n} z_i c_i.$$ The c-parameter of the individual fluid components in a petroleum gas and oil can be estimated by the correlation $$c_i = 0.40768\, \frac{R T_{ci}}{p_{ci}}\left(0.29441 - Z_{\mathrm{RA},i}\right),$$ where the Rackett compressibility factor can be estimated by $$Z_{\mathrm{RA},i} = 0.29056 - 0.08775\, \omega_i.$$ A nice feature of the volume translation method of Peneloux et al. (1982) is that it does not affect the vapor–liquid equilibrium calculations. This method of volume translation can also be applied to other cubic EOSs if the c-parameter correlation is adjusted to match the selected EOS. Peng–Robinson equation of state The Peng–Robinson equation of state (PR EOS) was developed in 1976 at the University of Alberta by Ding-Yu Peng and Donald Robinson in order to satisfy the following goals: The parameters should be expressible in terms of the critical properties and the acentric factor. The model should provide reasonable accuracy near the critical point, particularly for calculations of the compressibility factor and liquid density. The mixing rules should not employ more than a single binary interaction parameter, which should be independent of temperature, pressure, and composition. The equation should be applicable to all calculations of all fluid properties in natural gas processes. The equation is given as follows: $$p = \frac{RT}{V_m - b} - \frac{a\,\alpha}{V_m^2 + 2 b V_m - b^2},$$ with $$a = \frac{0.45724\, R^2 T_c^2}{p_c}, \qquad b = \frac{0.07780\, R T_c}{p_c}, \qquad \alpha = \left(1 + \kappa\left(1 - T_r^{1/2}\right)\right)^2, \qquad \kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2.$$ In polynomial form, with $A = a \alpha p / (R^2 T^2)$ and $B = b p / (R T)$: $$Z^3 - \left(1 - B\right) Z^2 + \left(A - 2B - 3B^2\right) Z - \left(AB - B^2 - B^3\right) = 0.$$ For the most part the Peng–Robinson equation exhibits performance similar to the Soave equation, although it is generally superior in predicting the liquid densities of many materials, especially nonpolar ones. Detailed performance of the original Peng–Robinson equation has been reported for density, thermal properties, and phase equilibria. Briefly, the original form exhibits deviations in vapor pressure and phase equilibria that are roughly three times as large as the updated implementations. The departure functions of the Peng–Robinson equation are given in a separate article. The analytic values of its characteristic constants are $\Omega_a \approx 0.45724$, $\Omega_b \approx 0.07780$, and $Z_c \approx 0.30740$. Peng–Robinson–Stryjek–Vera equations of state PRSV1 A modification to the attraction term in the Peng–Robinson equation of state published by Stryjek and Vera in 1986 (PRSV) significantly improved the model's accuracy by introducing an adjustable pure component parameter and by modifying the polynomial fit of the acentric factor. The modification is: $$\kappa = \kappa_0 + \kappa_1\left(1 + T_r^{1/2}\right)\left(0.7 - T_r\right),$$ $$\kappa_0 = 0.378893 + 1.4897153\,\omega - 0.17131848\,\omega^2 + 0.0196554\,\omega^3,$$ where $\kappa_1$ is an adjustable pure component parameter. Stryjek and Vera published pure component parameters for many compounds of industrial interest in their original journal article. At reduced temperatures above 0.7, they recommend setting $\kappa_1 = 0$ and simply using $\kappa = \kappa_0$. For alcohols and water the value of $\kappa_1$ may be used up to the critical temperature and set to zero at higher temperatures.
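To show how the Peng–Robinson polynomial form above is applied in practice, here is a minimal sketch (assuming NumPy; the methane critical constants are approximate textbook values used for illustration) that computes the compressibility factor roots at a given temperature and pressure:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def pr_z_factors(T, p, Tc, pc, omega):
    """Real roots of the Peng-Robinson cubic:
    Z^3 - (1-B)Z^2 + (A - 2B - 3B^2)Z - (AB - B^2 - B^3) = 0."""
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    A = a * alpha * p / (R * T)**2
    B = b * p / (R * T)
    coeffs = [1.0, -(1.0 - B), A - 2*B - 3*B**2, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Approximate methane critical data (illustrative): Tc = 190.6 K, pc = 4.599e6 Pa, omega = 0.011.
# Well above Tc only one real root (a single gas phase) appears, with Z slightly below 1:
print(pr_z_factors(T=300.0, p=1.0e6, Tc=190.6, pc=4.599e6, omega=0.011))
```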
PRSV2 A subsequent modification published in 1986 (PRSV2) further improved the model's accuracy by introducing two additional pure component parameters to the previous attraction term modification. The modification is: $$\kappa = \kappa_0 + \left[\kappa_1 + \kappa_2\left(\kappa_3 - T_r\right)\left(1 - T_r^{1/2}\right)\right]\left(1 + T_r^{1/2}\right)\left(0.7 - T_r\right),$$ where $\kappa_1$, $\kappa_2$, and $\kappa_3$ are adjustable pure component parameters. PRSV2 is particularly advantageous for VLE calculations. While PRSV1 does offer an advantage over the Peng–Robinson model for describing thermodynamic behavior, it is still not accurate enough, in general, for phase equilibrium calculations. The highly non-linear behavior of phase-equilibrium calculation methods tends to amplify what would otherwise be acceptably small errors. It is therefore recommended that PRSV2 be used for equilibrium calculations when applying these models to a design. However, once the equilibrium state has been determined, the phase-specific thermodynamic values at equilibrium may be determined by one of several simpler models with a reasonable degree of accuracy. One thing to note is that in the PRSV equation, the parameter fit is done in a particular temperature range which is usually below the critical temperature. Above the critical temperature, the PRSV alpha function tends to diverge and become arbitrarily large instead of tending towards 0. Because of this, alternate equations for alpha should be employed above the critical point. This is especially important for systems containing hydrogen, which is often found at temperatures far above its critical point. Several alternate formulations have been proposed. Some well known ones are by Twu et al. and by Mathias and Copeman. An extensive treatment of over 1700 compounds using the Twu method has been reported by Jaubert and coworkers. Detailed performance of the updated Peng–Robinson equation by Jaubert and coworkers has been reported for density, thermal properties, and phase equilibria. Briefly, the updated form exhibits deviations in vapor pressure and phase equilibria that are roughly a third as large as the original implementation. Peng–Robinson–Babalola–Susu equation of state (PRBS) Babalola and Susu modified the Peng–Robinson equation of state by treating the attractive force parameter 'a', which is a constant with respect to pressure in the Peng–Robinson equation of state, as a variable with respect to pressure for multicomponent, multi-phase, high-density reservoir systems, in order to improve accuracy in the prediction of properties of complex reservoir fluids for PVT modeling. The variation was represented with a linear equation, $$a = a_1 p + a_2,$$ where $a_1$ and $a_2$ are the slope and the intercept, respectively, of the straight line obtained when values of parameter 'a' are plotted against pressure. This modification increases the accuracy of the Peng–Robinson equation of state for heavier fluids, particularly at high pressure ranges (>30 MPa), and eliminates the need for tuning the original Peng–Robinson equation of state; tuning is captured inherently by the modification. The Peng–Robinson–Babalola–Susu (PRBS) equation of state (EoS) was developed in 2005, and for about two decades now it has been applied to numerous reservoir field data at varied temperature (T) and pressure (P) conditions and shown to rank among the few promising EoS for accurate prediction of reservoir fluid properties, especially for more challenging ultra-deep reservoirs at high-temperature high-pressure (HTHP) conditions. These works have been published in reputable journals.
While the widely used Peng–Robinson (PR) EoS of 1976 can predict fluid properties of conventional reservoirs with good accuracy up to pressures of about 27 MPa (4,000 psi), it fails as pressure increases; the newer Peng–Robinson–Babalola–Susu (PRBS) EoS can accurately model the PVT behavior of complex ultra-deep reservoir fluid systems at very high pressures of up to 120 MPa (17,500 psi). Elliott–Suresh–Donohue equation of state The Elliott–Suresh–Donohue (ESD) equation of state was proposed in 1990. The equation corrects the inaccurate van der Waals repulsive term that is also applied in the Peng–Robinson EOS. The attractive term includes a contribution that relates to the second virial coefficient of square-well spheres, and also shares some features of the Twu temperature dependence. The EOS accounts for the effect of the shape of any molecule and can be directly extended to polymers with molecular parameters characterized in terms of solubility parameter and liquid volume instead of using critical properties (as shown here). The EOS itself was developed through comparisons with computer simulations and should capture the essential physics of size, shape, and hydrogen bonding as inferred from straight-chain molecules (like n-alkanes). The EOS is written as a sum of repulsive and attractive contributions to the compressibility factor, $$Z = 1 + Z^{\mathrm{rep}} + Z^{\mathrm{att}},$$ where $c$ is a "shape factor", with $c = 1$ for spherical molecules. For non-spherical molecules, the following relation between the shape factor and the acentric factor is suggested: $c = 1 + 3.535\,\omega + 0.533\,\omega^2$. The reduced number density $\eta$ is defined as $\eta = b\rho$, where $b$ is the characteristic size parameter [cm3/mol] and $\rho$ is the molar density [mol/cm3]. The characteristic size parameter is related to the shape factor through relations given with the equation of state. The shape parameter $q$ appearing in the attraction term and the term $Y$ are given by $$q = 1 + 1.90476\,(c - 1)$$ (and $q$ is hence also equal to 1 for spherical molecules) and $$Y = \exp\left(\frac{\epsilon}{kT}\right) - k_2,$$ where $\epsilon$ is the depth of the square-well potential. $z_m$, $k_1$, $k_2$, and $k_3$ are constants in the equation of state: $z_m = 9.5$, $k_1 = 1.7745$, $k_2 = 1.0617$, $k_3 = 1.90476$. The model can be extended to associating components and mixtures with non-associating components. Details are in the paper by J.R. Elliott, Jr. et al. (1990). Noting that the repulsive constant $k_3 = 1.90476$ is close to 1.900, the EOS can be rewritten in the SAFT form, with the shape factor replaced by the segment number $m$ in SAFT notation. In this form, SAFT's segmental perspective is evident and all the results of Michael Wertheim are directly applicable and relatively succinct. In SAFT's segmental perspective, each molecule is conceived as comprising m spherical segments floating in space with their own spherical interactions, but then corrected for bonding into a tangent-sphere chain by the (m − 1) term. When m is not an integer, it is simply considered as an "effective" number of tangent-sphere segments. Solving the equations in Wertheim's theory can be complicated, but simplifications can make their implementation less daunting. Briefly, a few extra steps are needed to compute the association contribution given density and temperature. For example, when the number of hydrogen bonding donors is equal to the number of acceptors, the ESD equation acquires an association term in which the Avogadro constant appears and the volume and energy of hydrogen bonding are stored input parameters. The number of acceptors (equal to the number of donors for this example) is 1 for alcohols like methanol and ethanol, 2 for water, and equal to the degree of polymerization for polyvinylphenol. So one uses the density and temperature to calculate the fraction of sites that are not hydrogen-bonded, and then uses that fraction to calculate the other quantities.
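As a generic illustration of that last step, the sketch below solves a simple Wertheim-type closure for the fraction of unbonded sites in the symmetric donor/acceptor case; this is a schematic stand-in, not the exact ESD working equations, and the lumped parameter rho*Delta is introduced here only for illustration:

```python
import math

def fraction_unbonded(rho_delta):
    """Fraction X of sites not hydrogen-bonded for a symmetric donor/acceptor case.
    X solves X = 1 / (1 + rho*Delta*X), i.e. rho*Delta*X^2 + X - 1 = 0, whose
    physical root is X = (-1 + sqrt(1 + 4*rho*Delta)) / (2*rho*Delta)."""
    if rho_delta == 0.0:
        return 1.0  # no association at all
    return (-1.0 + math.sqrt(1.0 + 4.0 * rho_delta)) / (2.0 * rho_delta)

# rho*Delta lumps density, temperature, and the bonding volume/energy parameters;
# large values mean strong association (X -> 0), zero means none (X = 1):
for rd in (0.0, 0.1, 1.0, 10.0, 100.0):
    print(f"rho*Delta = {rd:6.1f} -> X = {fraction_unbonded(rd):.3f}")
```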
Technically, the ESD equation is no longer cubic when the association term is included, but no artifacts are introduced, so there are still only three roots in density. The extension to efficiently treat any number of electron acceptors (acids) and donors (bases), including mixtures of self-associating, cross-associating, and non-associating compounds, has been presented here. Detailed performance of the ESD equation has been reported for density, thermal properties, and phase equilibria. Briefly, the ESD equation exhibits deviations in vapor pressure and vapor–liquid equilibria that are roughly twice as large as the Peng–Robinson form as updated by Jaubert and coworkers, but deviations in liquid–liquid equilibria are roughly 40% smaller. Cubic-plus-association The cubic-plus-association (CPA) equation of state combines the Soave–Redlich–Kwong equation with the association term from SAFT, based on Chapman's extensions and simplifications of a theory of associating molecules due to Michael Wertheim. The development of the equation began in 1995 as a research project that was funded by Shell, and it was published in 1996. In the association term, $X^A$ is the mole fraction of molecules not bonded at site A. Cubic-plus-chain equation of state The cubic-plus-chain (CPC) equation of state hybridizes the classical cubic equation of state with the SAFT chain term. The addition of the chain term allows the model to capture the physics of both short-chain and long-chain non-associating components, ranging from alkanes to polymers. The CPC monomer term is not restricted to one classical cubic EOS form; instead, many forms can be used within the same framework. The cubic-plus-chain (CPC) equation of state is written in terms of the reduced residual Helmholtz energy as the sum of the monomer repulsive ("rep") and attractive ("att") contributions of the cubic equation of state, scaled by the chain length, plus a "chain" term that accounts for the monomer-bead bonding contribution from the SAFT equation of state. Using Redlich–Kwong (RK) for the monomer term, CPC can be written in terms of A, the molecular interaction energy parameter; B, the co-volume parameter; the mole-average chain length; g(β), the radial distribution function (RDF) evaluated at contact; and β, the reduced volume. The CPC model combines simplicity and speed relative to other, more complex models used to model polymers. Sisco et al. applied the CPC equation of state to model different well-defined and polymeric mixtures. They analyzed different factors, including elevated pressure, temperature, solvent types, and polydispersity. The CPC model proved capable of modeling different systems, with its results tested against experimental data. Alajmi et al. incorporated short-range soft repulsion into the CPC framework to enhance vapor pressure and liquid density predictions. They provided a database for more than 50 components from different chemical families, including n-alkanes, alkenes, branched alkanes, cycloalkanes, benzene derivatives, and gases. This CPC version uses a temperature-dependent co-volume parameter based on perturbation theory to describe short-range soft repulsion between molecules. References Equations of state
Cubic equations of state
[ "Physics" ]
4,268
[ "Statistical mechanics", "Equations of state", "Equations of physics" ]
72,447,224
https://en.wikipedia.org/wiki/Ana%20Fl%C3%A1via%20Nogueira
Ana Flávia Nogueira (born 1973) is a Brazilian chemist who has been a Full Professor at the State University of Campinas (Unicamp) since 2004. Her research considers nanostructured materials for solar energy conversion. She was elected to the Brazilian Academy of Sciences in 2022. Early life and education Ana Flávia Nogueira was born in Bragança Paulista in Brazil. She earned an undergraduate degree in chemistry at the University of São Paulo, then completed graduate research at Unicamp, where she started investigating polymer electrolytes for dye-sensitized solar cells. Her doctoral research was the first in Brazil to make use of dye-sensitized solar cells. She spent part of her PhD at Imperial College London, where she worked with Prof. James Durrant on ultrafast laser spectroscopy. After earning her doctorate in inorganic chemistry in 2001, she returned to the Durrant laboratory in London. After leaving London, she worked with Prof. Niyazi Serdar Sarıçiftçi, then returned to Brazil in 2003, working in the group of Prof. Henrique E. Toma. In 2017 she was a visiting professor at Stanford University, working in collaboration with Prof. Michael F. Toney on the use of synchrotron radiation applied to halide perovskite solar cells. Research and career Ana Flávia Nogueira established her own research group at Unicamp, where she investigates nanostructured materials for solar energy conversion based on oxides, chalcogenides, and carbon structures. In recent years, her particular focus has been on perovskite solar cells and perovskite quantum dots for emission applications. Her group is also investigating cheap and abundant photocatalysts for green hydrogen production. Alongside novel materials science, she partnered with the Brazilian Synchrotron Light Laboratory to develop in situ experiments using synchrotron radiation to study the formation, crystallization, and degradation of halide perovskite structures. Ana Flávia Nogueira was announced as director of CINE, the Center for Innovation in New Energies, in 2020. In 2022, she was elected to the Brazilian Academy of Sciences and Fellow of the Royal Society of Chemistry. Ana Flávia is an associate editor of the Journal of Materials Chemistry C and Materials Advances, both from the Royal Society of Chemistry. Awards and honours 2020 Award for Brazilian Women in Chemistry and Related Sciences 2022 Elected to the Brazilian Academy of Sciences Selected publications In Situ and Operando Characterizations of Metal Halide Perovskite and Solar Cells: Insights from Lab-Sized Devices to Upscaling Processes, Rodrigo Szostak et al., Chemical Reviews 2023, 123, 6, 3160–3236. https://pubs.acs.org/doi/abs/10.1021/acs.chemrev.2c00382 Structural Origins of Light-Induced Phase Segregation in Organic-Inorganic Halide Perovskite Photovoltaic Materials, Rachel E. Beal et al., Matter, v. 2, i. 1, p. 207–219, 2020. https://doi.org/10.1016/j.matt.2019.11.001 Dye-Sensitized Nanocrystalline Solar Cells Employing a Polymer Electrolyte, A. F. Nogueira et al., Advanced Materials, Volume 13, Issue 11, June 2001, Pages 826–830.
https://doi.org/10.1002/1521-4095(200106)13:11<826::AID-ADMA826>3.0.CO;2-L References 1974 births Living people Brazilian women chemists Electrochemists Nanotechnologists Women materials scientists and engineers Polymer scientists and engineers University of São Paulo alumni Alumni of Imperial College London State University of Campinas alumni Academic staff of the State University of Campinas 21st-century Brazilian scientists 21st-century Brazilian women scientists Members of the Brazilian Academy of Sciences
Ana Flávia Nogueira
[ "Chemistry", "Materials_science", "Technology" ]
826
[ "Women materials scientists and engineers", "Physical chemists", "Electrochemistry", "Materials scientists and engineers", "Polymer chemistry", "Nanotechnology", "Polymer scientists and engineers", "Nanotechnologists", "Women in science and technology", "Electrochemists" ]
72,452,878
https://en.wikipedia.org/wiki/Sigma%20hole%20interactions
In chemistry, sigma hole interactions (or σ-hole interactions) are a family of intermolecular forces that can occur between several classes of molecules and arise from an energetically stabilizing interaction between a positively-charged site, termed a sigma hole, and a negatively-charged site, typically a lone pair, on different atoms that are not covalently bonded to each other. These interactions are usually rationalized primarily via dispersion, electrostatics, and electron delocalization (similar to Lewis acid/base coordination) and are characterized by a strong directional preference that allows control over supramolecular chemistry. Molecular basis of interaction The basis of a sigma hole interaction is an energetically stabilizing interaction between a positively charged site (sigma hole) and a negatively charged site (lone pair) on different atoms. The positive site is produced by a covalent sigma bond between the atom hosting the sigma hole and a neighboring atom. The presence of the bond results in the distortion of the electron density around the host atom, with the density increasing equatorially (with respect to the bond) about the atom but decreasing along the extension of the bond. Through this mechanism, a region of positive electrostatic potential, termed a sigma hole, can be localized onto the surface of an atom bearing a sigma bond. This sigma hole can then engage in electrostatic interactions with a lone pair associated with a negative electrostatic potential. In addition to the electrostatic interaction described above, dispersive forces are also thought to play a role in the overall interaction. Studies have found electrostatic and dispersive contributions to be roughly comparable in magnitude, and the dominant contributor to vary from system to system. Alternatively, sigma hole pair interactions can be conceptualized in terms of the mixing of molecular orbitals. The occupied sigma bonding orbital associated with the bond gives rise to a corresponding unoccupied sigma antibonding orbital lying on the opposite face of the atom. Mixing between the antibonding orbital and the occupied orbital associated with a lone pair would be expected to result in energetic stabilization. Several atoms, including those which are relatively electronegative (such as chlorine, oxygen, and even fluorine), can act as positive sites in sigma hole pair interactions. Counterintuitively, this can occur even when the atom acting as the positive site has an overall negative partial charge. The solution to this apparent contradiction lies in the anisotropy of the electron cloud introduced by the presence of the sigma bond. If the electronic charge is not evenly distributed around the nucleus, it remains possible for a positive partial charge to develop opposite the sigma bond in the region of electron depletion. This partial positive charge coexists with a partial negative charge of larger magnitude associated with the more electron-rich regions of the atomic surface, which results in an overall negative partial charge. Characteristics Directionality Sigma hole interactions exhibit a strong preference for linearity. Theoretical studies have shown that the interaction is most stabilizing when the negative site is colinear with the bond that gives rise to the sigma hole. As the angle between this bond and the sigma hole interaction is decreased, the strength of the interaction is generally found to decrease rapidly.
This finding is consistent with the hypothesis that the sigma hole arises from electronic anisotropy. There are cases in which the angle of interaction does differ somewhat from 180°; in these cases, the influence of additional intermolecular interactions is implicated in determining the overall geometry. Strength Consistent with Coulomb's law, there is a very strong relationship between the energetic stabilization associated with a sigma hole interaction and the product of the electrostatic potentials associated with the sigma hole and lone pair sites. Therefore, factors that increase the electrostatic potential of the sigma hole and decrease the electrostatic potential of the lone pair result in stronger interactions. The main structural factors contributing to the electrostatic potential of the sigma hole are the electronegativity of the host atom, the polarizability of the host atom, and the electron donating or withdrawing character of the group bonded to the host atom, with less electronegative and more polarizable host atoms bound to more electron-withdrawing groups associated with the highest electrostatic potential. Computed strengths (in kcal/mol) of three selected sigma hole interactions at a variety of angles illustrate these trends. At any angle, the interaction is stronger when the bromine atom hosting the sigma hole is bound to a strongly electron-withdrawing cyano group than when this atom is bound to a trifluoromethyl group, which is only moderately electron withdrawing. On the other hand, the interaction is stronger when an ammonia molecule provides the lone pair, as the electrostatic potential associated with this site is more negative than the corresponding site on hydrogen cyanide. In all cases, the interaction becomes stronger at more linear angles. Stability While the formation of a sigma hole pair interaction is associated with energetic stabilization, this process is often thermodynamically disfavored, as the energetic stabilization is often offset by a decrease in the entropy of the system. It has been shown that an enthalpy–entropy compensation relationship exists between the energetic and entropic changes associated with interactions, with more stabilizing interactions tending to result in larger entropy decreases. However, the decrease in entropy associated with the formation of a sigma hole interaction has been shown to approach a limiting value as the energetic favorability of the process is increased, and as such very energetically stabilizing interactions tend to be thermodynamically favored. There are additional factors that contribute to thermodynamic stability in the liquid and solid phases, which cannot be as easily modeled as gas-phase interactions. As such, the favorability of a given sigma hole interaction in the liquid or solid phase may not necessarily match that of the gas-phase equivalent. Geometry and vibrational spectra Atoms interacting via sigma hole interactions are often closer than the sum of their van der Waals radii. In addition, sigma hole interactions are often associated with changes in the lengths and vibrational stretching frequencies of the covalent bond that gives rise to the sigma hole. Depending on the system engaging in the interaction, either a "blue shift", in which the bond contracts and the vibrational stretching frequency increases, or a "red shift", in which the bond lengthens and the vibrational stretching frequency decreases, is possible.
The extent of these effects is related to the strength of the interaction, with stronger interactions tending to produce shorter interatomic distances between the interacting atoms and stronger red shifts. Scope The sigma hole formalism has been applied to a wide range of interactions involving electrostatic and dispersive attraction between positively and negatively charged sites. These interactions are typically classified according to the identity of the atom that hosts the positively charged site. Interaction types that are broadly accepted as subclasses of the sigma hole interaction include tetrel bonding (in which a sigma hole resides on an atom of group IV), pnictogen bonding (group V), chalcogen bonding (group VI), and halogen bonding (group VII). It remains a matter of some debate whether hydrogen bonding is best classified as a sigma hole interaction, in which the sigma hole lies on the hydrogen atom, or as a distinct class of interactions. While hydrogen bonds and sigma hole interactions of groups IV–VII both exhibit directional preferences towards linearity, the ability of hydrogen bonds to deviate from an ideal 180° angle is much greater. On the other hand, it has been argued that the underlying mechanism dictating both interactions is identical, and the observed difference in orientational preference can be attributed to a difference in the shape of the sigma holes. Applications Sigma hole interactions have applications in a variety of fields. The ability to induce stabilizing and strongly directional intermolecular interactions that can be easily tuned via minor structural substitutions makes leveraging these interactions particularly valuable in fields in which control over supramolecular organization is desired. As such, sigma hole interactions have been used in the field of crystal engineering to design molecular building blocks for self-assembly, to improve the properties of liquid crystals, and to design magnetic materials. References Wikipedia Student Program Chemical bonding Supramolecular chemistry Molecular biology
Sigma hole interactions
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
1,687
[ "Condensed matter physics", "nan", "Molecular biology", "Biochemistry", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
63,755,990
https://en.wikipedia.org/wiki/Quirin%20Vrehen
Quirinus Henricus Franciscus "Quirin" Vrehen (25 February 1932 – 5 February 2023) was a Dutch physicist. He served as head physicist of the Philips Natuurkundig Laboratorium. Life Vrehen was born on 25 February 1932 in 's-Hertogenbosch. He obtained a PhD in physics from Utrecht University in 1963 under Professor Volger, with a thesis on electron spin resonance and optical studies of solids. From 1963 to 1966 he worked in the United States at the MIT National Magnet Laboratory, where he worked on magneto-optical experiments on semiconductors. Vrehen then returned to the Netherlands and started working at the Philips Natuurkundig Laboratorium, where he served as leader of the spectroscopy group. From 1980 to 1985 he focused on laser spectroscopy. He later became head physicist at the laboratory. Vrehen was elected a member of the Royal Netherlands Academy of Arts and Sciences in 1988. He died on 5 February 2023 in Eindhoven. References 1932 births 2023 deaths 20th-century Dutch physicists Members of the Royal Netherlands Academy of Arts and Sciences People from 's-Hertogenbosch Spectroscopists Utrecht University alumni
Quirin Vrehen
[ "Physics", "Chemistry" ]
258
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
63,758,049
https://en.wikipedia.org/wiki/Yuga%20cycle
A Yuga Cycle (chatur yuga, maha yuga, etc.) is a cyclic age (epoch) in Hindu cosmology. Each cycle lasts for 4,320,000 years (12,000 divine years) and repeats four yugas (world ages): Krita (Satya) Yuga, Treta Yuga, Dvapara Yuga, and Kali Yuga. As a Yuga Cycle progresses through the four yugas, each yuga's length and humanity's general moral and physical state within each yuga decrease by one-fourth. Kali Yuga, which lasts for 432,000 years, is believed to have started in 3102 BCE. Near the end of Kali Yuga, when virtues are at their worst, a cataclysm and a re-establishment of dharma occur to usher in the next cycle's Krita (Satya) Yuga, prophesied to occur by Kalki. There are 71 Yuga Cycles in a manvantara (age of Manu) and 1,000 Yuga Cycles in a kalpa (day of Brahma). Lexicology A Yuga Cycle has several names. Age or Yuga: "Age" and "Yuga", sometimes with reverential capitalization, commonly denote a cycle of four world ages, unless expressly limited by the name of one of its minor ages (e.g. Kali Yuga). Its archaic spelling is yug, with other forms of yugam and yuge, derived from yuj, believed derived from a Proto-Indo-European root meaning 'to join or unite'. Chatur Yuga: A cyclic age encompassing the four yuga ages as defined in Hindu texts: Surya Siddhanta, Manusmriti, and Bhagavata Purana. Daiva Yuga, Deva Yuga, or Divya Yuga: A cyclic age of the divine, celestial, or gods (Devas) encompassing the four yuga ("human age" or "world age") ages. The Hindu texts give a length of 12,000 divine years, where a divine year lasts for 360 solar (human) years. Maha Yuga: A greater cyclic age encompassing the smaller four yuga ages. Yuga Cycle: A cyclic age encompassing the four yuga ages. It is theorized that the concept of the four yugas originated some time after the compilation of the four Vedas, but prior to the rest of the Hindu texts, based on the concept's absence in the former writings. It is believed that the four yugas—Krita (Satya), Treta, Dvapara, and Kali—are named after throws of an Indian game of long dice, marked with 4-3-2-1 respectively. A dice game is described in the Rigveda, Atharvaveda, Upanishads, Ramayana, Mahabharata, and Puranas, while the four yugas are described after the four Vedas with no mention of a correlation to dice. A complete description of the four yugas and their characteristics is found in the Vishnu Smriti (ch. 20), Mahabharata (e.g. Vanaparva 149, 183), Manusmriti (I.81–86), and Puranas (e.g. Brahma, ch. 122–123; Matsya, ch. 142–143; Naradiya, Purvardha, ch. 41). The four yugas are also described in the Bhagavata Purana (3.11.18–20). Duration and structure Hindu texts describe four yugas (world ages) in a Yuga Cycle—Krita (Satya) Yuga, Treta Yuga, Dvapara Yuga, and Kali Yuga—where, starting in order from the first age, each yuga's length decreases by one-fourth (25%), giving proportions of 4:3:2:1. Each yuga is described as having a main period (the yuga proper) preceded by its dawn (sandhya) and followed by its dusk (sandhyamsa), where each twilight lasts for one-tenth (10%) of its main period. Lengths are given in divine years (years of the gods), each lasting for 360 solar (human) years. Each Yuga Cycle lasts for 4,320,000 years (12,000 divine years) with its four yugas: Krita (Satya) Yuga for 1,728,000 (4,800 divine) years, Treta Yuga for 1,296,000 (3,600 divine) years, Dvapara Yuga for 864,000 (2,400 divine) years, and Kali Yuga for 432,000 (1,200 divine) years.
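The stated lengths follow directly from the 4:3:2:1 ratio and the tenth-part twilights, as the short arithmetic sketch below verifies (the dictionary keys are simply the yuga names; nothing beyond the figures above is assumed):

```python
DIVINE_YEAR = 360  # solar (human) years per divine year

# Main periods in divine years follow the 4:3:2:1 ratio (Kali's main period = 1,000),
# and each yuga adds a dawn and a dusk, each one-tenth of its main period.
yugas = {"Krita (Satya)": 4, "Treta": 3, "Dvapara": 2, "Kali": 1}

total_divine = 0
for name, ratio in yugas.items():
    main = 1000 * ratio             # main period, divine years
    full = main + 2 * (main // 10)  # add the two twilights
    total_divine += full
    print(f"{name}: {full:,} divine = {full * DIVINE_YEAR:,} solar years")

print(f"Yuga Cycle: {total_divine:,} divine = {total_divine * DIVINE_YEAR:,} solar years")
# -> 12,000 divine years = 4,320,000 solar years, matching the figures above.
```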
The current cycle's four yugas have dates based on Kali Yuga, the fourth and present age, starting in 3102 BCE; these durations and dates are affirmed in the Mahabharata (Book 12, Shanti Parva, ch. 231), the Manusmriti (ch. 1), and the Surya Siddhanta (ch. 1).
Greater cycles
There are 71 Yuga Cycles (306,720,000 years) in a manvantara, a period ruled by Manu, the progenitor of mankind. There are 1,000 Yuga Cycles (4,320,000,000 years) in a kalpa, a period that is a day (12-hour day proper) of Brahma, the creator of the planets and first living entities. There are 14 manvantaras (4,294,080,000 years) in a kalpa, with a remainder of 25,920,000 years assigned to 15 manvantara-sandhyas (junctures), each the length of a Satya Yuga (1,728,000 years). A kalpa is followed by a pralaya (night or partial dissolution) of equal length, forming a full (24-hour) day. A maha-kalpa (life of Brahma) lasts for 100 360-day years of Brahma, or 72,000,000 Yuga Cycles (311.04 trillion years), and is followed by a maha-pralaya (full dissolution) of equal length. We are currently halfway through Brahma's life (maha-kalpa):
51st year of 100 (2nd half, or parardha)
1st month of 12
1st kalpa (Shveta-Varaha Kalpa) of 30
7th manvantara (Vaivasvata Manu) of 14
28th chatur-yuga (Yuga Cycle) of 71
4th yuga (Kali Yuga) of 4
Yuga dates are used in a shloka, which is read out at the beginning of Hindu rites to specify the elapsed time in Brahma's life; a shloka recited in 2007 CE of the Gregorian calendar, for example, would enumerate these elapsed periods.
Avatars
Ganesha
Ganesha avatars are described as coming during specific yugas.
Vishnu
The Puranas describe Vishnu avatars that come during specific yugas, but they may not occur in every Yuga Cycle. Vamana appears at the beginning of Treta Yuga. According to the Vayu Purana, Vamana's 3rd appearance was in the 7th Treta Yuga. Rama appears at the end of Treta Yuga. According to the Vayu Purana and Matsya Purana, Rama appeared in the 24th Yuga Cycle. According to the Padma Purana, Rama also appeared in the 27th Yuga Cycle of the 6th (previous) manvantara.
Vyasa
Vyasa is attributed as the compiler of the four Vedas, the Mahabharata, and the Puranas. According to the Vishnu Purana, Kurma Purana, and Shiva Purana, a different Vyasa comes at the end of each Dvapara Yuga to write down veda (knowledge) to guide humans in the degraded age of Kali Yuga.
Modern theories
Breaking from the long duration of a Yuga Cycle, new theories have emerged regarding the length, number, and order of the yugas.
Sri Yukteswar Giri
Swami Sri Yukteswar Giri (1855–1936) proposed a Yuga Cycle of 24,000 years in the introduction to his book The Holy Science (1894). He claimed the understanding that Kali Yuga lasts for 432,000 years was a mistake, which he traced back to the reign of Raja Parikshit, just after the descending Dvapara Yuga ended (c. 3101 BCE) and all the wise men of his court retired to the Himalaya Mountains. With no one left to correctly calculate the ages, Kali Yuga never officially started. After 499 CE, in the ascending Dvapara Yuga, when the intellect of men began to develop (but not fully), they noticed mistakes and attempted to correct them by converting what they thought to be divine years into human years (a 1:360 ratio). Yukteswar's yuga lengths for Satya, Treta, Dvapara, and Kali are respectively 4,800, 3,600, 2,400, and 1,200 "human" years (12,000 years total).
He accepted the four yugas and their 4:3:2:1 length and dharma proportions, but his Yuga Cycle contained eight yugas: the original descending set of four followed by an ascending (reversed) set, where he called each set a "Daiva Yuga" or "Electric Couple". His Yuga Cycle lasts for 24,000 years, which he believed equals one precession of the equinoxes (traditionally 25,920 years, a difference of 1,920 years). He states that the world entered the Pisces-Virgo Age in 499 CE (the "cycle bottom"), and that the current age of ascending Dvapara Yuga started in 1699 CE, around the time of scientific discoveries and advancements such as electricity. He explained that in a 24,000-year Yuga Cycle, the Sun completes one orbit around some dual star, becoming nearer to and farther from a galactic center, which the pair orbit over a longer period. He called this galactic center Vishnunabhi (Vishnu's Navel), where Brahma regulates dharma or, as Yukteswar defined it, mental virtue. Dharma is lowest when farthest from Brahma, at the descending-ascending intersection (the "cycle bottom"), and highest at the "cycle top" when nearest. At dharma's lowest (499 CE), human intellect cannot comprehend anything beyond the gross material world. Joscelyn Godwin states that Yukteswar believed the traditional chronology of the yugas to be wrong and rigged for political reasons, but that Yukteswar may have had political reasons of his own, evident in a police report printed in Atlantis and the Cycles of Time, which links Yukteswar to a secret anti-colonial movement called Yugantar, meaning "new age" or "transition of an epoch". Godwin claims the Jain time cycle and the European myth of progress influenced Yukteswar, whose theory only recently became prominent outside India. Humanity being in an upward cycle is contrary to traditional ideas; Godwin points out the many philosophies and religions that started during a time when "man could not see beyond the gross material world" (701 BCE to 1699 CE), and argues that only materialists and atheists would welcome the post-1700 age as an improvement. John Major Jenkins, who in his own version adjusted the start of ascending Kali Yuga from 499 CE to 2012, criticizes Yukteswar for wanting the "cycle bottom" to correspond to his own education, beliefs, and historical understanding, arguing that technology has instead thrust us deeper into material dependency and spiritual darkness.
René Guénon
René Guénon (1886–1951) proposed a Yuga Cycle of 64,800 years in a 1931 French article, later translated in the book Traditional Forms & Cosmic Cycles (2001). Guénon accepted the doctrine of the four yugas, the 4:3:2:1 yuga length proportions, and Kali Yuga as the present age. He could not accept the extremely large lengths and felt they were encoded with additional zeros to mislead those who might use them to predict the future. He reduced a Yuga Cycle from 4,320,000 to 4,320 years (1,728 + 1,296 + 864 + 432), but felt this was too short for humanity's history. In looking for a multiplier, he worked backwards from the precession of the equinoxes (traditionally 25,920 years; 360 degrees of 72 years each). Using 25,920 and 72, he calculated the sub-multiplier to be 4,320 years (72 × 60 = 4,320; 4,320 × 6 = 25,920). Noticing that the "great year" of the Persians (~12,000 years) and Greeks (~13,000 years) is almost half the precession, he concluded a "great year" must be 12,960 years (4,320 × 3).
In trying to find a whole number of "great years" in a manvantara, the reign of Vaivasvata Manu, he found the reign of Xisuthros of the Chaldeans to be set at 64,800 years (12,960 × 5), a figure he thought belonged to the same Manu. Guénon felt 64,800 years was a more plausible length that might line up with humanity's history. Dividing the 64,800-year manvantara by the 4,320-year "encoded" Yuga Cycle gave a multiplier of 15 (5 "great years"). Using 15 as the multiplier, he "decoded" a 5-"great year" Yuga Cycle as having the following yuga lengths:
Satya: 25,920 (ratio 4, or 2 × "great year"; 15 × 1,728)
Treta: 19,440 (ratio 3, or 1.5 × "great year"; 15 × 1,296)
Dvapara: 12,960 (ratio 2, or 1 × "great year"; 15 × 864)
Kali: 6,480 (ratio 1, or 0.5 × "great year"; 15 × 432)
Guénon did not give a start date for Kali Yuga, but instead left clues in his description of the cataclysmic destruction of the Atlantean civilization. His commentator, Jean Robin, in an early 1980s publication, claimed to have decoded this description and calculated that Kali Yuga lasted from 4481 BCE to 1999 CE (2000 CE excluding year 0). Gaston Georgel calculated the same end date of 1999 CE in his 1949 book Les Quatre Âges de l'Humanité (The Four Ages of Humanity), although in his 1983 book Le Cycle Judéo-Chrétien (The Judeo-Christian Cycle) he argued for shifting the cycle forward by 31 years to end in 2030 CE.
Alain Daniélou
Alain Daniélou (1907–1994) proposed a Yuga Cycle of 60,487 years in his book While the Gods Play: Shaiva Oracles and Predictions on the Cycles of History and the Destiny of Mankind (1985). Daniélou and René Guénon had some correspondence in which both rejected the extremely large lengths found in the Puranas. Daniélou mostly cited the Linga Purana, and his calculations divide a 4,320,000-year period into 71.42 manvantaras (his calculation of 1,000 ÷ 14), each containing one set of four yugas in 4:3:2:1 proportion, yielding his cycle of roughly 60,487 years. He pegged 3102 BCE as the start of Kali Yuga and placed it after the dawn (yuga-sandhya). He claimed his dates are accurate to within 50 years, that the Yuga Cycle started with a great flood and the appearance of Cro-Magnon man, and that it will end with a catastrophe wiping out mankind. Joscelyn Godwin found that Daniélou's misunderstanding rests solely on a bad translation of Linga Purana 1.4.7.
Hindu astronomy
In the early texts of Hindu astronomy, such as the Surya Siddhanta (ch. 1), the length of a yuga cycle is used to specify the orbital periods of heavenly bodies: instead of specifying the period of a single orbit of a heavenly body around the Earth, the number of orbits the body completes in a yuga cycle is given. The orbital period of a heavenly body can be derived from such numbers provided the starting point of the yuga cycle is known. According to Ebenezer Burgess, the Surya Siddhanta fixes the starting point of Kali Yuga at midnight of February 17–18, 3102 BCE, and based on this starting point Burgess calculates the planetary orbital periods.
Other cultures
According to Robert Bolton, there is a universal belief in many traditions that the world started in a perfect state, when nature and the supernatural were still in harmony with all things in their fullest possible degree of perfection, followed by an unpreventable, constant deterioration of the world through the ages.
In the Works and Days (lines 109–201; c. 700 BCE), considered the earliest European writing about human ages, the Greek poet Hesiod describes five ages (Golden, Silver, Bronze, Heroic, and Iron), where the Heroic Age was added, according to Godwin, as a compromise with Greek history when the Trojan War and its heroes loomed so large. Bolton explains that the men of the Golden Age lived like gods without sorrow, toil, grief, or old age, while the men of the Iron Age ("the race of iron") never rest from labor and sorrow, are degenerate, without shame, morality, or righteous indignation, and have short lives with frequent deaths at night, where even a new-born baby shows signs of old age, only to end when Zeus destroys it all. In the Statesman, the Athenian philosopher Plato describes time as an indefinite cycle of two 36,000-year halves: (1) the world's unmaking descent into chaos and destruction; (2) the world's remaking by its creator into a renewed state. In the Cratylus (397e), Plato recounts the golden race of men who came first, who were noble and good daemons (godlike guides) upon the earth. In the Metamorphoses (I, 89–150; c. 8 CE), the Roman poet Ovid describes four ages (Golden, Silver, Bronze, and Iron), excluding Hesiod's Heroic Age, as a downward curve with the present time as the nadir of misery and immorality, affecting, according to Godwin, both human life and the after-death state: the dead of the first two ages became immortal, watchful spirits that benefited the human race, the dead of the third age went to Hades (Greek god of the underworld), and the dead of the fourth age had an unknown fate. Joscelyn Godwin posits that it is probably from the Hindu tradition that knowledge of the ages reached the Greeks and other Indo-European peoples. Godwin adds that the occurrence of the number 432,000 (Kali Yuga's duration) in four widely separated cultures (Hindu, Chaldean, Chinese, and Icelandic) has long been noticed.
See also
Hindu eschatology
Hindu units of time
Kalpa (day of Brahma)
Manvantara (age of Manu)
Pralaya (period of dissolution)
Yuga Cycle (four yuga ages): Satya (Krita), Treta, Dvapara, and Kali
Itihasa (Hindu Tradition)
List of numbers in Hindu scriptures
Vedic-Puranic chronology
Explanatory notes
References
Four Yugas Hindu astronomy Hindu philosophical concepts Time in Hinduism Units of time
Yuga cycle
[ "Physics", "Mathematics" ]
4,221
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
63,761,382
https://en.wikipedia.org/wiki/Mochii
Mochii is a miniature scanning electron microscope made by the Seattle-based startup company Voxa. The Mochii has the same capabilities as a conventional SEM, such as use in materials science research, microchip and semiconductor quality control, and medicine. Mochii users are able to operate the microscope through an iOS app.
History
Development of what became the Mochii began in 2012. The goal of the Mochii was to take the scanning electron microscope, conventionally a large, expensive, and unwieldy tool, and shrink it down in order to decrease cost and increase portability. In 2015, Voxa began collaborating with NASA, which saw the potential of taking the Mochii to space. NASA has since provided upwards of $450,000 for the development of the Mochii. The Mochii had to confront issues unique to space-based operation, such as "errant fluid behavior, residual gravity gradients, cosmic rays, and safety of flight". In 2018, the Mochii won the Microscopy Today Innovation Award, an industry award given for inventions that make microscopy more efficient and powerful. In June 2019, the Mochii participated in the 23rd NEEMO (NASA Extreme Environment Mission Operations) mission. On February 15, 2020, the Mochii launched on a Cygnus cargo spacecraft headed to the ISS. Voxa's microscope is intended to support on-site imaging at the ISS, eliminating the need to send samples back down to Earth, which carries issues of cost, time, and potential sample damage.
Specifications
The Mochii weighs around 6 pounds. It has a swappable optical cartridge that eliminates the need for in-person servicing. The cartridge has a source potential of 10 kV, 5,000x magnification, a backscatter array detector, and auto-calibration. The microscope is capable of energy-dispersive X-ray spectroscopy (EDS), a technique that analyzes the energy spectrum emitted by a sample in order to determine the abundance of particular elements. The Mochii comes outfitted with an app that runs on Apple devices running iOS 8 or higher.
References
Scientific equipment Microscopes Scanning probe microscopy
Mochii
[ "Chemistry", "Materials_science", "Technology", "Engineering" ]
462
[ "Measuring instruments", "Microscopes", "Scanning probe microscopy", "Microscopy", "Nanotechnology" ]
63,765,255
https://en.wikipedia.org/wiki/Sarbeswar%20Bujarbarua
Sarbeswar Bujarbarua is an Indian physicist. Educated at Gujarat University and Gauhati University, he founded the Centre of Plasma Physics as part of the Institute for Plasma Research and served as its director for 30 years. References 1948 births Living people Indian physicists Plasma physicists Space scientists Gujarat University alumni
Sarbeswar Bujarbarua
[ "Physics" ]
64
[ "Plasma physicists", "Plasma physics" ]
65,309,248
https://en.wikipedia.org/wiki/List%20of%20topologies
The following is a list of named topologies or topological spaces, many of which are counterexamples in topology and related branches of mathematics. This is not a list of properties that a topology or topological space might possess; for that, see List of general topology topics and Topological property.
Discrete and indiscrete
Discrete topology − All subsets are open.
Indiscrete topology, chaotic topology, or trivial topology − Only the empty set and the whole space are open.
Cardinality and ordinals
Cocountable topology − Given a topological space (X, τ), the cocountable extension topology on X is the topology having as a subbasis the union of τ and the family of all subsets of X whose complements in X are countable.
Cofinite topology
Double-pointed cofinite topology
Ordinal number topology
Pseudo-arc
Ran space
Tychonoff plank
Finite spaces
Discrete two-point space − The simplest example of a totally disconnected discrete space.
Finite topological space
Pseudocircle − A finite topological space on 4 elements that fails to satisfy any separation axiom besides T0. However, from the viewpoint of algebraic topology, it has the remarkable property that it is indistinguishable from the circle.
Sierpiński space, also called the connected two-point set − A 2-point set with the particular point topology.
Integers
Arens–Fort space − A Hausdorff, regular, normal space that is not first-countable or compact. It has a point for which there is no sequence in the complement of that point converging to it, although there is a sequence in the complement having it as a cluster point.
Arithmetic progression topologies
The Baire space − The natural numbers (with the discrete topology) raised to the power of the natural numbers, with the product topology. It is the space of all sequences of natural numbers.
Divisor topology
Partition topology
Deleted integer topology
Odd–even topology
Fractals and Cantor set
Apollonian gasket
Cantor set − A subset of the closed interval [0, 1] with remarkable properties.
Cantor dust
Cantor space
Koch snowflake
Menger sponge
Mosely snowflake
Sierpiński carpet
Sierpiński triangle
Smith–Volterra–Cantor set, also called the fat Cantor set − A closed nowhere dense (and thus meagre) subset of the unit interval that has positive Lebesgue measure and is not a Jordan measurable set. The complement of the fat Cantor set in Jordan measure is a bounded open set that is not Jordan measurable.
Orders
Alexandrov topology
Lexicographic order topology on the unit square
Order topology
Lawson topology
Poset topology
Upper topology
Scott topology
Scott continuity
Priestley space
Roy's lattice space
Split interval, also called the double arrow space − All compact separable ordered spaces are order-isomorphic to a subset of the split interval. It is compact Hausdorff, hereditarily Lindelöf, and hereditarily separable but not metrizable. Its metrizable subspaces are all countable.
Specialization (pre)order
Manifolds and complexes
Branching line − A non-Hausdorff manifold.
Double origin topology
E8 manifold − A topological manifold that does not admit a smooth structure.
Euclidean topology − The natural topology on Euclidean space induced by the Euclidean metric, which is itself induced by the Euclidean norm.
Real line
Unit interval
Extended real number line
Fake 4-ball − A compact contractible topological 4-manifold.
House with two rooms − A contractible, 2-dimensional simplicial complex that is not collapsible.
Klein bottle
Lens space
Line with two origins, also called the bug-eyed line − It is a non-Hausdorff manifold.
It is locally homeomorphic to Euclidean space and thus locally metrizable (but not metrizable) and locally Hausdorff (but not Hausdorff). It is also a T1 locally regular space but not a semiregular space.
Prüfer manifold − A Hausdorff 2-dimensional real analytic manifold that is not paracompact.
Real projective line
Torus
3-torus
Solid torus
Unknot
Whitehead manifold − An open 3-manifold that is contractible but not homeomorphic to Euclidean 3-space.
Hyperbolic geometry
Gieseking manifold − A cusped hyperbolic 3-manifold of finite volume.
Horosphere
Horocycle
Picard horn
Seifert–Weber space
Paradoxical spaces
Lakes of Wada − Three disjoint connected open sets of the plane or of the open unit square that all have the same boundary.
Unique
Hantzsche–Wendt manifold − A compact, orientable, flat 3-manifold. It is the only closed flat 3-manifold with first Betti number zero.
Related or similar to manifolds
Dogbone space
Dunce hat (topology)
Hawaiian earring
Long line (topology)
Rose (topology)
Embeddings and maps between spaces
Alexander horned sphere − A particular embedding of a sphere into 3-dimensional Euclidean space.
Antoine's necklace − A topological embedding of the Cantor set in 3-dimensional Euclidean space whose complement is not simply connected.
Irrational winding of a torus/Irrational cable on a torus
Knot (mathematics)
Linear flow on the torus
Space-filling curve
Torus knot
Wild knot
Counter-examples (general topology)
The following topologies are a known source of counterexamples for point-set topology.
Alexandroff plank
Appert topology − A Hausdorff, perfectly normal (T6), zero-dimensional space that is countable but neither first countable, locally compact, nor countably compact.
Arens square
Bullet-riddled square − The unit square with its rational points (the "bullets") removed. Neither this set nor the set of bullets is Jordan measurable, although both are Lebesgue measurable.
Cantor tree
Comb space
Dieudonné plank
Double origin topology
Dunce hat (topology)
Either–or topology
Excluded point topology − A topological space where the open sets are defined in terms of the exclusion of a particular point.
Fort space
Half-disk topology
Hilbert cube − A countable product of copies of the unit interval, with the product topology.
Infinite broom
Integer broom topology
K-topology
Knaster–Kuratowski fan
Long line (topology)
Moore plane, also called the Niemytzki plane − A first countable, separable, completely regular, Hausdorff, Moore space that is not normal, Lindelöf, metrizable, second countable, nor locally compact. It also has an uncountable closed subspace with the discrete topology.
Nested interval topology
Overlapping interval topology − Second countable space that is T0 but not T1.
Particular point topology − Assuming the set is infinite, it contains a non-closed compact subset whose closure is not compact, and moreover it is neither metacompact nor paracompact.
Rational sequence topology
Sorgenfrey line − The real line endowed with the lower limit topology. It is Hausdorff, perfectly normal, first-countable, separable, paracompact, Lindelöf, Baire, and a Moore space but not metrizable, second-countable, σ-compact, nor locally compact.
Sorgenfrey plane, which is the product of two copies of the Sorgenfrey line − A Moore space that is neither normal, paracompact, nor second countable.
Topologist's sine curve
Tychonoff plank
Vague topology
Warsaw circle
Topologies defined in terms of other topologies
Natural topologies
List of natural topologies.
Adjunction space
Disjoint union (topology)
Extension topology
Initial topology
Final topology
Product topology
Quotient topology
Subspace topology
Weak topology
Compactifications
Compactifications include:
Alexandroff extension
Projectively extended real line
Bohr compactification
Eells–Kuiper manifold
Stone–Čech compactification
Stone topology
Stone–Čech remainder
Wallman compactification
Topologies of uniform convergence
This lists named topologies of uniform convergence.
Compact-open topology
Loop space
Interlocking interval topology
Modes of convergence (annotated index)
Operator topologies
Pointwise convergence
Weak convergence (Hilbert space)
Weak* topology
Polar topology
Strong dual topology
Topologies on spaces of linear maps
Other induced topologies
Box topology
Compact complement topology
Duplication of a point: Let x be a non-isolated point of a space X, let d not belong to X, and let the duplicated space be X together with the new point d. There is a topology on this enlarged set in which x and d have the same neighborhood filters; in this way, x has been duplicated.
Extension topology
Functional analysis
Auxiliary normed spaces
Finest locally convex topology
Finest vector topology
Helly space
Mackey topology
Polar topology
Vague topology
Operator topologies
Dual topology
Norm topology
Pointwise convergence
Weak convergence (Hilbert space)
Weak* topology
Strong dual space
Strong operator topology
Topologies on spaces of linear maps
Ultrastrong topology
Ultraweak topology/weak-* operator topology
Weak operator topology
Tensor products
Inductive tensor product
Injective tensor product
Projective tensor product
Tensor product of Hilbert spaces
Topological tensor product
Probability
Émery topology
Other topologies
Erdős space − A Hausdorff, totally disconnected, one-dimensional topological space.
Half-disk topology
Hedgehog space
Partition topology
Zariski topology
See also
Citations
References
External links
π-Base: An Interactive Encyclopedia of Topological Spaces
General topology Mathematics-related lists Topological spaces
List of topologies
[ "Physics", "Mathematics" ]
1,845
[ "General topology", "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime" ]
65,310,418
https://en.wikipedia.org/wiki/Tropical%20Cyclone%20Heat%20Potential
Tropical Cyclone Heat Potential (TCHP) is a non-conventional oceanographic parameter influencing tropical cyclone intensity. The relationship between sea surface temperature (SST) and cyclone intensity has long been studied in statistical intensity prediction schemes such as the National Hurricane Center's Statistical Hurricane Intensity Prediction Scheme (SHIPS) and the Statistical Typhoon Intensity Prediction Scheme (STIPS). STIPS is run at the Naval Research Laboratory in Monterey, California, and is provided to the Joint Typhoon Warning Center (JTWC) to make cyclone intensity forecasts in the western North Pacific, South Pacific, and Indian Oceans. In most cyclone models, SST is the only oceanographic parameter representing heat exchange. However, cyclones have long been known to interact with the deeper layers of the ocean rather than the sea surface alone. Using a coupled ocean-atmosphere model, Mao et al. concluded that the rate of intensification and the final intensity of a cyclone were sensitive to the initial spatial distribution of the mixed layer rather than to SST alone. Similarly, Namias and Cayan observed that patterns of lower-atmospheric anomalies were more consistent with upper-ocean thermal structure variability than with SST.
References
Oceanography Oceanographic instrumentation
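The article does not state how TCHP is computed. For orientation, a commonly used formulation (standard in the oceanographic literature, not quoted from this article) integrates the heat content of the water column above the depth of the 26 °C isotherm:

\mathrm{TCHP} = \rho\, c_p \int_{z_{26}}^{0} \left[\, T(z) - 26\,^{\circ}\mathrm{C} \,\right] \mathrm{d}z

where \rho is the seawater density, c_p is the specific heat of seawater, T(z) is the temperature profile, and z_{26} is the depth of the 26 °C isotherm. Unlike SST, this quantity grows with the thickness of the warm layer, which is why it captures the upper-ocean thermal structure that the statistical schemes above try to exploit.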
Tropical Cyclone Heat Potential
[ "Physics", "Technology", "Engineering", "Environmental_science" ]
239
[ "Hydrology", "Oceanographic instrumentation", "Applied and interdisciplinary physics", "Oceanography", "Measuring instruments" ]
65,313,388
https://en.wikipedia.org/wiki/Sturgeon%20Refinery
The Sturgeon Refinery, also the NWR Sturgeon Refinery, is a bitumen refinery built and operated by the North West Redwater Partnership (NWRP) in a public-private partnership with the Alberta provincial government. It is located in Sturgeon County northeast of Edmonton, Alberta, in Alberta's Industrial Heartland. Premier Jason Kenney announced on July 6, 2021, that the province of Alberta had acquired NWRP's equity stake, representing 50% of the $10-billion project, with the other 50% owned by Canadian Natural Resources.
Ownership and organization
The Sturgeon refinery is owned and operated by Canadian Natural Resources Ltd. and the Alberta government. On July 6, 2021, Premier Jason Kenney announced that the province of Alberta had acquired a 50% "equity stake" in the Sturgeon Refinery through the APMC, which now owns the "stake previously owned by Calgary-based North West Refining Inc." In the Financial Post article reporting the acquisition, the refinery was described as "over-budget and behind-schedule". Previously, the NWRP/Sturgeon Refinery contractual and ownership structure consisted of three main parties who entered into a public-private partnership agreement: Canadian Natural Resources, North West Refining Inc, and the Government of Alberta's Crown corporation, the Alberta Petroleum Marketing Commission (APMC). According to their agreement, as described in the 2018 report by the Office of the Auditor General of Alberta, the APMC, which is responsible for the implementation of Alberta's Bitumen Royalty-in-Kind (BRIK) policy and processing agreements, has a financial obligation to supply 75% of the feedstock to the refinery, take on 75% of the funding commitment of the toll obligation, and 75% of the subordinated debt. The toll obligation, which the APMC pays, is a processing fee or toll for each barrel of bitumen refined. This includes an operating toll, a debt toll, an equity toll, and an incentive fee. The original assessment included a capital cost cap of $6.5 billion. In return, the APMC can collect Bitumen Royalty-in-Kind (BRIK) when the refinery is fully operational. Under the agreement, the Canadian Natural Resources Partnership (CNR), which is 100% owned by Canadian Natural Resources Limited (CNRL) and has 50% ownership of the North West Redwater Partnership (NWRP), provides 25% of the feedstock and 25% of the toll obligation. North West Refining Inc. owns the other half of the North West Redwater Partnership (NWRP) through two subsidiaries, North West Upgrading LP (NWU) and North West Phase One Inc. The North West Redwater Holding Corporation and the NWR Financing Company Ltd are both 100% owned by the North West Redwater Partnership (NWRP). A February 2018 report by the Office of the Auditor General of Alberta, entitled "APMC Management of Agreement to Process Bitumen at the Sturgeon Refinery", said that the original agreement between the Alberta government and the North West Redwater Partnership (NWRP) resulted in the province taking on "many of the risks as if it were building the refinery as a 75 per cent tollpayer in this arrangement". The APMC has only one vote, representing 25% of the decision-making power in the partnership, while the two private companies together hold 75% of the decision-making power. In contrast, of the $CDN26 billion in toll payments to be made over a thirty-year period, the APMC is responsible for 75% while CNRL is responsible for the rest.
Because of the "unconditional nature of the debt component of the toll payments", a "substantial amount of the risk was transferred to the province" when APMC entered into these agreement. The AG's report described the arrangement between Alberta's provincial government and the NWRP, as "high-benefit" and "high-risk"—a "$26 billion commitment on behalf of the government to supply bitumen feedstock to the NWR Sturgeon refinery over a thirty year period. When the Department of Energy and the APMC acknowledged that taking bitumen-in-kind was neither "practical or cost-efficient", the APMC entered into contracts with bitumen suppliers to provide the 75% feedstock to fulfill their commitment to the refinery. In effect, the APMC is purchasing bitumen instead of collecting bitumen-in-kind royalties. During construction, the APMC CEO and some staff managed the contract itself; NWRP, with its 400 staff members, oversaw the actual construction and "risk management activities". Alberta’s Industrial Heartland The 2017, Alberta's Industrial Heartland Association's website, listed NWRP's Sturgeon Refiner as one of the major energy projects in the Heartland—"Canada’s largest hydrocarbon processing center" with over forty companies. The Heartland's geographic region encompasses its 5 five municipal partners, the City of Fort Saskatchewan, Lamont County, Strathcona County, Sturgeon County, and the City of Edmonton. Carbon capture and storage (CCS) According to Global News, the $CDN1.2 billion, Alberta Carbon Trunk Line System (ACTL), a -pipeline which came online on June 2, 2020, is part of NWRP's Sturgeon refinery system. The ACTL is a "major carbon capture project", according to the NWRP, and is the Alberta's "largest carbon capture and storage system". The ACTL, which was partially financed through federal government programs and the Canada Pension Plan Investment Board (CPPIB), is owned and operated by Enhance Energy and Wolf Midstream. The ACTL captures carbon dioxide from industrial emitters in the Industrial Heartland region, like the Sturgeon refinery, and transports it to "central and southern Alberta for secure storage" in "aging reservoirs", and enhanced oil recovery (EOR) projects. Products According to NWR Sturgeon refinery's website, operations include upgrading bitumen from the Athabasca oil sands into ultra-low-sulfur diesel. Other finished products include "high quality recycled and manufactured diluents" used in the process of extracting bitumen in Alberta, "pure naptha", used in "petrochemical processes or as part of the manufactured diluent pool", "low sulphur" vacuum gas oil (VGO)", that can be used as "intermediate feedstock in refineries", butane, and propane. Background The September 18, 2007 Alberta government commissioned report, entitled "Our Fair Share", by the Alberta Royalty Review panel had concluded that bitumen royalty rates and formulas had "not kept pace with changes in the resource base and world energy markets" and as a result, Albertans, who own their natural resources, were not receiving their "fair share" from energy development. In 2008, the global price of oil reached its peak all-time high of $USD145 a barrel, but later in 2008, during the financial crisis of 2007–2008, oil prices had plummeted to $32 a barrel resulting in "the cancellation of many energy projects" in Alberta. 
In response to the Review, which then Progressive Conservative Association of Alberta Premier Ed Stelmach had commissioned, the Alberta government enacted new regulations under the provincial Alberta Mines and Minerals Act that were identified in the Alberta Royalty Framework. The 2007 Alberta Royalty Framework identified the need for a Bitumen Royalty-in-Kind (BRIK) option, allowing the government to choose how the Crown could collect its bitumen royalty share of "conventional crude oil production": in cash or in kind. Through BRIK, the Crown could use its share of bitumen royalties "strategically" to "enhance Alberta's value-add activities such as upgrading, refining, and petrochemical development", to benefit Alberta's economy, and to hedge risks in the commodity market. Under the new royalty formulas, the government had anticipated revenue of $2 billion annually. On July 21, 2009, Stelmach's provincial government released a BRIK Request for Proposals (RFP) to "procure a long-term contract to process or purchase a share of royalty volumes of bitumen". The only proposal was that submitted by North West Upgrading LP (NWU). After receiving a report from the NWU proposal evaluation team in April 2010, which warned that the agreement placed a "disproportionate risk" on Alberta's government, the NWRP and APMC agreement was signed in February 2011. A private consortium, the North West Redwater Partnership (NWRP), was "selected to construct and operate" the Sturgeon Refinery. Originally, the estimate for the project's capital costs was $5.7 billion. By 2011, the estimate had increased to $6.5 billion. In 2012, the construction of Phase 1 of the Sturgeon Refinery was sanctioned. In its announcement, NWRP said that the refinery was to be built, owned, and operated by NWRP. Originally, the Sturgeon upgrader was supposed to be fully operational by October 2016.
2014 APMC $CDN324 million loan to NWRP
In January 2014, under then Premier Jim Prentice, the Building New Petroleum Markets Act was passed, allowing the Minister of Energy to provide loans to projects like the NWRP's Sturgeon Refinery. When the APMC, the NWU, and CNRL reached an amended agreement in April 2014, the APMC provided a $CDN324 million loan to NWRP. By May 2017, the expected completion date had been delayed until June 2018. As a result, the Ministry of Energy updated the estimate of the refinery's capital cost to $9.4 billion. The delay and resulting cost increases represented an additional $CDN95 million loan to NWRP by the APMC. In 2017, the Sturgeon Refinery began producing diesel from synthetic crude upgraded from Alberta oilsands feedstock, and by November 2018 was producing about 35,000 to 40,000 barrels per day of diesel. The heavily discounted price of "stranded Alberta heavy oil" resulted in deep discounts for the refinery's feedstock, as much as US$30 per barrel less than usual. In 2017, NWRP proceeded with phase one of the refinery, capable of upgrading bitumen at a rate of 50,000 barrels a day, with the cost estimated at $CDN9.7 billion. Because of the onerous obligations under the agreement, in June 2018 the provincial New Democratic Party (NDP) government under Premier Rachel Notley had to begin to pay "75 per cent of the debt-servicing costs related to financing of the project."
Even though no revenue had been generated for Alberta by the Sturgeon Refinery, the Alberta Petroleum Marketing Commission (APMC), a Crown corporation responsible for the "implementation of BRIK policy, processing agreements", had "been making payments averaging $27 million a month related to the financing" of the $9.9-billion Sturgeon Refinery, representing approximately "$466 million in debt-servicing costs" since 2018, tied to the government's "commitments" to the project. By March 2020, due to start-up issues, the refinery was not "processing the government's bitumen at the facility — or generating revenue for the province from its refining operations", according to a Calgary Herald article. By March 2020, the capital costs of the project had climbed to about $10 billion. It took fifteen years, but in May 2020 the founder, president, and CEO of North West Refining, Ian McGregor, announced that the Sturgeon Refinery was fully operational and had reached commercial operations, as the transition from "primarily processing synthetic crude feedstock to bitumen feedstock" had been successful. Because of the agreement made by the former Progressive Conservative Association of Alberta government with the North West Redwater Partnership (NWRP) in 2009, the current United Conservative Party (UCP) provincial government is responsible for continuing the debt-servicing costs paid since June 2018, as well as an added cost of "debt principal repayments of about $21 million a month, on top of the debt-servicing costs," starting in June 2020. This increase in payments comes against the backdrop of the collapse of global oil prices precipitated by interconnecting and unprecedented global events: the 2020 coronavirus pandemic, the COVID-19 recession, the 2020 stock market crash, and the 2020 Russia–Saudi Arabia oil price war, which Premier Jason Kenney called "the greatest challenge" in Alberta's "modern history, threatening its main industry and wreaking havoc on its finances." The APMC reported in its 2020 annual report on the loans and agreements with NWRP's Sturgeon Refinery project that the project had a "negative $CDN2.52 billion net present value", based mainly on "pricing and on-stream factor".
See also
Husky Lloydminster Refinery, Lloydminster (Husky Energy)
Scotford Upgrader, Strathcona County (Shell Oil Company)
Strathcona Refinery, Strathcona County (Imperial Oil)
Sturgeon Refinery, Sturgeon County (North West Redwater Partnership: Canadian Natural Resources and North West Refineries)
Suncor Edmonton Refinery, Strathcona County (Suncor Energy)
Notes
References
Oil refineries in Alberta Petroleum industry in Alberta Buildings and structures in Alberta 2011 establishments in Alberta Petroleum technology Bituminous sands of Canada Sturgeon County Energy infrastructure completed in 2020 Government of Alberta
Sturgeon Refinery
[ "Chemistry", "Engineering" ]
2,744
[ "Petroleum engineering", "Petroleum technology" ]
65,314,772
https://en.wikipedia.org/wiki/Chemical%20purity
In chemistry, chemical purity is the measurement of the amount of impurities found in a sample. Several grades of purity are used by the scientific, pharmaceutical, and industrial communities. Some of the commonly used grades of purity include:
ACS grade is the highest level of purity and meets the standards set by the American Chemical Society (ACS). The official descriptions of the ACS levels of purity are documented in the Reagent Chemicals publication issued by the ACS. It is suitable for food and laboratory uses.
Reagent grade is almost as stringent as the ACS grade.
USP grade meets the purity levels set by the United States Pharmacopeia (USP). USP grade is equivalent to the ACS grade for many drugs.
NF grade is a purity grade set by the National Formulary (NF). NF grade is equivalent to the ACS grade for many drugs.
British Pharmacopoeia grade meets or exceeds the requirements set by the British Pharmacopoeia (BP). It can be used for food, drug, and medical purposes, and also for most laboratory purposes.
Japanese Pharmacopoeia grade meets or exceeds the requirements set by the Japanese Pharmacopoeia (JP). It can be used for food, drug, and medical purposes, and also for most laboratory purposes.
Laboratory grade is suitable for use in educational settings but is not acceptable for food or drug use.
Purified grade is not precisely defined, and it is not suitable for drug or food usage.
Technical grade is suitable for industrial applications but is not acceptable for food or drug use.
References
Materials Chemical tests Environmental chemistry Adulteration Harm reduction
Chemical purity
[ "Physics", "Chemistry", "Environmental_science" ]
343
[ "Adulteration", "Environmental chemistry", "Drug safety", "Chemical tests", "Materials", "nan", "Matter" ]
70,931,046
https://en.wikipedia.org/wiki/ZFK%20equation
ZFK equation, abbreviation for Zeldovich–Frank-Kamenetskii equation, is a reaction–diffusion equation that models premixed flame propagation. The equation is named after Yakov Zeldovich and David A. Frank-Kamenetskii, who derived it in 1938; it is also known as the Nagumo equation. The equation is analogous to the KPP equation except that it contains an exponential behaviour for the reaction term, and it differs fundamentally from the KPP equation with regard to the propagation velocity of the traveling wave. In non-dimensional form, the equation reads

\frac{\partial \theta}{\partial t} = \frac{\partial^2 \theta}{\partial x^2} + \omega(\theta),

with a typical form for the reaction term given by

\omega(\theta) = \frac{\beta^2}{2}\,\theta(1-\theta)\,e^{-\beta(1-\theta)},

where \theta is the non-dimensional dependent variable (typically temperature) and \beta is the Zeldovich number. In the ZFK regime, \beta \gg 1. The equation reduces to Fisher's equation for \beta \ll 1 and thus corresponds to the KPP regime. The minimum propagation velocity U_{\min} (which is usually the long-time asymptotic speed) of a traveling wave in the ZFK regime is given by

U_{\min} \approx \sqrt{2\int_0^1 \omega(\theta)\,\mathrm{d}\theta},

whereas in the KPP regime it is given by U_{\min} = 2\sqrt{\omega'(0)}.
Traveling wave solution
Similar to Fisher's equation, a traveling wave solution can be found for this problem. Suppose the wave travels from right to left with a constant velocity U; then in the coordinate attached to the wave, z = x + Ut, the problem becomes steady and the ZFK equation reduces to

\frac{\mathrm{d}^2\theta}{\mathrm{d}z^2} - U\,\frac{\mathrm{d}\theta}{\mathrm{d}z} + \omega(\theta) = 0,

satisfying the boundary conditions \theta(-\infty) = 0 and \theta(+\infty) = 1. The boundary conditions are satisfied sufficiently smoothly so that the derivative \mathrm{d}\theta/\mathrm{d}z also vanishes as z \to \pm\infty. Since the equation is translationally invariant in the z direction, an additional condition, say for example \theta(0) = 1/2, can be used to fix the location of the wave. The speed U is obtained as part of the solution, thus constituting a nonlinear eigenvalue problem. Numerical solution of the above equation yields the eigenvalue U and the corresponding reaction term, shown in the figure for a representative value of \beta.
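A minimal numerical sketch of this eigenvalue problem follows (an illustration, not the computation behind the article's figure; it assumes NumPy/SciPy, and beta = 10 is an arbitrary choice). Writing p = dtheta/dz turns the wave equation into p dp/dtheta = U p - omega(theta) with p(0) = p(1) = 0; the front speed is found by bisection as the smallest U for which p stays positive on (0, 1), integrating q = p^2 to avoid the singularity where p vanishes:

# Phase-plane shooting for the ZFK traveling-wave speed (illustrative).
import numpy as np
from scipy.integrate import quad, solve_ivp

beta = 10.0  # Zeldovich number (assumed value)

def omega(th):
    return 0.5 * beta**2 * th * (1.0 - th) * np.exp(-beta * (1.0 - th))

def connects(U, eps=1e-4):
    # Near theta = 1, omega ~ (beta^2/2)(1 - theta), so p ~ a(1 - theta)
    # with a the positive root of a^2 + U a - beta^2/2 = 0.
    a = (-U + np.sqrt(U**2 + 2.0 * beta**2)) / 2.0
    rhs = lambda th, q: [2.0 * (U * np.sqrt(max(q[0], 0.0)) - omega(th))]
    hit = lambda th, q: q[0]           # event: p^2 hits zero before theta = 0
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, (1.0 - eps, 0.0), [(a * eps) ** 2],
                    events=hit, rtol=1e-10, atol=1e-14)
    return sol.status == 0             # reached theta = 0 with p > 0 throughout

lo, hi = 0.0, 2.0                      # bracket assumed to contain the speed
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if connects(mid) else (mid, hi)

U_est = np.sqrt(2.0 * quad(omega, 0.0, 1.0)[0])  # large-beta (ZFK) estimate
print(f"shooting: U = {hi:.4f}; sqrt(2*integral of omega) = {U_est:.4f}")

For large beta the two printed numbers approach each other and tend to 1, recovering the ZFK speed; for small beta the shooting result instead tracks the KPP value discussed below.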
Asymptotic solution
The ZFK regime \beta \gg 1 is formally analyzed using activation energy asymptotics. Since \beta is large, the factor e^{-\beta(1-\theta)} makes the reaction term practically zero, except where 1-\theta \sim 1/\beta; the reaction term also vanishes at \theta = 0 and \theta = 1. Therefore \omega is negligible everywhere except in a thin layer close to the right boundary, where \theta approaches 1. Thus the problem is split into three regions: an inner diffusive-reactive region flanked on either side by two outer convective-diffusive regions.
Outer region
The problem for the outer region is given by

\frac{\mathrm{d}^2\theta}{\mathrm{d}z^2} - U\,\frac{\mathrm{d}\theta}{\mathrm{d}z} = 0.

The solution satisfying the condition \theta(-\infty) = 0 is \theta = e^{Uz}. This solution is also made to satisfy \theta(0) = 1 (an arbitrary choice) to fix the wave location somewhere in the domain, because the problem is translationally invariant in the z direction. As z \to 0^-, the outer solution behaves like \theta \to 1 + Uz, which in turn implies \mathrm{d}\theta/\mathrm{d}z \to U. The solution satisfying the condition \theta(+\infty) = 1 is simply \theta = 1. As z \to 0^+, this outer solution behaves like \theta \to 1 and thus \mathrm{d}\theta/\mathrm{d}z \to 0. We can see that although \theta is continuous at z = 0, the derivative \mathrm{d}\theta/\mathrm{d}z has a jump there. The transition between the derivatives is described by the inner region.
Inner region
In the inner region, where 1-\theta \sim 1/\beta, the reaction term is no longer negligible. To investigate the inner layer structure, one introduces a stretched coordinate \eta = \beta z encompassing the point z = 0, because that is where \theta approaches unity according to the outer solution, and a stretched dependent variable \Theta defined by \theta = 1 - \Theta/\beta. Substituting these variables into the governing equation and collecting only the leading-order terms, we obtain

\frac{\mathrm{d}^2\Theta}{\mathrm{d}\eta^2} = \frac{1}{2}\,\Theta\, e^{-\Theta}.

The boundary condition as \eta \to -\infty comes from the local behaviour of the outer solution obtained earlier, which, written in terms of the inner zone coordinate, becomes \Theta \to -U\eta and \mathrm{d}\Theta/\mathrm{d}\eta \to -U. Similarly, as \eta \to +\infty, we find \Theta \to 0 and \mathrm{d}\Theta/\mathrm{d}\eta \to 0. The first integral of the above equation after imposing these boundary conditions becomes

\left(\frac{\mathrm{d}\Theta}{\mathrm{d}\eta}\right)^2 = \int_0^{\Theta} s\, e^{-s}\,\mathrm{d}s,

which, upon matching \mathrm{d}\Theta/\mathrm{d}\eta \to -U as \eta \to -\infty, implies U^2 = \int_0^\infty s\, e^{-s}\,\mathrm{d}s = 1, i.e., U = 1 at leading order. It is clear from the first integral that the square of the wave speed is proportional to the integrated (with respect to \theta) value of \omega; of course, in the large-\beta limit, only the inner zone contributes to this integral. The first integral, after substituting \omega, is equivalent to U^2 = 2\int_0^1 \omega(\theta)\,\mathrm{d}\theta.
KPP–ZFK transition
In the KPP regime, the minimum propagation velocity is U_{\min} = 2\sqrt{\omega'(0)}. For the reaction term used here, the KPP speed, applicable for \beta \ll 1, is given by

U_{\min} = 2\sqrt{\tfrac{\beta^2}{2}\, e^{-\beta}} = \sqrt{2}\,\beta\, e^{-\beta/2},

whereas in the ZFK regime, as we have seen above, U = 1. Numerical integration of the equation for various values of \beta shows that there exists a critical value \beta_* such that only for \beta \le \beta_* does the propagation speed equal the KPP value; for \beta > \beta_*, the speed is greater than the KPP value, and as \beta \to \infty it approaches the ZFK value, thereby approaching the ZFK regime. The region between the KPP regime and the ZFK regime is called the KPP–ZFK transition zone. The critical value depends on the reaction model, and for the reaction term used here it is obtained numerically.
Clavin–Liñán model
To predict the KPP–ZFK transition analytically, Paul Clavin and Amable Liñán proposed a simple piecewise linear model for the reaction term, governed by two constant parameters. The model admits a closed-form KPP velocity, whereas the ZFK velocity is obtained in a double limit of the two parameters that mimics a sharp increase in the reaction near \theta = 1. For this model there exists a critical parameter value at which the transition between the two regimes occurs.
See also
Fisher's equation
References
Partial differential equations Combustion
ZFK equation
[ "Chemistry" ]
977
[ "Combustion" ]
70,933,720
https://en.wikipedia.org/wiki/Tolyltriazole
Tolyltriazole is a mixture of isomers or congeners that differ from benzotriazole by the addition of one methyl group attached somewhere on the benzene ring. "The term tolyltriazole (CAS 29385-43-1) generally [refers to] the commercial mixture composed of approximately equal amounts of 4- and 5-methylbenzotriazole, with small quantities of [their respective 7- and 6-methyl tautomers]".
Structure
Synthesis and reactions
Synthesis is much like that of benzotriazole, but starting with methyl-o-phenylenediamine instead of o-phenylenediamine. Isomers of methyl-o-phenylenediamine include 3-methyl-o-phenylenediamine, 4-methyl-o-phenylenediamine, and N-methyl-o-phenylenediamine (not involved here).
Applications
Tolyltriazole has uses similar to benzotriazole, but has better solubility in some organic solvents.
Corrosion inhibitor
Environmental relevance
Related compounds
Hydroxybenzotriazole
References
Benzotriazoles Chelating agents Conservation and restoration materials Corrosion inhibitors
Tolyltriazole
[ "Physics", "Chemistry" ]
260
[ "Matter", "Process chemicals", "Conservation and restoration materials", "Organic compounds", "Materials", "Corrosion inhibitors", "Chelating agents", "Organic compound stubs", "Organic chemistry stubs" ]
70,939,631
https://en.wikipedia.org/wiki/Xenon%20isotope%20geochemistry
Xenon isotope geochemistry uses the abundance of xenon (Xe) isotopes and total xenon to investigate how Xe has been generated, transported, fractionated, and distributed in planetary systems. Xe has nine stable or very long-lived isotopes. The radiogenic isotope 129Xe and the fissiogenic isotopes 131,132,134,136Xe are of special interest in geochemical research, as their radiogenic and fissiogenic properties can be used in deciphering the early chronology of Earth. Elemental Xe in the atmosphere is depleted and isotopically enriched in heavier isotopes relative to estimated solar abundances. The depletion and heavy-isotope enrichment can be explained by hydrodynamic escape to space that occurred in Earth's early atmosphere. Differences in the Xe isotope distribution between the deep mantle (sampled by ocean island basalts, or OIBs), the shallower source of mid-ocean ridge basalts (MORBs), and the atmosphere can be used to deduce Earth's history of formation and the differentiation of the solid Earth into layers.
Background
Xe is the heaviest noble gas in the Earth's atmosphere. It has seven stable isotopes (126Xe, 128Xe, 129Xe, 130Xe, 131Xe, 132Xe, 134Xe) and two isotopes (124Xe, 136Xe) with very long half-lives. Xe has four synthetic radioisotopes with very short half-lives, usually less than one month. Xenon-129 can be used to examine the early history of the Earth. 129Xe was derived from the extinct nuclide of iodine, iodine-129 or 129I (with a half-life of 15.7 million years, or Myr), which can be used in iodine-xenon (I-Xe) dating. The production of 129Xe stopped within about 100 Myr after the start of the Solar System because 129I became extinct. In the modern atmosphere, about 6.8% of atmospheric 129Xe originated from the decay of 129I in the first ~100 Myr of the Solar System's history, i.e., during and immediately following Earth's accretion. Fissiogenic Xe isotopes were generated mainly from the extinct nuclide plutonium-244, or 244Pu (half-life of 80 Myr), and also from the extant nuclide uranium-238, or 238U (half-life of 4,468 Myr). Spontaneous fission of 238U has generated ~5% as much fissiogenic Xe as 244Pu. Pu and U fission produce the four fissiogenic isotopes 136Xe, 134Xe, 132Xe, and 131Xe in distinct proportions. A reservoir that remains an entirely closed system over Earth's history has a ratio of Pu- to U-derived fissiogenic Xe of ~27. Accordingly, the isotopic composition of the fissiogenic Xe in a closed-system reservoir would largely resemble that produced from pure 244Pu fission. Loss of Xe from a reservoir after 244Pu became extinct (after roughly 500 Myr) would lead to a greater contribution of 238U fission to the fissiogenic Xe.
Notation
Differences in the abundance of isotopes among natural samples are extremely small (almost always below 0.1%, or 1 per mille). Nevertheless, these very small differences can record meaningful geological processes. To compare these tiny but meaningful differences, isotope abundances in natural materials are reported relative to isotope abundances in designated standards, with the delta (δ) notation. The absolute values of Xe isotopes are normalized to atmospheric 130Xe. Define

\delta^{i}\mathrm{Xe} = \left[ \frac{(^{i}\mathrm{Xe}/^{130}\mathrm{Xe})_{\mathrm{sample}}}{(^{i}\mathrm{Xe}/^{130}\mathrm{Xe})_{\mathrm{atmosphere}}} - 1 \right] \times 1000\ \text{‰},

where i = 124, 126, 128, 129, 131, 132, 134, 136.
Applications
The age of Earth
Iodine-129 decays with a half-life of 15.7 Myr into 129Xe, resulting in excess 129Xe in primitive meteorites relative to primordial Xe isotopic compositions. This property of 129I can be used in radiometric chronology. However, as detailed below, the age of Earth's formation cannot be deduced directly from I-Xe dating.
The major problem is the Xe closure time, or the time when the early Earth system stopped gaining substantial new material from space. When the Earth became closed for the I-Xe system, Xe isotope evolution began to obey a simple radioactive decay law, as shown below, and became predictable. The principle of radiogenic chronology is: if at time t_1 the quantity of a radioisotope is P_1, while at some previous time t_0 this quantity was P_0, the interval between t_1 and t_0 is given by the law of radioactive decay as

t_1 - t_0 = \frac{1}{\lambda} \ln\frac{P_0}{P_1}.

Here \lambda is the decay constant of the radioisotope, which is the probability of decay per nucleus per unit time. The decay constant is related to the half-life t_{1/2} by t_{1/2} = \ln(2)/\lambda.
Calculations
The I-Xe system was first applied in 1975 to estimate the age of the Earth. The initial isotopic composition of iodine in the Earth is given by

\left(\frac{^{129}\mathrm{I}}{^{127}\mathrm{I}}\right)_{\mathrm{Earth}} = \left(\frac{^{129}\mathrm{I}}{^{127}\mathrm{I}}\right)_{0} e^{-\lambda \Delta t_0},

where the left-hand side is the isotopic ratio of iodine at the time that the Earth primarily formed, the ratio with subscript 0 is the isotopic ratio of iodine at the end of stellar nucleosynthesis, and \Delta t_0 is the time interval between the end of stellar nucleosynthesis and the formation of the Earth. The estimated iodine-127 concentration in the Bulk Silicate Earth (BSE) (crust plus mantle average) ranges from 7 to 10 parts per billion (ppb) by mass. If the BSE represents Earth's chemical composition, the total 127I in the BSE ranges from 2.26×10^17 to 3.23×10^17 moles. The meteorite Bjurböle is 4.56 billion years old with an initial 129I/127I ratio of 1.1×10^-4, so the two ratios are related by

\left(\frac{^{129}\mathrm{I}}{^{127}\mathrm{I}}\right)_{\mathrm{Earth}} = 1.1\times10^{-4}\; e^{-\lambda \Delta t},

where \Delta t is the interval between the formation of the meteorite Bjurböle and the formation (Xe closure) of the Earth. Given the 129I half-life of 15.7 Myr, and assuming that all the initial 129I has since decayed to 129Xe, the 129I present at closure can be equated with the radiogenic 129Xe now in the atmosphere:

^{129}\mathrm{Xe}_{\mathrm{atm}} = {}^{127}\mathrm{I}_{\mathrm{BSE}} \times 1.1\times10^{-4} \times e^{-\lambda \Delta t}.

Radiogenic 129Xe in the modern atmosphere amounts to 3.63×10^13 grams. The iodine content of the BSE lies between 10 and 12 ppb by mass. Consequently, \Delta t should be ~108 Myr; i.e., the Xe closure age is 108 Myr younger than the age of the meteorite Bjurböle. The estimated Xe closure time is thus ~4.45 billion years ago, when the growing Earth started to retain Xe in its atmosphere, which is coincident with ages derived from other geochronological dating methods.
Xe closure age problem
There are some disputes about using I-Xe dating to estimate the Xe closure time. First, in the early Solar System, planetesimals collided and grew into larger bodies that accreted to form the Earth, but there could be a 10^7- to 10^8-year gap in Xe closure time between the Earth's inner and outer regions. Some research supports 4.45 Ga as the time when the last giant (Mars-sized) impactor hit Earth, while other work regards it as the time of core-mantle differentiation. The second problem is that the total inventory of 129Xe on Earth may be larger than that of the atmosphere, since the lower mantle may never have been entirely mixed, which may mean the calculation underestimates 129Xe. Last but not least, if Xe gas was lost from the atmosphere during a long interval of early Earth's history, the chronology based on 129I-129Xe would need revising, since the 129Xe inventory could have been greatly altered.
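The closure-age arithmetic above can be reproduced directly (a sketch; the figures are the ones quoted in this section, and the variable names are illustrative):

# I-Xe closure age of the Earth, using the numbers quoted above.
import math

half_life = 15.7e6                   # 129I half-life, years
lam = math.log(2) / half_life        # decay constant, 1/yr

xe129_mol = 3.63e13 / 129.0          # radiogenic atmospheric 129Xe: g -> mol
i127_mol = 3.0e17                    # total 127I in the BSE, mol (2.26-3.23e17)
r0 = 1.1e-4                          # initial 129I/127I of meteorite Bjurbole

# 129I remaining at Xe closure must equal today's radiogenic 129Xe:
#   i127_mol * r0 * exp(-lam * dt) = xe129_mol
dt = math.log(i127_mol * r0 / xe129_mol) / lam
print(f"closure ~ {dt / 1e6:.0f} Myr after Bjurbole")   # ~108 Myr
print(f"closure age ~ {(4.56e9 - dt) / 1e9:.2f} Ga")    # ~4.45 Ga

Taking the BSE iodine inventory near the upper end of the quoted range reproduces the ~108 Myr interval and hence the ~4.45 Ga closure age.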
Loss of Earth's earliest atmosphere
Compared with solar xenon, Earth's atmospheric Xe is enriched in heavy isotopes by 3 to 4% per atomic mass unit (amu). However, the total abundance of xenon gas is depleted by one order of magnitude relative to other noble gases. The elemental depletion combined with the relative enrichment in heavy isotopes is called the "xenon paradox". A possible explanation is that some process specifically diminished xenon rather than the other, lighter noble gases (e.g. krypton) and preferentially removed the lighter Xe isotopes. Over the last two decades, two categories of models have been proposed to resolve the xenon paradox. The first assumes that the Earth accreted from porous planetesimals and that isotope fractionation occurred through gravitational separation. However, this model cannot reproduce the abundance and isotopic composition of the light noble gases in the atmosphere. The second category supposes that a massive impact exerted aerodynamic drag on the heavier gases; both the aerodynamic drag and the downward gravitational pull lead to a mass-dependent loss of Xe. But subsequent research suggested that the Xe isotope mass fractionation should not have been a rapid, single event. Research published since 2018 on noble gases preserved in Archean (3.5–3.0 Ga old) samples may provide a solution to the Xe paradox. Isotopically mass-fractionated Xe is found in tiny inclusions of ancient seawater in Archean barite and hydrothermal quartz. The distribution of Xe isotopes lies between the primordial solar and the modern atmospheric Xe isotope patterns, and the isotopic fractionation gradually increases relative to the solar distribution over Earth's first two billion years. This two-billion-year history of evolving Xe fractionation coincides with early Solar System conditions, including high solar extreme ultraviolet (EUV) radiation and large impacts, that could drive rates of hydrogen escape to space large enough to drag xenon along. However, models in which neutral xenon atoms escape cannot explain why the other, lighter noble gases show no comparable depletion or mass-dependent fractionation. For example, because Kr is lighter than Xe, Kr should also have escaped in a neutral wind, yet the isotopic distribution of atmospheric Kr on Earth is significantly less fractionated than that of atmospheric Xe. A current explanation is that hydrodynamic escape can preferentially remove lighter atmospheric species and the lighter isotopes of Xe in the form of charged ions rather than neutral atoms. Hydrogen is liberated from hydrogen-bearing gases (H2 or CH4) by photolysis in the early Earth atmosphere; being light, it can be abundant at the top of the atmosphere and escape. In the polar regions, where there are open magnetic field lines, hydrogen ions can drag ionized Xe out of the atmosphere to space even though neutral Xe cannot escape. The mechanism is summarized below. Xe can be directly photo-ionized by extreme ultraviolet radiation,

Xe + hv → Xe+ + e-,

or ionized by charge exchange, for example through reactions of the form

Xe + H+ → Xe+ + H and Xe + CO2+ → Xe+ + CO2,

where the H+ and CO2+ ions can come from EUV ionization and dissociation. Xe+ is chemically inert in H, H2, or CO2 atmospheres, so it tends to persist. These ions interact strongly with each other through the Coulomb force and are finally dragged away by the strong ancient polar wind. Isotopic mass fractionation accumulates as the lighter isotopes of Xe+ preferentially escape from the Earth. A preliminary model suggests that Xe can escape in the Archean if the atmosphere contains >1% H2 or >0.5% methane. When O2 levels increased in the atmosphere, Xe+ could exchange its positive charge with O2 through

Xe+ + O2 → Xe + O2+,

so Xe escape stopped once the atmosphere became enriched in O2. As a result, Xe isotope fractionation may provide insights into the long history of hydrogen escape that ended with the Great Oxidation Event (GOE).
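The cumulative, gradual character of this fractionation can be illustrated with a toy Rayleigh-distillation model (illustrative only, not from the cited studies): if escape removes a lighter isotope slightly more efficiently than a heavier one, the residual atmosphere drifts isotopically heavy as the remaining fraction shrinks:

# Toy Rayleigh model of escape-driven isotope enrichment (illustrative).
import math

def residual_enrichment_permil(f_remaining, alpha):
    # Rayleigh distillation: R/R0 = f**(alpha - 1), expressed in per mille.
    return (f_remaining ** (alpha - 1.0) - 1.0) * 1000.0

alpha = 0.98  # assumed: the lighter isotope escapes ~2% more readily
for f in (0.5, 0.1, 0.01):
    d = residual_enrichment_permil(f, alpha)
    print(f"fraction left {f:>5}: heavy-isotope enrichment ~ {d:.0f} per mille")

With ~90% of the xenon lost (the order-of-magnitude elemental depletion noted above), a ~2% per-amu escape preference yields a few-percent-per-amu enrichment, which is the magnitude observed in the modern atmosphere.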
Understanding Xe isotopes is promising for reconstructing the history of hydrogen or methane escape that irreversibly oxidized the Earth and drove biological evolution toward aerobic ecological systems. Other factors, such as the hydrogen (or methane) concentration becoming too low or the EUV radiation from the aging Sun becoming too weak, could also have ceased the hydrodynamic escape of Xe; these explanations are not mutually exclusive. Organic hazes on the Archean Earth could also scavenge isotopically heavy Xe. Ionized Xe can be chemically incorporated into organic materials, going through the terrestrial weathering cycle on the surface. The trapped Xe is mass-fractionated by about 1% per amu towards the heavier isotopes, but it may be released again and recover the original unfractionated composition, making this mechanism insufficient to totally resolve the Xe paradox. Comparison between Kr and Xe in the atmosphere Observed atmospheric Xe is depleted relative to chondritic meteorites by a factor of 4 to 20 when compared to Kr. In contrast, the stable isotopes of Kr are barely fractionated. The escape mechanism is unique to Xe since Kr+ ions are quickly neutralized, e.g. via Kr+ + H2 → KrH+ + H followed by dissociative recombination of KrH+ with electrons. Therefore, Kr is rapidly returned to neutral and is not dragged away by the charged ion wind in the polar region. Hence Kr is retained in the atmosphere. Relation with Mass-Independent Fractionation of Sulfur Isotopes (MIF-S) The signal of mass-independent fractionation of sulfur isotopes, known as MIF-S, correlates with the end of Xe isotope fractionation. During the Great Oxidation Event (GOE), the ozone layer formed as O2 rose, accounting for the end of the MIF-S signature. The disappearance of the MIF-S signal has been regarded as marking a change in the redox state of Earth's surface reservoirs. However, potential memory effects of MIF-S due to oxidative weathering can lead to large uncertainty in the process and chronology of the GOE. Compared to the MIF-S signals, hydrodynamic escape of Xe is not affected by ozone formation and may be even more sensitive to O2 availability, promising to provide more details about the oxidation history of the Earth. Xe isotopes as mantle tracers Xe isotopes are also promising for tracing mantle dynamics in Earth's evolution. The first explicit recognition of non-atmospheric Xe in terrestrial samples came from the analysis of CO2-well gas in New Mexico, displaying an excess of 129I-derived (or primitive-source) 129Xe and a high content of 131-136Xe due to the decay of 238U. At present, the excess of 129Xe and 131-136Xe has been widely observed in mid-ocean ridge basalts (MORBs) and oceanic island basalts (OIBs). Because 136Xe receives more fissiogenic contribution than the other heavy Xe isotopes, 129Xe (from the decay of 129I) and 136Xe are usually normalized to 130Xe when discussing Xe isotope trends of different mantle sources. MORB 129Xe/130Xe and 136Xe/130Xe ratios lie on a trend from atmospheric ratios to higher values, apparently reflecting contamination by air. Oceanic island basalt (OIB) data lie lower than those of MORBs, implying different Xe sources for OIBs and MORBs. The deviations in the 129Xe/130Xe ratio between air and MORBs show that mantle degassing occurred before 129I was extinct; otherwise, 129Xe/130Xe in the air would be the same as in the mantle. The differences in the 129Xe/130Xe ratio between MORBs and OIBs may indicate that the mantle reservoirs are still not thoroughly mixed. The chemical differences between OIBs and MORBs still await discovery.
To obtain mantle Xe isotope ratios, it is necessary to remove contamination by atmospheric Xe, which could have begun before 2.5 billion years ago. Theoretically, the non-radiogenic isotopic ratios (124Xe/130Xe, 126Xe/130Xe, and 128Xe/130Xe) could be used to accurately correct for atmospheric contamination if the slight differences between air and mantle could be precisely measured. Still, such precision cannot be reached with current techniques. Xe in other planets Mars On Mars, Xe isotopes in the present atmosphere are mass fractionated relative to their primordial composition, as shown by in situ measurements of the Curiosity rover at Gale Crater, Mars. Paleo-atmospheric Xe trapped in the Martian regolith breccia NWA 11220 is mass-dependently fractionated relative to solar Xe by ~16.2‰. The extent of fractionation is comparable for Mars and Earth, which may be compelling evidence that hydrodynamic escape also occurred in Martian history. The regolith breccia NWA 7084 and the >4 Ga orthopyroxene ALH 84001 Martian meteorites trap ancient Martian atmospheric gases with little if any Xe isotopic fractionation relative to modern Martian atmospheric Xe. Alternative models for Mars consider that the isotopic fractionation and escape of Martian atmospheric Xe occurred very early in the planet's history and ceased around a few hundred million years after planetary formation, rather than continuing during its evolutionary history. Venus Xe has not been detected in Venus's atmosphere. 132Xe has an upper limit of 10 parts per billion by volume. The absence of data on the abundance of Xe precludes evaluating whether the abundance of Xe is close to solar values or whether there is a Xe paradox on Venus. The lack also prevents checking whether the isotopic composition has been mass-dependently fractionated, as in the cases of Earth and Mars. Jupiter Jupiter's atmosphere has 2.5 ± 0.5 times the solar abundance of xenon, and similarly elevated argon and krypton (2.1 ± 0.5 and 2.7 ± 0.5 times solar values, respectively). These enrichments are thought to arise because these elements were delivered to Jupiter in very cold (T < 30 K) icy planetesimals. See also Geochronology Isotopes of Xenon References Xenon Isotopes Geochemistry
Xenon isotope geochemistry
[ "Physics", "Chemistry" ]
3,778
[ "nan", "Isotopes", "Nuclear physics" ]
75,234,508
https://en.wikipedia.org/wiki/ACR-PCR%20method
The Aircraft Classification Rating (ACR) - Pavement Classification Rating (PCR) method is a standardized international airport pavement rating system developed by ICAO in 2022. The method is scheduled to replace the ACN-PCN method as the official ICAO pavement rating system by November 28, 2024. The method uses concepts similar to those of the ACN-PCN method; however, the ACR-PCR method is based on layered elastic analysis, uses standard subgrade categories for both flexible and rigid pavements, and eliminates the use of the alpha factor and layer equivalency factors. The method relies on the comparison of two numbers: The ACR, a number defined as two times the derived single wheel load (expressed in hundreds of kilograms), conveying the relative effect of an airplane of a given weight on a pavement structure for a specified standard subgrade strength; The PCR, a number (and series of letters) representing the pavement bearing strength (on the same scale as the ACR) of a given pavement section (runway, taxiway, apron) for unrestricted operations. Aircraft Classification Rating (ACR) The ACR calculation process is fully described in ICAO Doc 9157 Aerodrome Design Manual – Part 3 "Pavements" (3rd ed.). The procedure to calculate the ACR is as follows: Design a theoretical pavement according to a defined criterion: For flexible pavements, design the pavement for 36,500 passes of the aircraft according to the layered elastic analysis (LEA) design procedure; For rigid pavements, design the pavement to resist a standard flexural stress of 2.75 MPa at the bottom of the cement concrete layer according to the LEA design procedure. Calculate the single wheel load with a tire pressure of 1.50 MPa that would require the same pavement structural cross-section; this is the Derived Single Wheel Load (DSWL). The ACR is defined as twice the DSWL, expressed in hundreds of kilograms. ACRs are calculated for four standard subgrade strengths, for flexible and rigid pavements, thus leading to 8 different values. ACRs depend on the landing gear geometry (number of wheels and wheel spacing), the landing gear load (which depends on the aircraft weight and center of gravity) and the tire pressure. Pavement Classification Rating (PCR) As opposed to the ACR, the ICAO Aerodrome Design Manual does not prescribe a standardized calculation procedure for the PCR; however, ICAO does require an airport authority to use the cumulative damage factor (CDF) concept to determine the PCR. The CDF is the amount of structural fatigue life of a pavement that has been used up. It is expressed as the ratio of applied load repetitions to allowable load repetitions to failure. Damage from multiple aircraft types can be accounted for by summing the CDF of each aircraft in the traffic mix, in an application of Miner's rule for the prediction of fatigue life in structures. ICAO defines a standardized reporting format for the PCR that comprises the PCR numerical value and a series of 4 letters. The ICAO Aerodrome Design Manual contains example calculations for a technical evaluation of the PCR with the French pavement design procedure using French material specifications and with the FAA pavement design procedure using standard material specifications found in the United States. References Pavement engineering Airport infrastructure
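As a quick illustration of the CDF concept described above, the Python sketch below applies Miner's rule to a hypothetical traffic mix. The aircraft names and repetition counts are invented for illustration; in practice the allowable repetitions would come from a layered elastic analysis of the actual pavement section.

```python
# Minimal sketch (hypothetical traffic data): combine per-aircraft cumulative
# damage factors with Miner's rule, as required for a technical PCR evaluation.
traffic = {
    # aircraft: (applied load repetitions, allowable repetitions to failure)
    "A320": (12_000, 150_000),
    "B777": (3_000, 20_000),
    "B747": (500, 9_000),
}
cdf = sum(applied / allowable for applied, allowable in traffic.values())
print(f"total CDF = {cdf:.2f}")  # CDF >= 1.0 means the fatigue life is used up
```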
ACR-PCR method
[ "Engineering" ]
662
[ "Airport infrastructure", "Aerospace engineering" ]
75,235,069
https://en.wikipedia.org/wiki/Weissert%20Event
The Weissert Event, also referred to as the Weissert Thermal Excursion (WTX), was a hyperthermal event that occurred in the Valanginian stage of the Early Cretaceous epoch. This thermal excursion occurred amidst the relatively cool Tithonian-early Barremian Cool Interval (TEBCI). Its termination marked an intense cooling event, potentially even an ice age. Duration The start of the WTX has been astrochronologically dated by one study to 134.50 ± 0.19 million years ago (Ma), with its positive δ13C excursion being found to last until 133.96 ± 0.19 Ma and the plateau phase of elevated δ13C values until 132.44 ± 0.19 Ma. However, astrochronological studies of sediments in the Vocontian Basin have yielded a duration of 2.08 Myr, with the positive δ13C excursion being 0.94 Myr in duration and the δ13C plateau being 1.14 Myr. A different study concludes the WTX lasted for about 1.4 million years (Myr) based on the chronological length of the high δ13C plateau observed over its course in the Bersek Marl Formation of Hungary. Causes An addition of carbon dioxide into the atmosphere via the activity of the Paraná-Etendeka Large Igneous Province (PE-LIP) is generally accepted as the leading candidate for what sparked the WTX, although this is not universally accepted, with some reconstructed geochronologies showing a lack of causality between the emplacement of the PE-LIP and the onset of the WTX. The prolonged, drawn out manner in which the PE-LIP erupted has been brought up as a further argument against its emplacement as the driving perturbation causing the WTX. Effects The WTX resulted in a rapid global temperature increase during the otherwise mild TEBCI. The sharp jump in global temperatures during this hyperthermal event was accompanied by oceanic anoxia. However, unlike other oceanic anoxic events, the WTX is not associated with widespread black shale deposits. Nannoconids experienced a decline at the onset of the WTX resulting from marine regression, but bloomed in abundance later on in the event as ocean productivity skyrocketed. In the Vocontian Basin, the WTX is associated with an increase in marlstones. At the end of the WTX, temperatures cooled by ~1–2 °C based on the results of palaeothermometry done in southern France, whereas the Boreal Ocean and its surroundings cooled by as much as 4 °C. Geochemical records of 187Os/188Os point to an increase in unradiogenic osmium flux into the ocean, suggesting the occurrence of silicate weathering of PE-LIP basalts during this slice of time, providing the most likely explanation for the temperature drop. Some studies have suggested that a transient ice age with limited but significant polar ice caps occurred in the aftermath of the WTX, although the lack of a positive δ18Oseawater excursion during the latest Valanginian interval of cooling and the presence instead of a very slightly negative excursion calls into question the existence of any significant polar ice growth. References Anoxic events Valanginian Stage
Weissert Event
[ "Chemistry" ]
680
[ "Chemical oceanography", "Anoxic events" ]
75,248,449
https://en.wikipedia.org/wiki/Sequential%20infiltration%20synthesis
Sequential infiltration synthesis (SIS) is a technique derived from atomic layer deposition (ALD) in which a polymer is infused with inorganic material using sequential, self-limiting exposures to gaseous precursors, enabling precise control over composition, structure, and properties. The technique has applications in fields such as nanotechnology, materials science, and electronics, where precise material engineering is required. This synthesis uses metal-organic vapor-phase precursors and co-reactants that dissolve and diffuse into polymers. These precursors interact with the functional groups of the polymers through reversible complex formation or irreversible chemical reactions, resulting in composite materials that can exhibit nano-structured properties. The metal-organic precursor (A) and co-reactant vapor (B) are supplied in an alternating ABAB sequence. Following SIS, the organic phase may be removed thermally or chemically to leave only the inorganic components behind. This approach facilitates the fabrication of materials with controlled properties such as composition, stoichiometry, porosity, conductivity, refractive index, and chemical functionality on the nano-scale. SIS has been utilized in fields including electronics, energy storage, AI, and catalysis, for its ability to modify material properties. SIS is sometimes referred to as "multiple pulsed vapor-phase infiltration" (MPI), "vapor phase infiltration" (VPI) or "sequential vapor infiltration" (SVI). SIS involves the three-dimensional distribution of functional groups in polymers, while its predecessor, ALD, is associated with the two-dimensional distribution of reactive sites on solid surfaces. In SIS, the partial pressures and exposure times for the precursor pulse are typically larger compared to ALD to ensure adequate infiltration of the precursor into the three-dimensional polymer volume through dissolution and diffusion. The process relies on the diffusive transport of precursors within polymers, with the resulting distribution influenced by time, pressure, temperature, polymer chemistry, and micro-structure. History The diffusion of precursors below the surfaces of polymers during ALD was observed in 2005 by the Steven M. George group, who found that polymers could take up trimethylaluminium (TMA) via absorption within their free volume. In that study, the interactions between the ALD precursors and the polymer functional groups were not recognized, and the diffusion and reactions of ALD precursors in polymer films were considered challenges to address rather than opportunities. However, potential benefits of these phenomena were demonstrated by Knez and coworkers in a 2009 report describing the increased toughness of spider silk following vapor-phase infiltration. Sequential infiltration synthesis (SIS) was developed by Argonne National Laboratory scientists Jeffrey Elam and Seth Darling in 2010 to synthesize nanoscopic materials starting from block copolymer templates. A patent application was filed in 2011 and the first patent was issued in 2016. SIS involves vapour diffusing into an existing polymer and chemically or physically binding to it. This results in the growth and formation of inorganic structures by selective nucleation throughout the bulk polymer.
With SIS, the shapes of various inorganic materials can be tailored by applying their precursor chemistries to patterned or nano-structured organic polymers, such as block copolymers. SIS was developed to intentionally enable the infusion of inorganic materials such as metal oxides and metals within polymers to yield hybrid materials with enhanced properties. Hybrid materials created via SIS can further be subjected to thermal annealing steps to remove the polymer constituents entirely, deriving purely inorganic materials that maintain the structure of the original polymer morphology, including mesoporosity. Although the early research in SIS focused on a small number of inorganic materials such as Al2O3, TiO2, and ZnO, the technology diversified over the next decade and came to include a wide variety of both inorganic materials and organic polymers, as detailed in reviews. Principles and process SIS is based on the consecutive introduction of different precursors into a polymer, taking advantage of the material's porosity on the molecular scale. This allows the precursors to diffuse into the material and react with specific functional groups located along the polymer backbone or pendant groups. Through the selection and combination of the precursors, a rich variety of materials can be synthesized, each of which can endow unique properties to the material. The process of SIS involves several key steps, the first of which is materials selection. A suitable substrate material, such as a polymer film, and precursors, typically molecules that can react with the substrate's functional groups, are used for the infiltration synthesis. The pairing of polymer chemistry and precursor species is vital for acquiring the desired functionalization and modification. The substrate is placed in a reactor with an inert atmosphere (typically an inert gas or vacuum). The first precursor vapor (e.g., trimethylaluminum, TMA) is introduced at a sufficiently high vapor pressure and for a sufficient duration that the precursor molecules diffuse into the substrate. Thus the precursor infiltrates the material and then reacts with the interior functional groups. After a suitable diffusion/reaction time, the reactor is purged with inert gas or evacuated to remove reaction byproducts and unreacted precursors. A second vapor-phase species, often a co-reactant such as H2O, is introduced. Again, the precursor partial pressure and exposure time are selected to allow sufficient time and thermodynamic driving force for diffusion into the polymer and reaction with the functional groups left by the first precursor exposure. The second precursor is then purged or evacuated to complete the first SIS cycle. The second precursor may also create new functional groups for reaction with the first precursor in subsequent SIS cycles. Sequential infiltration steps can then be repeated using the same or different precursor species until the desired modifications are achieved. When the desired infiltrations are achieved, the modified material can undergo further post-treatment steps to enhance the modified layers' properties, including stability. Post-treatment may include heating, chemical treatment, or oxidation to remove the organic polymer.
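The exposure-hold-purge sequence described above maps naturally onto a simple recipe representation. The following Python sketch is purely illustrative: the Exposure structure, the run_sis function, and all timings and pressures are assumptions for the sake of the example, not a real instrument interface or published process parameters.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    precursor: str
    pressure_torr: float   # partial pressure during the exposure (assumed)
    hold_s: float          # diffusion/reaction time inside the polymer
    purge_s: float         # inert-gas purge to remove byproducts

def run_sis(cycles: int, recipe: list[Exposure]) -> None:
    """Print the ABAB exposure sequence for a given number of SIS cycles."""
    for n in range(cycles):
        for step in recipe:
            print(f"cycle {n + 1}: expose {step.precursor} at "
                  f"{step.pressure_torr} Torr for {step.hold_s} s, "
                  f"then purge for {step.purge_s} s")

# TMA/H2O chemistry as in the Al2O3 example; holds are deliberately much
# longer than in conventional ALD so the precursor can diffuse into the bulk.
recipe = [Exposure("TMA", 5.0, 300.0, 60.0), Exposure("H2O", 5.0, 300.0, 60.0)]
run_sis(cycles=3, recipe=recipe)
```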
It is natural to apply SIS to block copolymer substrates. Block copolymers such as polystyrene-block-poly(methyl methacrylate), PS-b-PMMA, can spontaneously undergo micro-phase separation to form a rich variety of periodic mesoscale patterns. If the SIS precursors are selected to react with just one of the BCP components but not with the second component, then the inorganic material will only nucleate and grow in that component. For instance, TMA will react with the PMMA side chains of PS-b-PMMA but not with the PS side chains. Consequently, SIS using TMA and H2O as precursor vapors to infiltrate a PS-b-PMMA micro-phase-separated substrate will form Al2O3 specifically in the PMMA-enriched micro-phase subdomains. Subsequent removal of the PS-b-PMMA by using oxygen plasma or by annealing in air will convert the combined organic and inorganic mesoscale pattern into a purely inorganic Al2O3 pattern that shares the mesoscale structure of the block copolymer but is more chemically and thermally robust. Applications Lithography SIS is capable of enhancing etch resistance in lithographic photo-resists, such as those used in photo-lithography, micro-fabrication, and nano-lithography. This method involves the sequential deposition of inorganic materials within a patterned resist's micro/nano-structures. By controlling the infiltration of these materials, SIS can engineer the chemical composition and density of the resist, thus enhancing its resistance to common etching processes. This allows for finer feature patterns and increased durability in micro-fabrication, which has advanced the capabilities of semiconductor manufacturing and nanotechnology applications. Another recent application of SIS in lithography is to enhance the optical absorption of the photo-resist in the extreme ultraviolet range to improve EUV lithography. Surface coatings SIS has applications in the field of surface coatings, particularly in the development of coatings with specific functional properties. With the sequential infiltration of different precursors into the material, SIS allows for the creation of coatings with enhanced properties and performance such as durability, corrosion resistance, hydrophilicity, lipophilicity, anti-reflection, and/or improved adhesion to substrates. Such applications of SIS include protective coatings for metals, anti-fouling coatings for biomedical applications, and coatings for optical and electronic devices. In this application, the diffusion and reaction of the SIS precursors below the polymer surface facilitate a bulk-like transformation such that the effective thickness of the surface coating (e.g., several microns) is much larger than the film thickness that would result from using the same number of atomic layer deposition (ALD) cycles on a conventional, dense substrate (e.g., a few nanometers). Sensors and actuators SIS, with its precise control over material properties, can be used to develop sensors and actuators. The functional layers created through the selective infiltration of specific precursors can enhance the sensitivity, selectivity, and response of sensors, which have applications in gas sensing, chemical sensing, biosensing, and environmental monitoring. SIS is also used to engineer actuators with tunable properties, as it allows for the creation of devices on the micro and nano scales. Energy devices SIS has also shown promise in energy devices, especially in improving the performance and stability of energy storage and conversion systems. Employing SIS and the correct precursors, the technique can modify the surfaces and interfaces of materials used in batteries, super-capacitors, and fuel cells, enhancing charge transport, electrochemical stability, and energy density.
SIS is also being explored for its applications in photovoltaics, in which it can be used to engineer interfaces and increase light absorption. Biomedicine SIS is a tool for surface modification to improve bio-compatibility, bio-activity, and controlled drug release, making it useful in some biomedical applications. Polymers and bioactive macro-molecules treated with SIS can obtain coatings with improved cell adhesion and reduced bacterial adhesion, as well as provide a medium for the controlled release of therapeutics. Such properties are applicable in biomedicine, for example in implantable medical devices, tissue engineering, and drug delivery systems. Bio-materials Modifying the mechanical properties of proteins is an early example of SIS application. For spider dragline silk, the toughness characteristic was significantly enhanced when metallic impurities, such as titanium or aluminum, infiltrated the fibers. This fiber doping using SIS techniques attempts to mimic the effect of metallic impurities on silk properties observed in nature. Limitations One of the main challenges of SIS is the need to perform the process in an inert environment. Creation of a vacuum and/or introduction of inert gas carries costs that may be prohibitive for some applications. A second challenge lies in the inherent complexity of the diffusion-reaction process. The specifics of reactor configuration and process parameters significantly influence the final material properties, complicating process optimization, reproducibility, and scalability. While SIS is versatile and applicable to a broad range of materials, not all materials are compatible with this technique. The relatively slow diffusion of SIS precursor vapors through polymers can make the process time-intensive, particularly over macroscopic distances. For example, infiltrating millimeter-scale depths into a polymer may necessitate precursor exposure times of several hours. For comparison, ALD of thin films on dense surfaces that does not involve diffusion into the substrate would require exposure times of <1 s using the same precursors. References Thin film deposition
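The diffusion-limited exposure times noted above follow directly from the Fickian scaling t ~ L²/D. In the sketch below, the effective diffusivity is an assumed order of magnitude chosen only to illustrate the scaling, not a measured constant; with that assumption, millimeter-scale infiltration indeed lands in the several-hour range.

```python
# Fickian estimate of infiltration time, t ~ L^2 / D. The diffusivity is an
# assumed order of magnitude for a precursor in a polymer (illustration only).
D = 1e-6                              # cm^2/s, assumed effective diffusivity
for L_cm in (1e-5, 1e-4, 1e-1):       # 100 nm film, 1 um, 1 mm bulk depth
    t = L_cm**2 / D                   # seconds
    print(f"L = {L_cm * 1e4:8.2f} um  ->  t ~ {t:.3g} s")
# The 1 mm case gives ~1e4 s (~3 h), matching the "several hours" noted above.
```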
Sequential infiltration synthesis
[ "Chemistry", "Materials_science", "Mathematics" ]
2,415
[ "Thin film deposition", "Coatings", "Thin films", "Planes (geometry)", "Solid state engineering" ]
75,250,158
https://en.wikipedia.org/wiki/Earthscraper
An earthscraper is a building that provides multiple stories of permanent space below ground where people may live: the inverse of very tall high-rise buildings. Though humans have been building structures underground for centuries, such dwellings are generally called Earth shelters, and typically are only one or two stories deep at most. It is the number, or depth, of below-ground stories that distinguishes an earthscraper. An earthscraper might have some exposed sides, such as one built in a quarry with open exposure on some sides for lighting or ventilation purposes. Definition The term "earthscraper" was first applied to buildings that had continuously habitable space, as measured in stories, below ground, though no clear number of stories has been applied to the word. The word does not refer to, or count, the very deep foundations that are often required of skyscrapers in order to anchor and balance such tall structures—such as those of the Shanghai Tower. Deep parking garages, defensive bunkers, shelters, or buildings other than habitable structures designed with the same sort of purpose as a skyscraper, are not considered earthscrapers. History The first known earthscraper that was both proposed and subsequently completed was the InterContinental Shanghai Wonderland. This property was first unveiled in 2013 and experienced significant delays, initially due to the novel nature of its construction, but was finally completed in 2018. This hotel earthscraper has 16 underground stories and two additional stories above ground, making it 18 stories in total. This design presents opportunities for developers to transform potentially unappealing landmasses, such as the old, abandoned quarry in the case of the InterContinental Shanghai, into useful, productive, or aesthetically appealing projects. Earthscrapers have also been thought of as a way to deal with urban planning issues such as overcrowding. Historically, the notion of "building up" was thought of as the solution when space was scarce and at a premium; however, neighborhood externalities then arise, such as a tall building casting shade over previously existing properties, issues which may not be problems with an earthscraper. Proposed earthscrapers A 65-story-deep earthscraper was proposed in 2011 to be built in Mexico City's central plaza, a region called the "Zócalo", though as of 2023 no such earthscraper has been completed. Environmental impact compared to skyscrapers Earthscrapers have been proposed as a means to deal with the effects of climate change, and to make human living less harmful to the external environment. This may be different from skyscrapers, which some critics allege are not good for the environment or for climate change. Some of the reasons that earthscrapers might be considered an improved option for large-scale human dwellings in urban environments over skyscrapers include the massively reduced cost of heating, or cooling, a large structure that is built mostly underground. Also, the amount of steel required in a skyscraper is enormous due to its needing to support its own weight, something an earthscraper does not need to do. Though an earthscraper would still require large amounts of steel and concrete, it also has the support of the surrounding earth upon which the outer walls and frame can rest. See also Groundscraper Earth shelter Underground living Fallout shelter Seascraper Hobbit hole References Structural engineering Structural system Building types
Earthscraper
[ "Technology", "Engineering" ]
691
[ "Structural engineering", "Building engineering", "Structural system", "Construction", "Civil engineering" ]
75,250,402
https://en.wikipedia.org/wiki/Resigratinib
Resigratinib (KIN-3248) is an experimental anticancer medication which acts as a fibroblast growth factor receptor inhibitor (FGFRi) and is in early stage human clinical trials. See also Enbezotinib Pralsetinib Rebecsinib Selpercatinib Zeteletinib References Enzyme inhibitors Experimental monoclonal antibodies Benzimidazoles Pyrazolecarboxamides Pyrrolidines Enols Methoxy compounds Secondary amines Fluoroarenes Cyclopropyl compounds
Resigratinib
[ "Chemistry" ]
119
[ "Enols", "Pharmacology", "Functional groups", "Medicinal chemistry stubs", "Pharmacology stubs" ]
66,541,220
https://en.wikipedia.org/wiki/Bohlmann%E2%80%93Rahtz%20pyridine%20synthesis
In organic chemistry, the Bohlmann–Rahtz pyridine synthesis is a reaction that generates substituted pyridines in two steps, first a condensation reaction between an enamine and an ethynylketone to form an aminodiene intermediate, which after heat-induced E/Z isomerization undergoes a cyclodehydration to yield 2,3,6-trisubstituted pyridines. References Name reactions Pyridine forming reactions
Bohlmann–Rahtz pyridine synthesis
[ "Chemistry" ]
103
[ "Name reactions", "Chemical reaction stubs", "Ring forming reactions", "Organic reactions" ]
66,541,277
https://en.wikipedia.org/wiki/Sack%E2%80%93Schamel%20equation
The Sack–Schamel equation describes the nonlinear evolution of the cold ion fluid in a two-component plasma under the influence of a self-organized electric field. It is a partial differential equation of second order in time and space formulated in Lagrangian coordinates. The dynamics described by the equation take place on an ionic time scale, which allows electrons to be treated as if they were in equilibrium and described by an isothermal Boltzmann distribution. Supplemented by suitable boundary conditions, it describes the entire configuration space of possible events the ion fluid is capable of, both globally and locally. The equation The Sack–Schamel equation is, in its simplest form, namely for isothermal electrons, given by $\ddot V = -\partial_\eta\left[\frac{1}{V}\,\partial_\eta \ln\!\left(\frac{1-\ddot V}{V}\right)\right]$. Therein $V$ is the specific volume of the ion fluid, $\eta$ the Lagrangian mass variable and $t$ the time; dots denote time derivatives at fixed $\eta$, and all quantities are normalized (see the following text). Derivation and application We treat, as an example, the plasma expansion into vacuum, i.e. a plasma that is confined initially in a half-space and is released at t=0 to occupy in course of time the second half. The dynamics of such a two-component plasma, consisting of isothermal Boltzmann-like electrons and a cold ion fluid, is governed by the ion equations of continuity and momentum, $\partial_t n + \partial_x(nu) = 0$ and $\partial_t u + u\,\partial_x u = E$, respectively, in normalized quantities. Both species are thereby coupled through the self-organized electric field $E = -\partial_x\varphi$, which satisfies Poisson's equation, $\partial_x^2\varphi = e^{\varphi} - n$, with $e^{\varphi}$ the Boltzmann electron density. Supplemented by suitable initial and boundary conditions (b.c.s), they represent a self-consistent, intrinsically closed set of equations that represent the laminar ion flow in its full pattern on the ion time scale. Figs. 1a, 1b show an example of a typical evolution. Fig. 1a shows the ion density in x-space for different discrete times, Fig. 1b a small section of the density front. Most notable is the appearance of a spiky ion front associated with the collapse of the density at a certain point in space-time. Here, the quantity $V = 1/n$ becomes zero. This event is known as "wave breaking" by analogy with a similar phenomenon that occurs with water waves approaching a beach. This result is obtained by a Lagrange numerical scheme, in which the Euler coordinates $(x,t)$ are replaced by Lagrange coordinates $(\eta,\tau)$, and by so-called open b.c.s, which are formulated by differential equations of the first order. This transformation is provided by $\eta(x,t) = \int^x n(\tilde x, t)\,d\tilde x$, $\tau = t$, where $\eta$ is the Lagrangian mass variable. The inverse transformation is given by $x = x(\eta,\tau)$, $t = \tau$, and it holds the identity $x(\eta(x,t),t) = x$. With this identity we get through an x-derivation $1 = \partial_\eta x\,\partial_x\eta$ or $\partial_\eta x = \frac{1}{\partial_x\eta} = \frac{1}{n} = V$. In the second step the definition of the mass variable was used, which is constant along the trajectory of a fluid element: $\dot\eta = \partial_t\eta + u\,\partial_x\eta = 0$. This follows from the definition of $\eta$, from the continuity equation and from the replacement of $n$ by $1/V$. Hence $V = \partial_\eta x$. The velocity of a fluid element coincides with the local fluid velocity: $\dot x(\eta,\tau) = u$. It immediately follows: $\dot V = \partial_\eta u$ and $\dot u = E$, where the momentum equation has been used as well as $\partial_x = \frac{1}{V}\partial_\eta$, which follows from the definition of $V$ and from $\partial_\eta x = V$. Replacing $\partial_x$ by $\frac{1}{V}\partial_\eta$ we get from Poisson's equation: $\frac{1}{V}\partial_\eta E = \frac{1}{V} - e^{\varphi}$. Hence $e^{\varphi} = \frac{1-\partial_\eta E}{V} = \frac{1-\ddot V}{V}$, using $\ddot V = \partial_\eta\dot u = \partial_\eta E$. Finally, replacing $\varphi$ in the expression $\ddot V = \partial_\eta E = -\partial_\eta\left(\frac{1}{V}\partial_\eta\varphi\right)$ we get the desired equation: $\ddot V = -\partial_\eta\left[\frac{1}{V}\,\partial_\eta \ln\!\left(\frac{1-\ddot V}{V}\right)\right]$. Here $\varphi$ is a function of $V$ and $\ddot V$, namely $\varphi = \ln\!\left(\frac{1-\ddot V}{V}\right)$, and for convenience we may replace $\tau$ by $t$. Further details on this transition from one to the other coordinate system can be found in. Note its unusual character because of the implicit occurrence of $\ddot V$. Physically V represents the specific volume. It is equivalent with the Jacobian J of the transformation from Eulerian to Lagrangian coordinates since it holds $V = \partial_\eta x = J$. Wave-breaking solution An analytical, global solution of the Sack–Schamel equation is generally not available.
The same holds for the plasma expansion problem. This means that the data for the collapse cannot be predicted, but have to be taken from the numerical solution. Nonetheless, it is possible, locally in space and time, to obtain a solution to the equation. This is presented in detail in Sect. 6 "Theory of bunching and wave breaking in ion dynamics" of. The solution can be found in equation (6.37) there; for small $\eta$ and $t$ it describes a specific volume $V(\eta,t)$ that is V-shaped as a function of $\eta$, with a minimum that decreases linearly in time towards zero, which it reaches at a finite collapse time $t^*$ (see Fig. 7 of ). This means that the density $n = 1/V$ diverges at $t^*$ when we return to the original variables. It is easily seen that the slope of the velocity, $\partial_x u$, diverges as well when $t \to t^*$. In the final collapse phase, the Sack–Schamel equation transits into the quasi-neutral scalar wave equation $\ddot V = -\partial_\eta^2\left(\frac{1}{V}\right)$, and the ion dynamics obeys Euler's simple wave equation $\partial_t u + u\,\partial_x u = 0$. Generalization A generalization is achieved by allowing different equations of state for the electrons. Assuming a polytropic equation of state, $p_e \propto n_e^{\gamma}$ with $\gamma \ge 1$, where $\gamma = 1$ refers to isothermal electrons, a generalized Sack–Schamel equation is obtained in which the Boltzmann density $e^{\varphi}$ is replaced by the corresponding polytropic density–potential relation (see again Sect. 6 of ). The limitation on $\gamma$ results from the demand that at infinity the electron density should vanish (for the expansion into vacuum problem). For more details, see Sect. 2: "The plasma expansion model" of or more explicitly Sect. 2.2: "Constraints on the electron dynamics". Fast ion bunching These results are in two respects remarkable. The collapse, which could be resolved analytically by the Sack–Schamel equation, signals through its singularity the absence of real physics. A real plasma can continue in at least two ways. Either it enters the kinetic collisionless Vlasov regime and develops multi-streaming and folding effects in phase space, or it experiences dissipation (e.g. through Navier-Stokes viscosity in the momentum equation) which then controls the evolution in the subsequent phase. As a consequence the ion density peak saturates and continues its acceleration into vacuum, maintaining its spiky nature. This phenomenon of fast ion bunching, recognized by its spiky fast ion front, has received immense attention in the recent past in several fields. High-energy ion jets are important and promising in applications such as laser-plasma interaction, laser irradiation of solid targets (also referred to as target normal sheath acceleration), future plasma-based particle accelerators and radiation sources (e.g. for tumor therapy), and space plasmas. Fast ion bunches are hence a relic of wave breaking that is analytically completely described by the Sack–Schamel equation. (For more details, especially about the spiky nature of the fast ion front in case of dissipation, see http://www.hans-schamel.de or the original papers.) An article in which Sack and Schamel's wave-breaking mechanism is mentioned as the origin of a peaked ion front was published e.g. by Beck and Pantellini (2009). Finally, the notability of the Sack–Schamel equation is clarified through a recently published molecular dynamics simulation. In the early phase of the plasma expansion a distinct ion peak could be observed, emphasizing the importance of the wave breaking scenario as predicted by the equation. References External links www.hans-schamel.de further information by Hans Schamel Partial differential equations Laser applications Plasma physics equations
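Because the quasi-neutral limit quoted above is an explicit PDE in V, it invites a quick numerical experiment. The Python sketch below is not the authors' scheme: the grid, time step and initial compression are arbitrary assumptions, and the simple explicit integrator only serves to show the front steepening that precedes wave breaking.

```python
import numpy as np

# Minimal sketch: integrate the quasi-neutral limit of the Sack-Schamel
# equation, d^2V/dt^2 = -d^2/deta^2 (1/V), with explicit finite differences
# and watch the gradient of V grow as the profile steepens.
N, L = 800, 40.0
deta = L / N
eta = np.linspace(-L / 2, L / 2, N)
V = 1.0 - 0.5 * np.exp(-eta**2)      # assumed local compression: n = 1/V = 2 at eta = 0
Vdot = np.zeros(N)
dt = 1e-3                            # well below the CFL limit deta / max(1/V)

def d2(f):
    """Second eta-derivative with zero-gradient (Neumann) boundaries."""
    g = np.empty_like(f)
    g[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / deta**2
    g[0], g[-1] = g[1], g[-2]
    return g

for step in range(1, 4001):          # integrate up to t = 4
    Vdot += dt * (-d2(1.0 / V))      # symplectic (semi-implicit) Euler step
    V += dt * Vdot
    if V.min() <= 0.0:               # V -> 0 means n = 1/V diverges: breaking
        print(f"breakdown (wave breaking) near t = {step * dt:.2f}")
        break
    if step % 1000 == 0:
        grad = np.abs(np.gradient(V, deta)).max()
        print(f"t = {step * dt:4.1f}  max|dV/deta| = {grad:.3f}")
```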
Sack–Schamel equation
[ "Physics" ]
1,460
[ "Equations of physics", "Plasma physics equations" ]
66,544,639
https://en.wikipedia.org/wiki/Sequence%20analysis%20of%20synthetic%20polymers
The methods for sequence analysis of synthetic polymers differ from the sequence analysis of biopolymers (e.g. DNA or proteins). Synthetic polymers are produced by chain-growth or step-growth polymerization and thereby show polydispersity, whereas biopolymers are synthesized by complex template-based mechanisms and are sequence-defined and monodisperse. Synthetic polymers are a mixture of macromolecules of different length and sequence and are analysed via statistical measures (e.g. the degree of polymerization, comonomer composition or dyad and triad fractions). NMR-based sequencing Nuclear magnetic resonance (NMR) spectroscopy is known as the most widely applied and "one of the most powerful techniques" for the sequence analysis of synthetic copolymers. NMR spectroscopy allows determination of the relative abundance of comonomer sequences at the level of dyads and, in cases of small repeat units, even triads or more. It also allows the detection and quantification of chain defects and chain end groups, cyclic oligomers and by-products. However, a limitation of NMR spectroscopy is that it cannot, so far, provide information about the sequence distribution along the chain, like gradients, clusters or a long-range order. Example: Copolymer of PET and PEN Monitoring the relative abundance of comonomer sequences is a common technique and is used, for example, to observe the progress of transesterification reactions between polyethylene terephthalate (PET) and polyethylene naphthalate (PEN) in their blends. During such a transesterification reaction, three resonances representing four dyads can be distinguished via 1H NMR spectroscopy by different chemical shifts of the oxyethylene units: the dyads -terephthalate-oxyethylene-terephthalate- (TET) and -naphthalate-oxyethylene-naphthalate- (NEN), which are also present in the homopolymers polyethylene naphthalate and polyethylene terephthalate, as well as the (indistinguishable) dyads -terephthalate-oxyethylene-naphthalate- (TEN) and -naphthalate-oxyethylene-terephthalate- (NET), which are exclusively present in the copolymer. In the spectrum of a 1:1 physical PET/PEN mixture, only the resonances corresponding to the dyads TET and NEN are present, at 4.90 and 5.00 ppm, respectively. Once a transesterification reaction occurs, a new resonance at 4.95 ppm emerges that increases in intensity with the reaction time, corresponding to the TEN/NET sequences. The example of polyethylene naphthalate and polyethylene terephthalate is relatively simple, as the polymers differ only in their aromatic part (naphthalate vs. terephthalate). In a blend of polyethylene naphthalate and polytrimethylene terephthalate, six resonances can already be distinguished, since both oxyethylene and oxypropylene form three resonances each. The sequence patterns can become even more complex when triads can be distinguished spectroscopically. The extractable information is limited by the difference in chemical shift and the resonance width. In addition to 1H NMR spectroscopy, 13C NMR spectroscopy is also a common method for the sequencing shown above, and is characterized in particular by a very narrow resonance width. Deconvolution and assignment of these triad-based resonances allows a quantitative determination of the degree of randomness and the average block length via integration of the distinguishable resonances.
In a 1:1 mixture of two linear two-component 1:1 polycondensates (A1B1)n and (A2B2)n (with molecular weight high enough to neglect chain ends), the following two equations are valid: [Ai] = [Bi], wherein i = 1, 2 (1), and [A1B2] = [A2B1] (2). Equation 1 states that the molar ratios of all four repeat units are identical, and equation 2 states that both types of mixed dyads are of identical concentration. In this case, the degree of randomness χ is calculated from the dyad concentrations as given by equation 3: χ = [A1B2]/([A1B1] + [A1B2]) + [A2B1]/([A2B2] + [A2B1]) (3). In the beginning of a transreaction process (e.g. transesterification or transamidation), the degree of randomness is χ ≈ 0, as the system comprises a physical mixture of homopolymers or block copolymers. During the transreaction process χ increases, up to χ = 1 for a fully random copolymer. If χ > 1, it indicates a tendency of the monomers to form alternating structures, up to χ = 2 for a completely alternating copolymer. The degree of randomness χ thereby gives statistical information about the polymer sequence. The calculation can be modified for three-component and four-component polycondensates. Application NMR spectroscopy is used in industrially relevant systems to study the sequence distribution of copolymers or the occurrence of transesterification in polyester blends. A change in sequence distribution can affect the crystallinity, and transesterification can affect the compatibility of two otherwise incompatible polyesters. Depending on their degree of randomness, copolyesters can show different thermal transitions and behaviours. Other sequencing Other options besides traditional NMR spectroscopy for sequence analysis include the Kerr effect for the characterization of polymer microstructures, MALDI-TOF mass spectrometry, depolymerization (controlled chemical degradation of macromolecules) via chain-end depolymerization (i.e., unzipping), and nanopore analysis (most of the reported nanopore studies, however, have focused on poly(ethylene glycol), PEG). References Polymer chemistry
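A minimal Python sketch of the dyad bookkeeping behind equations (1)-(3) is given below. The NMR integrals are hypothetical values, assumed already normalized; the number-average block lengths follow from the same transition probabilities used for χ.

```python
# Hypothetical, pre-normalized 1H NMR dyad integrals for a partly
# transesterified 1:1 PET/PEN blend: TET at 4.90 ppm, NEN at 5.00 ppm,
# and the shared TEN/NET resonance at 4.95 ppm.
I_TET, I_NEN, I_mix = 0.30, 0.30, 0.40
I_TEN = I_NET = I_mix / 2.0                     # eq. (2): [TEN] = [NET]

# Transition probabilities and degree of randomness chi, per eq. (3)
P_TN = I_TEN / (I_TET + I_TEN)                  # probability that T is followed by N
P_NT = I_NET / (I_NEN + I_NET)                  # probability that N is followed by T
chi = P_TN + P_NT
print(f"degree of randomness chi = {chi:.2f}")  # 0: blocks/blend, 1: random, 2: alternating

# Number-average block lengths from the same probabilities
print(f"L_T = {1 / P_TN:.2f}, L_N = {1 / P_NT:.2f}")
```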
Sequence analysis of synthetic polymers
[ "Chemistry", "Materials_science", "Engineering" ]
1,275
[ "Materials science", "Polymer chemistry" ]
66,546,481
https://en.wikipedia.org/wiki/HALO%20Urban%20Regeneration
HALO Urban Regeneration is a Scottish business innovation park, urban regeneration and business start-up support company, founded, based and headquartered in Kilmarnock, East Ayrshire, Scotland. HALO Urban Regeneration was founded by entrepreneur Marie Macklin CBE in 2006 as HALO Urban Regeneration Company Ltd., having announced the project a few years prior to official funding and creation of the HALO Kilmarnock. The HALO building on Hill Street, Kilmarnock, is a £63 million brownfield urban regeneration project constructed on a 23-acre site, formerly the home of Johnnie Walker, the world's leading Scotch whisky brand, which was founded in Kilmarnock in 1820 and operated on the site until Diageo closed the Kilmarnock plant in 2012. HALO is projected to generate £205 million for the economy of Scotland and stimulate 1,500 jobs. History HALO Scotland Following the 2009 announcement of the closure of the Johnnie Walker whisky bottling unit and production factory in Kilmarnock, owners Diageo began seeking proposals for future leasing of the Hill Street site, which occupied 32 acres at that time. Diageo gifted eight acres to Kilmarnock College (a campus of Ayrshire College since 2013) in 2012 to allow the construction of a new multi-million pound campus to replace the ageing building that was constructed during the 1960s. Marie Macklin CBE, Chief Executive of The KLIN Group at the time, who had already undertaken numerous projects in and around Kilmarnock to restore derelict buildings in the town centre, submitted a proposal for a new, innovative hub to provide office space for startup companies and opportunities to enhance Kilmarnock's urban regeneration work. A planning application for permission for construction work on the new project was submitted to the planning advisory board of East Ayrshire Council, with planning permission granted by the council in 2018. The cost of the development was estimated to be £65 million, with the Scottish Government announcing a £5.3 million investment in the HALO Project in August 2017, of which £1.8 million was to be focused on low carbon emissions but was ultimately unused. Morrison Construction was appointed as main contractor for the construction of the complex in September 2019. Construction work was initially scheduled to be completed by January 2021; however, this was delayed as a result of the halt on construction works in Scotland due to the COVID-19 pandemic, and phase one opened in March 2022. HALO Urban Regeneration benefitted as part of the Ayrshire Growth Deal, an economic recovery agreement between the Scottish Government, UK Government and the councils of East Ayrshire, North Ayrshire and South Ayrshire, with the Scottish Government and UK Government both providing £3.5 million of investment for the company and the regeneration of the former Johnnie Walker site. Diageo, which owned the land occupied by the former Johnnie Walker bottling and production plant, donated the land for a cost of £1 and, under the Ayrshire Growth Deal, committed a contribution of £2 million to support the planning and design of the HALO development as well as the long-term sustainability of the Hill Street site, as a consequence of closing the Johnnie Walker facility. The building in Kilmarnock is a 4-storey mixed-use structure constructed from three main materials - dark brick, curtain wall glazing and a perforated aluminium screen at roof level - and features a round LED dome on the roof which illuminates at nighttime.
Phase 1 of the complex (the HALO Enterprise & Innovation Hub) was completed in July 2021, with future phases of the site's development consisting of a series of live-and-work units, a leisure facility, a nursery, and over 200 houses. The building has become a symbol of regeneration in Kilmarnock, both in terms of redevelopment of land and in terms of economic regeneration and recovery. Expansion A second HALO project is scheduled to begin planning and construction in Northern Ireland. HALO is also scheduled to begin planning and construction of new premises to focus on urban regeneration projects in both Wales and England. The timescales for completion of these projects in England, Wales and Northern Ireland have still to be announced. Services Key services HALO Urban Regeneration was founded with a particular focus on: urban regeneration of Kilmarnock and Ayrshire; providing business and financial support for small startup businesses; office space for companies; residential houses; live-work studios (#RockMe); education and employment programmes; and an urban park, including entrepreneurial businesses from computer technology, cyber research, engineering, fashion, financial services and light manufacturing. A Fashion Foundry for small businesses designing, producing and retailing fashionwear and providing training skills for the new digital age will complement the digital retail boutique shopping arcade. A Children's Innovation Centre will engage with young people of all ages, from eco-nursery through to higher education, working in partnership with local schools, colleges and universities. Leisure and community amenities will include a WAVE surf water feature built to Olympic training standards, as well as a skateboard park and other activity areas for all ages. Business intentions HALO Urban Regeneration has a particular focus on, and intention to: provide flexible, affordable workspace in an inspiring environment for entrepreneurs with spin-out, start-up and step-up businesses; produce an informed and skilled supply of employment-ready young people to sharpen the technical and commercial competitive edges of businesses, especially in the retail and business service sectors; and create and sustain leading-edge learning facilities and opportunities to support widening access and inclusive growth for all communities, particularly raising the aspirations of children and young adults. Business partnerships HALO Urban Regeneration has established business partnerships with various companies and organisations to support its key business strategy. Most notably, HALO Urban Regeneration focuses on education and youth employment opportunities and includes partners such as: East Ayrshire Council, the Scottish Government, the UK Government, Diageo, Scottish Power, CGI, Onecom, the Scottish Business Resilience Centre and Barclays. A business partnership in association with Scottish Power makes the energy company the main HALO Platinum Partner and Sponsor. Scottish Power launched a £5 million, five-year programme, with a focus on building and enhancing the company's "Utility of the Future" vision. HALO Urban Regeneration claims that the company will be a leader in the HALO Innovation and Enterprise Centre and the Digital and Cyber Zone. HALO and Scottish Power have committed to working together to create a cyber and digital training and learning facility, at the forefront of the 'Fourth Industrial Revolution'.
HALO Urban Regeneration developed a partnership with Barclays in order to enhance HALO's employability initiatives for individuals in Ayrshire; through Barclays LifeSkills, it seeks to eradicate the barriers that those facing unemployment, of any age, may experience, allowing individuals to access education and digital technology. Additionally, the Barclays-HALO partnership seeks to help start-up and scale-up entrepreneurial businesses to capitalise on growth opportunities, as well as enhancing connectivity and collaboration with other businesses locally and across the UK. In 2019, Barclays launched their first Thriving Local Economies initiative in Kilmarnock as a result of their partnership with HALO Urban Regeneration, with a particular focus on strategies to boost and enhance the economy of Kilmarnock. Economic performance The HALO Urban Regeneration company aims to create and sustain over 1,500 jobs within Kilmarnock, as well as making a projected contribution of £205 million in gross domestic product to the economy of Scotland. PRA Group Ayrshire College As part of the sale of the 32-acre site by Diageo, Ayrshire College was granted part of the site, which neighbours the HQ and office space of HALO Urban Regeneration. Due to the close proximity and shared site, the company has formed an ambitious partnership with Ayrshire College to ensure the integration of the college with HALO Urban Regeneration, to develop a wide range of practical learning experiences for students, as well as developing new qualifications for Ayrshire College students, ranging from qualifications within the construction sector to the digital skills market, as well as social care and design. HALO Urban Regeneration seeks to create a skilled workforce within Kilmarnock and Ayrshire as a result of its educational and business partnership with Ayrshire College. In 2020, an NPA qualification was founded in collaboration with HALO Urban Regeneration, Ayrshire College and construction contractor Morrison Construction, with students able to access work placements on site during the construction process. Board of management The current board composition of HALO Urban Regeneration consists of: Marie Macklin CBE, Founder and executive chair; Derek Weir, Managing director; Drew Macklin, Project director; Gary Deans, Financial and enterprise director; Bill Stafford, non-executive director; Jim McMahon, non-executive director. See also Kilmarnock East Ayrshire Johnnie Walker, the site on which the HALO Urban Regeneration has been constructed Economy of Scotland Kilmarnock College Ayrshire College References Buildings and structures in Kilmarnock Companies of Scotland Organisations based in East Ayrshire 2015 establishments in Scotland Economy of Scotland Business organisations based in Scotland Urban renewal Redevelopment Government aid programs Entrepreneurship organizations Buildings and structures completed in 2021
HALO Urban Regeneration
[ "Engineering" ]
1,846
[ "Construction", "Redevelopment" ]