{ "page_id": 44632934, "source": null, "title": "Conservative replacement" }
Renate Loll (born 19 June 1962, Aachen) is a German physicist. She is a professor of theoretical physics at the Institute for Mathematics, Astrophysics and Particle Physics of Radboud University in Nijmegen, Netherlands. She previously worked at the Institute for Theoretical Physics of Utrecht University. She received her Ph.D. from Imperial College London in 1989. In 2001 she joined the permanent staff of the ITP, after spending several years at the Max Planck Institute for Gravitational Physics in Golm, Germany. With Jan Ambjørn and the Polish physicist Jerzy Jurkiewicz she helped develop a new approach to the nonperturbative quantization of gravity, Causal Dynamical Triangulations. She has been a member of the Royal Netherlands Academy of Arts and Sciences since 2015. == External links == Prof. Loll's website Loll, R.; Ambjørn, J.; Jurkiewicz, J. (2006). "The Universe from Scratch". Contemporary Physics. 47 (2): 103–117. arXiv:hep-th/0509010. Bibcode:2006ConPh..47..103A. doi:10.1080/00107510600603344. S2CID 6087228. Wood, Charlie (25 May 2023). "The Physicist Who Glues Together Universes". Quanta Magazine.
{ "page_id": 4918119, "source": null, "title": "Renate Loll" }
Supercritical water oxidation (SCWO) is a process that occurs in water at temperatures and pressures above a mixture's thermodynamic critical point. Under these conditions water becomes a fluid with unique properties that can be used to advantage in the destruction of recalcitrant and hazardous wastes such as polychlorinated biphenyls (PCBs) or per- and polyfluoroalkyl substances (PFAS). Supercritical water has a density between that of water vapor and liquid at standard conditions, and exhibits high gas-like diffusion rates along with high liquid-like collision rates. In addition, the behavior of water as a solvent is altered (in comparison to that of subcritical liquid water): it behaves much less like a polar solvent. As a result, the solubility behavior is "reversed", so that oxygen and organics such as chlorinated hydrocarbons become soluble in the water, allowing single-phase reaction of aqueous waste with a dissolved oxidizer. The reversed solubility also causes salts to precipitate out of solution, meaning they can be treated using conventional methods for solid-waste residuals. Efficient oxidation reactions occur at low temperature (400–650 °C) with reduced NOx production. SCWO can be classified as green chemistry or as a clean technology. The elevated pressures and temperatures required for SCWO are routinely encountered in industrial applications such as petroleum refining and chemical synthesis. A unique addition (mostly of academic interest) to the world of supercritical water (SCW) oxidation is generating high-pressure flames inside the SCW medium. The pioneering work on high-pressure supercritical water flames was carried out by Professor E. U. Franck at the University of Karlsruhe, Germany, in the late 1980s. This work mainly aimed at anticipating conditions that would cause spontaneous generation of undesirable flames in the flameless SCW oxidation process; such flames would cause instabilities in the system and its components.
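As a quick illustration of the operating regime described above, the sketch below (not from the article; the helper name is hypothetical) compares given conditions against the critical point of pure water. Note that a real SCWO feed is a mixture, whose critical point shifts away from that of pure water.

```python
# Illustrative sketch: compare operating conditions against the critical
# point of pure water. Constants are the standard handbook values for pure
# water; the helper name is hypothetical. Real SCWO feeds are mixtures
# whose critical point differs from pure water's.
T_CRIT_K = 647.096   # critical temperature of pure water, K (~374 °C)
P_CRIT_MPA = 22.064  # critical pressure of pure water, MPa

def is_supercritical(temp_k: float, pressure_mpa: float) -> bool:
    """True when both temperature and pressure exceed water's critical point."""
    return temp_k > T_CRIT_K and pressure_mpa > P_CRIT_MPA

# The 400-650 °C operating window quoted in the text sits well above the
# critical temperature, so at typical SCWO pressures the fluid is supercritical:
print(is_supercritical(400 + 273.15, 25.0))  # True
print(is_supercritical(350 + 273.15, 25.0))  # False: below ~374 °C
```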
ETH Zurich pursued the investigation of hydrothermal flames in continuously operated reactors. The rising need for waste treatment and destruction methods motivated a Japanese group at the Ebara Corporation to explore SCW flames as an environmental tool. Research on hydrothermal flames has also begun at NASA Glenn Research Center in Cleveland, Ohio. == Basic research == Basic research on supercritical water oxidation was undertaken in the 1990s at Sandia National Laboratory's Combustion Research Facility (CRF) in Livermore, CA. Originally proposed as a hazardous waste destruction technology in response to the Kyoto Protocol, SCWO was applied to multiple waste streams, which were studied by Steven F. Rice and Russ Hanush; hydrothermal (supercritical water) flames were investigated by Richard R. Steeper and Jason D. Aiken. Among the waste streams studied were military dyes and pyrotechnics, methanol, and isopropyl alcohol. Hydrogen peroxide was used as an oxidizing agent, and Eric Croiset was tasked with detailed measurements of the decomposition of hydrogen peroxide at supercritical water conditions. In mid-1992, Thomas G. McGuinness, PE, invented what is now known as the "transpiring-wall SCWO reactor" (TWR) while seconded to Los Alamos National Laboratory on behalf of Summit Research Corporation. McGuinness subsequently received the first US patent for a TWR in early 1995. The TWR was designed to mitigate the problems of salt/solids deposition, corrosion, and thermal limitations occurring in other SCWO reactor designs (e.g. tubular and vat-type reactors) at the time. The upper part of the vertical reactor incorporates a permeable liner through which a clean fluid permeates to help prevent salts and other solids from accumulating at the inner surface of the liner. The liner also insulates the outer pressure containment vessel from high temperatures within the reaction zone. The liner can be manufactured from a variety of materials resistant to corrosion and high reaction temperatures. The bottom end of the TWR incorporates a "quench cooler" for cooling
{ "page_id": 265064, "source": null, "title": "Supercritical water oxidation" }
the reaction byproducts while neutralizing any components that might form acids during transition to subcritical temperature. Proof-of-concept and performance advantages of the TWR for a variety of feedstocks was demonstrated by Eckhard Dinjus and Johannes Abeln at Forschungszentrum Karlsruhe (FZK), via direct comparison between a TWR and an adjacent tubular reactor. Major engineering challenges were associated with the deposition of salts and chemical corrosion in these supercritical water reactors. Anthony Lajeunesse led the team investigating these issues. To address these issues Lajeunesse designed a transpiring wall reactor which introduced a pressure differential through the walls of an inner sleeve filled with pores to continuously rinse the inner walls of the reactor with fresh water. Russ Hanush was charged with the construction and operation of the supercritical fluids reactor (SFR) used for these studies. Among its design intricacies were the Inconel 625 alloy necessary for operation at such extreme temperatures and pressures, and the design of the high-pressure, high-temperature optical cells used for photometric access to the reacting flows which incorporated 24 carat gold pressure seals and sapphire windows. == Commercial applications == Several companies in the United States are now working to commercialize supercritical reactors to destroy hazardous wastes. Widespread commercial application of SCWO technology requires a reactor design capable of resisting fouling and corrosion under supercritical conditions. In Japan a number of commercial SCWO applications exist, among them one unit for treatment of halogenated waste built by Organo. In Korea two commercial size units have been built by Hanwha. In Europe, Chematur Engineering AB of Sweden commercialized the SCWO technology for treatment of spent chemical catalysts to recover the precious metal, the AquaCat process. The unit has been built for Johnson Matthey in the UK. 
It is the only commercial SCWO unit in Europe and with its capacity of
{ "page_id": 265064, "source": null, "title": "Supercritical water oxidation" }
3000 l/h it is the largest SCWO unit in the world. Chematur's Super Critical Fluids technology was acquired by SCFI Group (Cork, Ireland) who are actively commercializing the Aqua Critox SCWO process for treatment of sludge, e.g. de-inking sludge and sewage sludge. Many long duration trials on these applications have been made and thanks to the high destruction efficiency of 99.9%+ the solid residue after the SCWO process is well suited for recycling – in the case of de-inking sludge as paper filler and in the case of sewage sludge as phosphorus and coagulant. SCFI Group operate a 250 l/h Aqua Critox demonstration plant in Cork, Ireland. AquaNova Technologies, Inc. [1] is actively commercializing their 2nd-generation transpiring-wall SCWO reactor ("TWR") with a focus on waste treatment and renewable energy applications. AquaNova's patent-pending TWR-SCWO technology is projected to treat a broad variety of wastes, including PFAS, while generating electric power with improved system thermal efficiency. AquaNova's paradigm-changing technology is designed to operate at supercritical and sub-critical pressures, and at higher reaction temperatures than traditional SCWO technology. AquaNova is targeting larger-scale industrial applications. AquaNova Technologies was founded by Tom McGuinness, PE, who is the original inventor of the transpiring-wall reactor (TWR) under US patent 5,384,051. 374Water Inc. is a company offering commercial SCWO systems that convert organic wastes to clean water, energy and minerals. It is spun out after more than seven years of research and development funded by the Bill & Melinda Gates Foundation to Prof. Deshusses laboratory based at Duke University. The founders of 374Water, Prof. Marc Deshusses and Kobe Nagar, possess the waste processing reactor patent relevant to SCWO. 
374Water is actively commercializing its AirSCWO systems for the treatment of biosolids and wastewater sludges, organic chemical wastes, and PFAS wastes including unspent Aqueous Film Forming Foams (AFFFs), rinsates or
{ "page_id": 265064, "source": null, "title": "Supercritical water oxidation" }
spent resins and adsorption media. The first commercial sale was announced in February 2022. Aquarden Technologies (Skaevinge, Denmark) provides modular SCWO plants for the destruction of hazardous pollutants such as PFAS, pesticides, and other problematic hydrocarbons in industrial wastestreams. Aquarden is also providing remediation of hazardous energetic wastes and chemical warfare agents with SCWO, where a full-scale SCWO system has been operating for some years in France for the Defense Industry. Revive Environmental Technology, based in the United States, has commercialized a transportable SCWO-based system known as the PFAS Annihilator® for the destruction of per- and polyfluoroalkyl substances (PFAS). The system has demonstrated 99.99% destruction efficiency across a broad range of PFAS compounds, including long-chain and short-chain variants, in diverse matrices such as landfill leachate, aqueous film-forming foams (AFFFs), and industrial wastewater. The technology has been proven both in laboratory settings and in permitted commercial operations in Columbus, Ohio, and Grand Rapids, Michigan, with third-party laboratories validating its performance through a formal certificate of destruction protocol. == See also == Supercritical fluid Wet oxidation Incineration == References == == External links == There are some research groups working in this topic throughout the world: The Deshusses lab at Duke University has a Nix1 (1 ton/day) prototype in Durham, North Carolina SCFI have a working AquaCritox A10 plant in Cork (Ireland) UVa High Pressure Processes Group (Spain) Clean Technology Group (UK) FZK Karlsruhe (Germany) ETH Zurich, Transport Processes and Reactions Laboratory (Switzerland) UCL (London, UK) Clean Materials technology group working on Continuous Hydrothermal Flow Synthesis UBC (Vancouver, BC) Mechanical Engineering Research Activities, including projects on SCWO Turbosystems Engineering SCWO technology Universidad de Cádiz (UCA). 
Supercritical Fluids Group
{ "page_id": 265064, "source": null, "title": "Supercritical water oxidation" }
The Lambda-CDM, Lambda cold dark matter, or ΛCDM model is a mathematical model of the Big Bang theory with three major components: a cosmological constant, denoted by lambda (Λ), associated with dark energy; the postulated cold dark matter (CDM); and ordinary matter. It is the current standard model of Big Bang cosmology, as it is the simplest model that provides a reasonably good account of: the existence and structure of the cosmic microwave background; the large-scale structure in the distribution of galaxies; the observed abundances of hydrogen (including deuterium), helium, and lithium; and the accelerating expansion of the universe observed in the light from distant galaxies and supernovae. The model assumes that general relativity is the correct theory of gravity on cosmological scales. It emerged in the late 1990s as a concordance cosmology, after a period when disparate observed properties of the universe appeared mutually inconsistent and there was no consensus on the makeup of the energy density of the universe. The ΛCDM model has been successful in modeling a broad collection of astronomical observations over decades. Remaining issues challenge the assumptions of the ΛCDM model and have led to many alternative models. == Overview == The ΛCDM model is based on three postulates on the structure of spacetime: the cosmological principle, that the universe is the same everywhere and in all directions, and that it is expanding; a postulate by Hermann Weyl that the lines of spacetime (geodesics) intersect at only one point, where time along each line can be synchronized, so that the behavior resembles an expanding perfect fluid; and general relativity, which relates the geometry of spacetime to the distribution of matter and energy. This combination greatly simplifies the equations of general relativity into a form called the Friedmann equations. These equations specify the evolution of the scale factor
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
of the universe in terms of the pressure and density of a perfect fluid. The evolving density is composed of different kinds of energy and matter, each with its own role in affecting the scale factor.: 7 For example, a model might include baryons, photons, neutrinos, and dark matter.: 25.1.1 These component densities become parameters extracted when the model is constrained to match astrophysical observations. The model aims to describe the observable universe from approximately 0.1 s to the present.: 605 The most accurate observations which are sensitive to the component densities are consequences of statistical inhomogeneity called "perturbations" in the early universe. Since the Friedmann equations assume homogeneity, additional theory must be added before comparison to experiments. Inflation is a simple model producing perturbations by postulating an extremely rapid expansion early in the universe that separates quantum fluctuations before they can equilibrate. The perturbations are characterized by additional parameters also determined by matching observations.: 25.1.2 Finally, the light which will become astronomical observations must pass through the universe. The latter part of that journey will pass through ionized space, where the electrons can scatter the light, altering the anisotropies. This effect is characterized by one additional parameter.: 25.1.3 The ΛCDM model includes an expansion of metric space that is well documented, both as the redshift of prominent spectral absorption or emission lines in the light from distant galaxies, and as the time dilation in the light decay of supernova luminosity curves. Both effects are attributed to a Doppler shift in electromagnetic radiation as it travels across expanding space. Although this expansion increases the distance between objects that are not under shared gravitational influence, it does not increase the size of the objects (e.g. galaxies) in space. 
Also, since it originates from ordinary general relativity, the model, like general relativity itself, allows for distant galaxies to recede from each other at speeds greater than the speed of light; local expansion is less than the speed of light, but expansion summed across great distances can collectively exceed the speed of light. The letter Λ (lambda) represents the cosmological constant, which is associated with a vacuum energy or dark energy in empty space that is used to explain the contemporary accelerating expansion of space against the attractive effects of gravity. A cosmological constant has negative pressure, $p = -\rho c^{2}$, which contributes to the stress–energy tensor that, according to the general theory of relativity, causes accelerating expansion. The fraction of the total energy density of our (flat or almost flat) universe that is dark energy, $\Omega_{\Lambda}$, is estimated to be 0.669 ± 0.038 based on the 2018 Dark Energy Survey results using Type Ia supernovae, or 0.6847 ± 0.0073 based on the 2018 release of Planck satellite data – more than 68.3% (2018 estimate) of the mass–energy density of the universe. Dark matter is postulated to account for gravitational effects observed in very large-scale structures (the "non-Keplerian" rotation curves of galaxies, the gravitational lensing of light by galaxy clusters, and the enhanced clustering of galaxies) that cannot be accounted for by the quantity of observed matter. The ΛCDM model proposes specifically cold dark matter, hypothesized as: Non-baryonic: consists of matter other than protons and neutrons (and electrons, by convention, although electrons are not baryons). Cold: its velocity is far less than the speed of light at the epoch of radiation–matter equality (thus neutrinos are excluded, being non-baryonic but not cold). Dissipationless: cannot cool by radiating photons. Collisionless: dark matter particles interact with each other and other particles only through gravity and possibly
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
the weak force Dark matter constitutes about 26.5% of the mass–energy density of the universe. The remaining 4.9% comprises all ordinary matter observed as atoms, chemical elements, gas and plasma, the stuff of which visible planets, stars and galaxies are made. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10% of the ordinary matter contribution to the mass–energy density of the universe. The model includes a single originating event, the "Big Bang", which was not an explosion but the abrupt appearance of expanding spacetime containing radiation at temperatures of around 1015 K. This was immediately (within 10−29 seconds) followed by an exponential expansion of space by a scale multiplier of 1027 or more, known as cosmic inflation. The early universe remained hot (above 10 000 K) for several hundred thousand years, a state that is detectable as a residual cosmic microwave background, or CMB, a very low-energy radiation emanating from all parts of the sky. The "Big Bang" scenario, with cosmic inflation and standard particle physics, is the only cosmological model consistent with the observed continuing expansion of space, the observed distribution of lighter elements in the universe (hydrogen, helium, and lithium), and the spatial texture of minute irregularities (anisotropies) in the CMB radiation. Cosmic inflation also addresses the "horizon problem" in the CMB; indeed, it seems likely that the universe is larger than the observable particle horizon. == Cosmic expansion history == The expansion of the universe is parameterized by a dimensionless scale factor a = a ( t ) {\displaystyle a=a(t)} (with time t {\displaystyle t} counted from the birth of the universe), defined relative to the present time, so a 0 = a ( t 0 ) = 1 {\displaystyle a_{0}=a(t_{0})=1}
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
; the usual convention in cosmology is that subscript 0 denotes present-day values, so t 0 {\displaystyle t_{0}} denotes the age of the universe. The scale factor is related to the observed redshift z {\displaystyle z} of the light emitted at time t e m {\displaystyle t_{\mathrm {em} }} by a ( t em ) = 1 1 + z . {\displaystyle a(t_{\text{em}})={\frac {1}{1+z}}\,.} The expansion rate is described by the time-dependent Hubble parameter, H ( t ) {\displaystyle H(t)} , defined as H ( t ) ≡ a ˙ a , {\displaystyle H(t)\equiv {\frac {\dot {a}}{a}},} where a ˙ {\displaystyle {\dot {a}}} is the time-derivative of the scale factor. The first Friedmann equation gives the expansion rate in terms of the matter+radiation density ρ {\displaystyle \rho } , the curvature k {\displaystyle k} , and the cosmological constant Λ {\displaystyle \Lambda } , H 2 = ( a ˙ a ) 2 = 8 π G 3 ρ − k c 2 a 2 + Λ c 2 3 , {\displaystyle H^{2}=\left({\frac {\dot {a}}{a}}\right)^{2}={\frac {8\pi G}{3}}\rho -{\frac {kc^{2}}{a^{2}}}+{\frac {\Lambda c^{2}}{3}},} where, as usual c {\displaystyle c} is the speed of light and G {\displaystyle G} is the gravitational constant. A critical density ρ c r i t {\displaystyle \rho _{\mathrm {crit} }} is the present-day density, which gives zero curvature k {\displaystyle k} , assuming the cosmological constant Λ {\displaystyle \Lambda } is zero, regardless of its actual value. Substituting these conditions to the Friedmann equation gives ρ c r i t = 3 H 0 2 8 π G = 1.878 47 ( 23 ) × 10 − 26 h 2 k g ⋅ m − 3 , {\displaystyle \rho _{\mathrm {crit} }={\frac {3H_{0}^{2}}{8\pi G}}=1.878\;47(23)\times 10^{-26}\;h^{2}\;\mathrm {kg{\cdot }m^{-3}} ,} where h ≡ H 0 / ( 100 k
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
m ⋅ s − 1 ⋅ M p c − 1 ) {\displaystyle h\equiv H_{0}/(100\;\mathrm {km{\cdot }s^{-1}{\cdot }Mpc^{-1}} )} is the reduced Hubble constant. If the cosmological constant were actually zero, the critical density would also mark the dividing line between eventual recollapse of the universe to a Big Crunch, or unlimited expansion. For the Lambda-CDM model with a positive cosmological constant (as observed), the universe is predicted to expand forever regardless of whether the total density is slightly above or below the critical density; though other outcomes are possible in extended models where the dark energy is not constant but actually time-dependent. The present-day density parameter Ω x {\displaystyle \Omega _{x}} for various species is defined as the dimensionless ratio: 74 Ω x ≡ ρ x ( t = t 0 ) ρ c r i t = 8 π G ρ x ( t = t 0 ) 3 H 0 2 {\displaystyle \Omega _{x}\equiv {\frac {\rho _{x}(t=t_{0})}{\rho _{\mathrm {crit} }}}={\frac {8\pi G\rho _{x}(t=t_{0})}{3H_{0}^{2}}}} where the subscript x {\displaystyle x} is one of b {\displaystyle \mathrm {b} } for baryons, c {\displaystyle \mathrm {c} } for cold dark matter, r a d {\displaystyle \mathrm {rad} } for radiation (photons plus relativistic neutrinos), and Λ {\displaystyle \Lambda } for dark energy. Since the densities of various species scale as different powers of a {\displaystyle a} , e.g. a − 3 {\displaystyle a^{-3}} for matter etc., the Friedmann equation can be conveniently rewritten in terms of the various density parameters as H ( a ) ≡ a ˙ a = H 0 ( Ω c + Ω b ) a − 3 + Ω r a d a − 4 + Ω k a − 2 + Ω Λ a − 3 ( 1 + w ) , {\displaystyle H(a)\equiv
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
{\frac {\dot {a}}{a}}=H_{0}{\sqrt {(\Omega _{\rm {c}}+\Omega _{\rm {b}})a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{k}a^{-2}+\Omega _{\Lambda }a^{-3(1+w)}}},} where w {\displaystyle w} is the equation of state parameter of dark energy, and assuming negligible neutrino mass (significant neutrino mass requires a more complex equation). The various Ω {\displaystyle \Omega } parameters add up to 1 {\displaystyle 1} by construction. In the general case this is integrated by computer to give the expansion history a ( t ) {\displaystyle a(t)} and also observable distance–redshift relations for any chosen values of the cosmological parameters, which can then be compared with observations such as supernovae and baryon acoustic oscillations. In the minimal 6-parameter Lambda-CDM model, it is assumed that curvature Ω k {\displaystyle \Omega _{k}} is zero and w = − 1 {\displaystyle w=-1} , so this simplifies to H ( a ) = H 0 Ω m a − 3 + Ω r a d a − 4 + Ω Λ {\displaystyle H(a)=H_{0}{\sqrt {\Omega _{\rm {m}}a^{-3}+\Omega _{\mathrm {rad} }a^{-4}+\Omega _{\Lambda }}}} Observations show that the radiation density is very small today, Ω rad ∼ 10 − 4 {\displaystyle \Omega _{\text{rad}}\sim 10^{-4}} ; if this term is neglected the above has an analytic solution a ( t ) = ( Ω m / Ω Λ ) 1 / 3 sinh 2 / 3 ⁡ ( t / t Λ ) {\displaystyle a(t)=(\Omega _{\rm {m}}/\Omega _{\Lambda })^{1/3}\,\sinh ^{2/3}(t/t_{\Lambda })} where t Λ ≡ 2 / ( 3 H 0 Ω Λ ) ; {\displaystyle t_{\Lambda }\equiv 2/(3H_{0}{\sqrt {\Omega _{\Lambda }}})\ ;} this is fairly accurate for a > 0.01 {\displaystyle a>0.01} or t > 10 {\displaystyle t>10} million years. Solving for a ( t ) = 1 {\displaystyle a(t)=1} gives the present age of the universe t 0 {\displaystyle t_{0}} in terms of the other parameters. It follows
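The analytic matter+Λ solution can be checked numerically. The sketch below is illustrative (not part of the article): it assumes Planck-like best-fit values Ω_m = 0.315, Ω_Λ = 0.685, and H_0 = 67.4 km/s/Mpc, and solves a(t_0) = 1 for the age of the universe.

```python
# Minimal numeric sketch of the analytic matter+Lambda solution, with
# radiation neglected. Parameter values are assumed Planck-like best fits
# chosen for illustration, not definitive.
import math

H0_KM_S_MPC = 67.4            # Hubble constant, km s^-1 Mpc^-1 (assumed)
OMEGA_M, OMEGA_L = 0.315, 0.685
KM_PER_MPC = 3.0857e19        # kilometres in one megaparsec
S_PER_GYR = 3.156e16          # seconds in one gigayear

H0 = H0_KM_S_MPC / KM_PER_MPC                  # H0 in s^-1
t_lam = 2.0 / (3.0 * H0 * math.sqrt(OMEGA_L))  # t_Lambda from the text

def scale_factor(t_s: float) -> float:
    """a(t) = (Om/OL)^(1/3) * sinh^(2/3)(t / t_Lambda)."""
    return (OMEGA_M / OMEGA_L) ** (1 / 3) * math.sinh(t_s / t_lam) ** (2 / 3)

# Solving a(t0) = 1 analytically: sinh(t0 / t_Lambda) = sqrt(OL / Om)
t0 = t_lam * math.asinh(math.sqrt(OMEGA_L / OMEGA_M))
print(f"t0 ≈ {t0 / S_PER_GYR:.1f} Gyr")  # ≈ 13.8 Gyr, the accepted age
```

Inverting the sinh in closed form avoids a numerical root-find; `scale_factor(t0)` returns 1 exactly, which is a handy self-check.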
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
that the transition from decelerating to accelerating expansion (the second derivative a ¨ {\displaystyle {\ddot {a}}} crossing zero) occurred when a = ( Ω m / 2 Ω Λ ) 1 / 3 , {\displaystyle a=(\Omega _{\rm {m}}/2\Omega _{\Lambda })^{1/3},} which evaluates to a ∼ 0.6 {\displaystyle a\sim 0.6} or z ∼ 0.66 {\displaystyle z\sim 0.66} for the best-fit parameters estimated from the Planck spacecraft. == Parameters == Multiple variants of the ΛCDM model are used with some differences in parameters.: 25.1 One such set is outlined in the table below. The Planck collaboration version of the ΛCDM model is based on six parameters: baryon density parameter; dark matter density parameter; scalar spectral index; two parameters related to curvature fluctuation amplitude; and the probability that photons from the early universe will be scattered once on route (called reionization optical depth). Six is the smallest number of parameters needed to give an acceptable fit to the observations; other possible parameters are fixed at "natural" values, e.g. total density parameter = 1.00, dark energy equation of state = −1. The parameter values, and uncertainties, are estimated using computer searches to locate the region of parameter space providing an acceptable match to cosmological observations. From these six parameters, the other model values, such as the Hubble constant and the dark energy density, can be calculated. == Historical development == The discovery of the cosmic microwave background (CMB) in 1964 confirmed a key prediction of the Big Bang cosmology. From that point on, it was generally accepted that the universe started in a hot, dense state and has been expanding over time. The rate of expansion depends on the types of matter and energy present in the universe, and in particular, whether the total density is above or below the so-called critical density. During
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
the 1970s, most attention focused on pure-baryonic models, but there were serious challenges explaining the formation of galaxies, given the small anisotropies in the CMB (upper limits at that time). In the early 1980s, it was realized that this could be resolved if cold dark matter dominated over the baryons, and the theory of cosmic inflation motivated models with critical density. During the 1980s, most research focused on cold dark matter with critical density in matter, around 95% CDM and 5% baryons: these showed success at forming galaxies and clusters of galaxies, but problems remained; notably, the model required a Hubble constant lower than preferred by observations, and observations around 1988–1990 showed more large-scale galaxy clustering than predicted. These difficulties sharpened with the discovery of CMB anisotropy by the Cosmic Background Explorer in 1992, and several modified CDM models, including ΛCDM and mixed cold and hot dark matter, came under active consideration through the mid-1990s. The ΛCDM model then became the leading model following the observations of accelerating expansion in 1998, and was quickly supported by other observations: in 2000, the BOOMERanG microwave background experiment measured the total (matter–energy) density to be close to 100% of critical, whereas in 2001 the 2dFGRS galaxy redshift survey measured the matter density to be near 25%; the large difference between these values supports a positive Λ or dark energy. Much more precise spacecraft measurements of the microwave background from WMAP in 2003–2010 and Planck in 2013–2015 have continued to support the model and pin down the parameter values, most of which are constrained below 1 percent uncertainty. == Successes == Among all cosmological models, the ΛCDM model has been the most successful; it describes a wide range of astronomical observations with remarkable accuracy.: 58 The notable successes include: Accurate modeling the high-precision CMB
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
angular distribution measure by the Planck mission and Atacama Cosmology Telescope. Accurate description of the linear E-mode polarization of the CMB radiation due to fluctuations on the surface of last scattering events. Prediction of the observed B-mode polarization of the CMB light due to primordial gravitational waves. Observations of H2O emission spectra from a galaxy 12.8 billion light years away consistent with molecules excited by cosmic background radiation much more energetic – 16-20K – than the CMB we observe now, 3K. Predictions of the primordial abundance of deuterium as a result of Big Bang nucleosynthesis. The observed abundance matches the one derived from the nucleosynthesis model with the value for baryon density derived from CMB measurements.: 4.1.2 In addition to explaining many pre-2000 observations, the model has made a number of successful predictions: notably the existence of the baryon acoustic oscillation feature, discovered in 2005 in the predicted location; and the statistics of weak gravitational lensing, first observed in 2000 by several teams. The polarization of the CMB, discovered in 2002 by DASI, has been successfully predicted by the model: in the 2015 Planck data release, there are seven observed peaks in the temperature (TT) power spectrum, six peaks in the temperature–polarization (TE) cross spectrum, and five peaks in the polarization (EE) spectrum. The six free parameters can be well constrained by the TT spectrum alone, and then the TE and EE spectra can be predicted theoretically to few-percent precision with no further adjustments allowed. == Challenges == Despite the widespread success of ΛCDM in matching observations of our universe, cosmologists believe that the model may be an approximation of a more fundamental model. === Lack of detection === Extensive searches for dark matter particles have so far shown no well-agreed detection, while dark energy may be almost impossible to
{ "page_id": 985963, "source": null, "title": "Lambda-CDM model" }
detect in a laboratory, and its value is extremely small compared to vacuum energy theoretical predictions. === Violations of the cosmological principle === The ΛCDM model, like all models built on the Friedmann–Lemaître–Robertson–Walker metric, assume that the universe looks the same in all directions (isotropy) and from every location (homogeneity) if you look at a large enough scale: "the universe looks the same whoever and wherever you are." This cosmological principle allows a metric, Friedmann–Lemaître–Robertson–Walker metric, to be derived and developed into a theory to compare to experiments. Without the principle, a metric would need to be extracted from astronomical data, which may not be possible.: 408 The assumptions were carried over into the ΛCDM model. However, some findings suggested violations of the cosmological principle. ==== Violations of isotropy ==== Evidence from galaxy clusters, quasars, and type Ia supernovae suggest that isotropy is violated on large scales. Data from the Planck Mission shows hemispheric bias in the cosmic microwave background in two respects: one with respect to average temperature (i.e. temperature fluctuations), the second with respect to larger variations in the degree of perturbations (i.e. densities). The European Space Agency (the governing body of the Planck Mission) has concluded that these anisotropies in the CMB are, in fact, statistically significant and can no longer be ignored. Already in 1967, Dennis Sciama predicted that the cosmic microwave background has a significant dipole anisotropy. In recent years, the CMB dipole has been tested, and the results suggest our motion with respect to distant radio galaxies and quasars differs from our motion with respect to the cosmic microwave background. The same conclusion has been reached in recent studies of the Hubble diagram of Type Ia supernovae and quasars. This contradicts the cosmological principle. The CMB dipole is hinted at through a number of
other observations. First, even within the cosmic microwave background, there are curious directional alignments and an anomalous parity asymmetry that may have an origin in the CMB dipole. Separately, the CMB dipole direction has emerged as a preferred direction in studies of alignments in quasar polarizations, scaling relations in galaxy clusters, strong lensing time delay, Type Ia supernovae, and quasars and gamma-ray bursts as standard candles. The fact that all these independent observables, based on different physics, are tracking the CMB dipole direction suggests that the Universe is anisotropic in the direction of the CMB dipole. Nevertheless, some authors have stated that the universe around Earth is isotropic at high significance, based on studies of the combined cosmic microwave background temperature and polarization maps. ==== Violations of homogeneity ==== The homogeneity of the universe needed for the ΛCDM model applies to very large volumes of space. N-body simulations in ΛCDM show that the spatial distribution of galaxies is statistically homogeneous if averaged over scales of 260/h Mpc or more. Numerous claims of large-scale structures reported to be in conflict with the predicted scale of homogeneity for ΛCDM do not withstand statistical analysis.: 7.8 === El Gordo galaxy cluster collision === El Gordo is a massive interacting galaxy cluster in the early Universe ( z = 0.87 {\displaystyle z=0.87} ). The extreme properties of El Gordo in terms of its redshift, mass, and collision velocity lead to strong ( 6.16 σ {\displaystyle 6.16\sigma } ) tension with the ΛCDM model. The properties of El Gordo are, however, consistent with cosmological simulations in the framework of MOND due to more rapid structure formation. === KBC void === The KBC void is an immense, comparatively empty region of space containing the Milky Way, approximately 2 billion light-years (600 megaparsecs, Mpc) in diameter. Some authors have
said the existence of the KBC void violates the assumption that the CMB reflects baryonic density fluctuations at z = 1100 {\displaystyle z=1100} or Einstein's theory of general relativity, either of which would violate the ΛCDM model, while other authors have claimed that supervoids as large as the KBC void are consistent with the ΛCDM model. === Hubble tension === Statistically significant differences remain between values of the Hubble constant derived by matching the ΛCDM model to data from the "early universe", like the cosmic background radiation, and values derived from stellar distance measurements in the "late universe". While systematic error in the measurements remains a possibility, many different kinds of observations agree with one of these two values of the constant. This difference, called the Hubble tension, is widely acknowledged to be a major problem for the ΛCDM model. Dozens of proposals for modifications of ΛCDM or completely new models have been published to explain the Hubble tension. Among these models are many that modify the properties of dark energy or of dark matter over time, interactions between dark energy and dark matter, unified dark energy and matter, other forms of dark radiation like sterile neutrinos, modifications to the properties of gravity, modification of the effects of inflation, or changes to the properties of elementary particles in the early universe, among others. None of these models can simultaneously explain the breadth of other cosmological data as well as ΛCDM. === S8 tension === The " S 8 {\displaystyle S_{8}} tension" is a name for another open problem for the ΛCDM model. The S 8 {\displaystyle S_{8}} parameter in the ΛCDM model quantifies the amplitude of matter fluctuations in the late universe and is defined as S 8 ≡ σ 8 Ω m / 0.3 {\displaystyle S_{8}\equiv \sigma _{8}{\sqrt
{\Omega _{\rm {m}}/0.3}}} Early-time measurements (e.g. from CMB data collected using the Planck observatory) and late-time measurements (e.g. of weak gravitational lensing events) facilitate increasingly precise values of S 8 {\displaystyle S_{8}} . However, these two categories of measurement differ by more than their uncertainties allow. This discrepancy is called the S 8 {\displaystyle S_{8}} tension. The name "tension" reflects that the disagreement is not merely between two data sets: the many sets of early- and late-time measurements agree well within their own categories, but there is an unexplained difference between values obtained from different points in the evolution of the universe. Such a tension indicates that the ΛCDM model may be incomplete or in need of correction. Some values for S 8 {\displaystyle S_{8}} are 0.832±0.013 (2020 Planck), 0.766+0.020−0.014 (2021 KIDS), 0.776±0.017 (2022 DES), 0.790+0.018−0.014 (2023 DES+KIDS), 0.769+0.031−0.034 – 0.776+0.032−0.033 (2023 HSC-SSP), 0.86±0.01 (2024 EROSITA). Values have also been obtained using peculiar velocities, 0.637±0.054 (2020) and 0.776±0.033 (2020), among other methods. === Axis of evil === The "axis of evil" is a name given to a purported correlation between the plane of the Solar System and aspects of the cosmic microwave background (CMB). Such a correlation would give the plane of the Solar System and hence the location of Earth a greater significance than might be expected by chance, a result which has been claimed to be evidence of a departure from the Copernican principle. However, a 2016 study compared isotropic and anisotropic cosmological models against WMAP and Planck data and found no evidence for anisotropy. === Cosmological lithium problem === The actual observable amount of lithium in the universe is less than the amount calculated from the ΛCDM model by a factor of 3–4.: 141 If every calculation is correct, then solutions beyond the existing ΛCDM model might be needed.
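Tensions such as the Hubble and S8 discrepancies above are commonly quantified as the gap between two measurements in units of their combined uncertainty. A minimal sketch (assuming Gaussian errors; the asymmetric KiDS error bars +0.020/−0.014 quoted above are symmetrized to 0.017 for simplicity):

```python
import math

def tension_sigma(x1, err1, x2, err2):
    """Gap between two measurements in units of their combined 1-sigma uncertainty."""
    return abs(x1 - x2) / math.sqrt(err1**2 + err2**2)

# S8 values quoted above: Planck 2020 (early universe) vs. KiDS 2021 (late universe)
planck, planck_err = 0.832, 0.013
kids, kids_err = 0.766, 0.017  # symmetrized from +0.020/-0.014

print(f"S8 tension: {tension_sigma(planck, planck_err, kids, kids_err):.1f} sigma")
```

With these numbers the gap comes out at roughly 3σ, the level of disagreement usually quoted for the S8 tension.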
=== Shape of the universe === The ΛCDM model assumes that the shape of the universe is of zero curvature (is flat) and has an undetermined topology. In 2019, interpretation of Planck data suggested that the curvature of the universe might be positive (often called "closed"), which would contradict the ΛCDM model. Some authors have suggested that the Planck data detecting a positive curvature could be evidence of a local inhomogeneity in the curvature of the universe rather than the universe actually being globally a 3-manifold of positive curvature. === Violations of the strong equivalence principle === The ΛCDM model assumes that the strong equivalence principle is true. However, in 2020 a group of astronomers analyzed data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) sample, together with estimates of the large-scale external gravitational field from an all-sky galaxy catalog. They concluded that there was highly statistically significant evidence of violations of the strong equivalence principle in weak gravitational fields in the vicinity of rotationally supported galaxies. They observed an effect inconsistent with tidal effects in the ΛCDM model. These results have been challenged as failing to consider inaccuracies in the rotation curves and correlations between galaxy properties and clustering strength, and as inconsistent with similar analyses of other galaxies. === Cold dark matter discrepancies === Several discrepancies between the predictions of cold dark matter in the ΛCDM model and observations of galaxies and their clustering have arisen. Proposed solutions exist for some of these problems, but it remains unclear whether they can be solved without abandoning the ΛCDM model.
Milgrom, McGaugh, and Kroupa have criticized the dark matter portions of the theory from the perspective of galaxy formation models and supporting the alternative modified Newtonian dynamics (MOND) theory, which requires a modification of the Einstein field equations and the
Friedmann equations as seen in proposals such as modified gravity theory (MOG theory) or tensor–vector–scalar gravity theory (TeVeS theory). Other proposals by theoretical astrophysicists of cosmological alternatives to Einstein's general relativity that attempt to account for dark energy or dark matter include f(R) gravity, scalar–tensor theories such as galileon theories (see Galilean invariance), brane cosmologies, the DGP model, and massive gravity and its extensions such as bimetric gravity. ==== Cuspy halo problem ==== The density distributions of dark matter halos in cold dark matter simulations (at least those that do not include the impact of baryonic feedback) are much more peaked than what is inferred from observed galaxy rotation curves. ==== Dwarf galaxy problem ==== Cold dark matter simulations predict large numbers of small dark matter halos, more numerous than the number of small dwarf galaxies that are observed around galaxies like the Milky Way. ==== Satellite disk problem ==== Dwarf galaxies around the Milky Way and Andromeda galaxies are observed to be orbiting in thin, planar structures whereas the simulations predict that they should be distributed randomly about their parent galaxies. However, the latest research suggests this seemingly bizarre alignment is just a quirk which will dissolve over time. ==== High-velocity galaxy problem ==== Galaxies in the NGC 3109 association are moving away too rapidly to be consistent with expectations in the ΛCDM model. In this framework, NGC 3109 is too massive and distant from the Local Group for it to have been flung out in a three-body interaction involving the Milky Way or Andromeda Galaxy. ==== Galaxy morphology problem ==== If galaxies grew hierarchically, then massive galaxies required many mergers. Major mergers inevitably create a classical bulge. On the contrary, about 80% of observed galaxies give evidence of no such bulges, and giant pure-disc galaxies are commonplace.
The tension can be quantified by comparing the observed distribution of galaxy shapes today with predictions from high-resolution hydrodynamical cosmological simulations in the ΛCDM framework, revealing a highly significant problem that is unlikely to be solved by improving the resolution of the simulations. The high bulgeless fraction was nearly constant for 8 billion years. ==== Fast galaxy bar problem ==== If galaxies were embedded within massive halos of cold dark matter, then the bars that often develop in their central regions would be slowed down by dynamical friction with the halo. This is in serious tension with the fact that observed galaxy bars are typically fast. ==== Small scale crisis ==== Comparison of the model with observations may have some problems on sub-galaxy scales, possibly predicting too many dwarf galaxies and too much dark matter in the innermost regions of galaxies. This problem is called the "small scale crisis". These small scales are harder to resolve in computer simulations, so it is not yet clear whether the problem is the simulations, non-standard properties of dark matter, or a more radical error in the model. ==== High redshift galaxies ==== Observations from the James Webb Space Telescope have yielded various galaxies confirmed by spectroscopy at high redshift, such as JADES-GS-z13-0 at a cosmological redshift of 13.2. Other candidate galaxies which have not been confirmed by spectroscopy include CEERS-93316 at a cosmological redshift of 16.4. The existence of surprisingly massive galaxies in the early universe challenges the preferred models describing how dark matter halos drive galaxy formation. It remains to be seen whether a revision of the Lambda-CDM model with parameters given by the Planck Collaboration is necessary to resolve this issue. The discrepancies could also be explained by particular properties (stellar masses or effective volume) of the candidate galaxies, a yet-unknown force or particle
outside of the Standard Model through which dark matter interacts, more efficient baryonic matter accumulation by the dark matter halos, early dark energy models, or the hypothesized long-sought Population III stars. === Missing baryon problem === Massimo Persic and Paolo Salucci first estimated the baryonic density present today in ellipticals, spirals, groups and clusters of galaxies. They performed an integration of the baryonic mass-to-light ratio over luminosity (in the following M b / L {\textstyle M_{\rm {b}}/L} ), weighted with the luminosity function ϕ ( L ) {\textstyle \phi (L)} over the previously mentioned classes of astrophysical objects: ρ b = ∑ ∫ L ϕ ( L ) M b L d L . {\displaystyle \rho _{\rm {b}}=\sum \int L\phi (L){\frac {M_{\rm {b}}}{L}}\,dL.} The result was: Ω b = Ω ∗ + Ω gas = 2.2 × 10 − 3 + 1.5 × 10 − 3 h − 1.3 ≃ 0.003 , {\displaystyle \Omega _{\rm {b}}=\Omega _{*}+\Omega _{\text{gas}}=2.2\times 10^{-3}+1.5\times 10^{-3}\;h^{-1.3}\simeq 0.003,} where h ≃ 0.72 {\displaystyle h\simeq 0.72} . Note that this value is much lower than the prediction of standard cosmic nucleosynthesis Ω b ≃ 0.0486 {\displaystyle \Omega _{\rm {b}}\simeq 0.0486} , so that stars and gas in galaxies and in galaxy groups and clusters account for less than 10% of the primordially synthesized baryons. This issue is known as the problem of the "missing baryons". The missing baryon problem has since been claimed to be resolved. Using observations of the kinematic Sunyaev–Zel'dovich effect spanning more than 90% of the lifetime of the Universe, in 2021 astrophysicists found that approximately 50% of all baryonic matter is outside dark matter haloes, filling the space between galaxies. Together with the amount of baryons inside galaxies and surrounding them, the total amount of baryons in the late time Universe is compatible with early Universe
measurements. === Conventionalism === It has been argued that the ΛCDM model has adopted conventionalist stratagems, rendering it unfalsifiable in the sense defined by Karl Popper. When faced with new data not in accord with a prevailing model, the conventionalist will find ways to adapt the theory rather than declare it false. Thus dark matter was added after the observations of anomalous galaxy rotation rates. Thomas Kuhn viewed the process differently, as "problem solving" within the existing paradigm. == Extended models == Extended models allow one or more of the "fixed" parameters above to vary, in addition to the basic six; so these models join smoothly to the basic six-parameter model in the limit that the additional parameter(s) approach the default values. For example, possible extensions of the simplest ΛCDM model allow for spatial curvature ( Ω tot {\displaystyle \Omega _{\text{tot}}} may be different from 1); or quintessence rather than a cosmological constant where the equation of state of dark energy is allowed to differ from −1. Cosmic inflation predicts tensor fluctuations (gravitational waves). Their amplitude is parameterized by the tensor-to-scalar ratio (denoted r {\displaystyle r} ), which is determined by the unknown energy scale of inflation. Other modifications allow hot dark matter in the form of neutrinos more massive than the minimal value, or a running spectral index; the latter is generally not favoured by simple cosmic inflation models. Allowing additional variable parameter(s) will generally increase the uncertainties in the standard six parameters quoted above, and may also shift the central values slightly. The table below shows results for each of the possible "6+1" scenarios with one additional variable parameter; this indicates that, as of 2015, there is no convincing evidence that any additional parameter is different from its default value. Some researchers have suggested that there is a
running spectral index, but no statistically significant study has revealed one. Theoretical expectations suggest that the tensor-to-scalar ratio r {\displaystyle r} should be between 0 and 0.3, and the latest results are within those limits. == See also == Bolshoi cosmological simulation Galaxy formation and evolution Illustris project List of cosmological computation software Millennium Run Weakly interacting massive particles (WIMPs) The ΛCDM model is also known as the standard model of cosmology, but is not related to the Standard Model of particle physics. Inhomogeneous cosmology == References == == Further reading == Ostriker, J. P.; Steinhardt, P. J. (1995). "Cosmic Concordance". arXiv:astro-ph/9505066. Ostriker, Jeremiah P.; Mitton, Simon (2013). Heart of Darkness: Unraveling the mysteries of the invisible universe. Princeton, NJ: Princeton University Press. ISBN 978-0-691-13430-7. Rebolo, R.; et al. (2004). "Cosmological parameter estimation using Very Small Array data out to ℓ= 1500". Monthly Notices of the Royal Astronomical Society. 353 (3): 747–759. arXiv:astro-ph/0402466. Bibcode:2004MNRAS.353..747R. doi:10.1111/j.1365-2966.2004.08102.x. S2CID 13971059. == External links == Cosmology tutorial/NedWright Millennium Simulation WMAP estimated cosmological parameters/Latest Summary
Given two interacting atoms A and B, the cophonicity of the A-B atomic pair is a measure of the overlap of the A and B contributions to a specific range of vibrational frequencies. In the field of condensed matter physics, cophonicity is a metric aimed at the parametrization of the dynamical interactions in terms of the atomic types forming the A-B pair. In connection with other electronic and structural descriptors, such as the covalency metric or the distortion mode analysis from group theory, the A-B pair cophonicity is a guide to properly select either A or B atomic species to tune specific vibrational frequencies of a given system. The cophonicity metric was originally designed for the study of the atomic motions in transition metal dichalcogenides, but its formulation is general and can be applied to any kind of system, irrespective of the chemical composition and stoichiometry. == Mathematical formulation == Considering the phonon density of states (pDOS) g ( ω ) {\displaystyle g(\omega )} in the first Brillouin zone, the center mass C M A {\displaystyle CM^{A}} of the atom-projected pDOS g A ( ω ) {\displaystyle g^{A}\left(\omega \right)} of an atom A is defined as C M A = ∫ ω 0 ω 1 ω g A ( ω ) d ω ∫ ω 0 ω 1 g A ( ω ) d ω {\displaystyle CM^{A}={\frac {\int \limits _{\omega _{0}}\limits ^{\omega _{1}}\omega g^{A}\left(\omega \right)d\omega }{\int \limits _{\omega _{0}}\limits ^{\omega _{1}}g^{A}\left(\omega \right)d\omega }}} where g A ( ω ) {\displaystyle g^{A}\left(\omega \right)} is the contribution of the atom A to g ( ω ) {\displaystyle g(\omega )} and ω 0 < ω 1 {\displaystyle \omega _{0}<\omega _{1}} ; the total density of states of the solid is g ( ω ) {\displaystyle g(\omega )} , defined as g
{ "page_id": 49875825, "source": null, "title": "Cophonicity" }
( ω ) = ∑ X g X ( ω ) {\displaystyle g(\omega )=\sum _{X}{g^{X}\left(\omega \right)}} and obtained by summing over all atoms X of the unit cell. The integration interval [ ω 0 , ω 1 ] {\displaystyle [\omega _{0},\omega _{1}]} is chosen in such a way that it encompasses all the phonon states relevant for the specific study. The integral in the denominator of the definition of C M A {\displaystyle CM^{A}} is the contribution of the atom A to the states in the frequency range [ ω 0 , ω 1 ] {\displaystyle [\omega _{0},\omega _{1}]} ; we call this quantity the phonicity of the A atom in that specific frequency range. The phonicity of an atom then represents the amount of phonon states that such an atom contributes to form; in this respect, it can be regarded as the phonon counterpart of the atomic valence, that is, the number of electrons with which an atom participates in forming the electronic states of the system, counted as the integral of the atom-projected electronic density of states. Consider a generic A-B atomic pair. The relative position C p h (A-B) {\displaystyle C_{ph}{\text{(A-B)}}} of the center mass of g A ( ω ) {\displaystyle g^{A}\left(\omega \right)} with respect to the center mass of g B ( ω ) {\displaystyle g^{B}\left(\omega \right)} is given as C p h (A-B) = C M A − C M B {\displaystyle C_{ph}{\text{(A-B)}}=CM^{A}-CM^{B}} which is specified in the same units as the frequency ω {\displaystyle \omega } . A positive (negative) sign of C p h (A-B) {\displaystyle C_{ph}{\text{(A-B)}}} indicates that the A (B) atom contributes more to the high frequency modes of the specified range. The smaller | C p h (A-B) | {\displaystyle |C_{ph}{\text{(A-B)}}|} is, the higher the mixing of the
A and B contributions to the frequency band, and the two atoms have the same weight in the determination of the modes specific to the considered energy range. We define the quantity C p h (A-B) {\displaystyle C_{ph}{\text{(A-B)}}} as the cophonicity of the A-B atomic pair, in analogy with the A-B bond covalency definition formulated in terms of atomic contributions to the electronic density of states. Cophonicity is then a characteristic of a specific atomic pair found in a system. According to the metric introduced above, C p h (A-B) = 0 {\displaystyle C_{ph}{\text{(A-B)}}=0} would indicate a perfect cophonicity, that is, equal participation of both atomic species in the formation of the phonon states in the considered range of energies. The comparison between two cophonicity values, obtained considering distinct atomic couples embedded in two structures with different connectivity, will provide meaningful information under certain conditions. For example, C p h (Mo-S) {\displaystyle C_{ph}{\text{(Mo-S)}}} in molybdenum disulfide bulk and nanoclusters, despite being calculated in the same frequency range, could refer to distinct sets of vibrational modes; this is due to the distinct topologies that, in turn, determine different electronic environments. In this case, any change of the cophonicity by means of substitutional defects could lead to non-correlated changes of the vibrational frequencies. However, if the two structures with distinct topologies are connected by continuous geometric transformations, then the electronic environments of the atomic pairs, and thus the vibrational frequencies and the associated modes, are also related by such continuous transformations; in this case, the cophonicity of a specific atomic pair in one structure can be mapped into the cophonicity of the second one, granting the transferability of the definition, hence the reliability of the cophonicity comparison. == References ==
Menthone is a chemical compound of the monoterpene class of naturally occurring organic compounds found in a number of essential oils, one that presents with a minty flavor. It is a specific pair of stereoisomers of the four possible such isomers for the chemical structure, 2-isopropyl-5-methylcyclohexanone. Of those, the stereoisomer l-menthone—formally, the (2S,5R)-trans isomer of that structure, as shown at right—is the most abundant in nature. Menthone is structurally related to menthol, which has a secondary alcohol (>C-OH) in place of the carbon-oxygen double bond (carbonyl group) projecting from the cyclohexane ring. Menthone is obtained for commercial use after purifying essential oils pressed from Mentha species (peppermint and corn mint). It is used as a flavorant and in perfumes and cosmetics for its characteristic minty aroma. == Occurrence == Menthone is a constituent of the essential oils of pennyroyal, peppermint, corn mint, pelargonium geraniums, and other plant species. In most essential oils, it is a minor component. Menthone was first synthesized by oxidation of menthol in 1881, before being found as a component in essential oils in 1891. Of the isomers possible for this chemical structure (see below), the one termed l-menthone—formally, the (2S,5R)-trans-2-isopropyl-5-methylcyclohexanone (see infobox and below)—is the most abundant in nature. == Physical and sensory properties == Menthone is a liquid under standard conditions, and has a density of 0.895 g/cm3. Under the same conditions, the melting point is −6 °C, and its boiling point is 207 °C. Menthone interacts cognitively with other components in food, drink, and other consumables, to present with what is termed a minty flavor. Pure l-menthone has been described as having an intensely minty clean aroma; in contrast, d-isomenthone has a "green" note, increasing levels of which are perceived to detract from the aroma quality of l-menthone. == Structure and stereochemistry ==
{ "page_id": 3214194, "source": null, "title": "Menthone" }
The structure of 2-isopropyl-5-methylcyclohexanone (menthones and isomenthones, see following) was established historically by demonstrating the identity of natural and synthetic products after chemical synthesis of this structure from other chemical compounds of established structure; these inferential understandings have, in modern organic chemistry, been augmented by supporting mass spectrometric and spectroscopic evidence (e.g., from NMR spectroscopy and circular dichroism) to make the conclusions secure. The structure 2-isopropyl-5-methylcyclohexanone has two asymmetric carbon centers, one at each attachment point of the two alkyl group substituents, the isopropyl in the 2-position and the methyl in the 5-position of the cyclohexane framework. The spatial arrangement of atoms—the absolute configuration—at these two points is designated by the descriptors R (Latin, rectus, right) or S (L., sinister, left) based on the Cahn–Ingold–Prelog priority rules. Hence, four unique stereoisomers are possible for this structure: (2S,5S), (2R,5S), (2S,5R) and (2R,5R). The (2S,5S) and (2R,5R) stereoisomers project the isopropyl and methyl groups from the same "side" of the cyclohexane ring, are the so-called cis isomers, and are termed isomenthone; the (2R,5S) and (2S,5R) stereoisomers project the two groups on opposite sides of the ring, are the so-called trans isomers, and are referred to as menthone. Because the (2S,5R) isomer has an observed negative optical rotation, it is called l-menthone or (−)-menthone. It is the enantiomeric partner of the (2R,5S) isomer: (+)- or d-menthone. === Interconversion === Menthone and isomenthone interconvert easily, the equilibrium favoring menthone; if menthone and isomenthone are equilibrated at room temperature, the isomenthone content will reach 29%.
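The quoted room-temperature equilibrium composition can be translated into an equilibrium constant and a standard free-energy difference. A back-of-envelope sketch (298 K is assumed; the 29% figure is from the text, the rest is standard thermodynamics):

```python
import math

R = 8.314  # gas constant, J/(mol*K)
T = 298.0  # assumed room temperature, K

x_iso = 0.29               # equilibrium mole fraction of isomenthone
K = x_iso / (1.0 - x_iso)  # K for menthone <-> isomenthone (71:29 mixture)
dG = -R * T * math.log(K)  # standard free-energy change, J/mol

# positive dG for menthone -> isomenthone means menthone is favored
print(f"K = {K:.2f}, dG = {dG/1000:.1f} kJ/mol")
```

The 71:29 ratio thus corresponds to menthone being favored by only about 2 kJ/mol, consistent with the facile interconversion described below.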
Menthone can easily be converted to isomenthone and vice versa by a reversible epimerization through an enol intermediate, which changes the direction of optical rotation, so that l-menthone becomes d-isomenthone, and d-menthone becomes l-isomenthone. == Preparation and reactivity == Menthone is obtained commercially by fractional crystallization of the oils
pressed from peppermint and corn mint (Mentha species). In the experimental laboratory, l-menthone may be prepared by oxidation of menthol with acidified dichromate. If the chromic acid oxidation is performed with stoichiometric oxidant in the presence of diethyl ether as co-solvent, a method introduced by H.C. Brown and colleagues in 1971, the epimerization of l-menthone to d-isomenthone is largely avoided. == History == Menthone was first described by Moriya in 1881. It was later synthesized by heating menthol with chromic acid, and its structure was later confirmed by synthesizing it from 2-isopropyl-5-methylpimelic acid. Menthone was one of the original substrates reported in the discovery of the still widely used synthetic organic chemistry transformation, the Baeyer-Villiger (B-V) oxidation, as reported by Adolf von Baeyer and Victor Villiger in 1899; Baeyer and Villiger noted that menthone reacted with monopersulfuric acid to produce the corresponding oxacycloheptane (oxepane-type) lactone, with an oxygen atom inserted between the carbonyl carbon and the ring carbon attached to the isopropyl substituent. In 1889, Ernst Beckmann discovered that dissolving menthone in concentrated sulfuric acid gave a new ketonic material with an equal but opposite optical rotation to the starting material. Beckmann's inferences from his results situated menthone as a crucial player in a great mechanistic discovery in organic chemistry. Beckmann concluded that the change in structure underlying the observed opposite optical rotation was the result of an inversion of configuration at the asymmetric carbon atom next to the carbonyl group (which, at that time, was believed to be the carbon atom attached to the methyl rather than the isopropyl group). He postulated that this occurred through an intermediate enol—a tautomer of the ketone—such that the original absolute configuration of that carbon atom changed as its geometry went from tetrahedral to trigonal planar.
This report is an early example of
an inference that an otherwise undetectable intermediate was involved in a reaction mechanism, one that could account for the observed structural outcome of the reaction. == See also == Piperitone Pulegone == Further reading == == References ==
Brain-specific angiogenesis inhibitors are G-protein coupled receptors belonging to the class B secretin subfamily. Members include: Brain-specific angiogenesis inhibitor 1 Brain-specific angiogenesis inhibitor 2 Brain-specific angiogenesis inhibitor 3 == References == == External links == BAI1+protein,+human at the U.S. National Library of Medicine Medical Subject Headings (MeSH) BAI2+protein,+human at the U.S. National Library of Medicine Medical Subject Headings (MeSH) BAI3+protein,+human at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
{ "page_id": 14420850, "source": null, "title": "Brain-specific angiogenesis inhibitor" }
In chemistry, the reactivity–selectivity principle (RSP) states that a more reactive chemical compound or reactive intermediate is less selective in chemical reactions. In this context, selectivity represents the ratio of reaction rates. This principle was generally accepted until the 1970s, when too many exceptions started to appear. The principle is now considered obsolete. A classic example of perceived RSP found in older organic chemistry textbooks concerns the free radical halogenation of simple alkanes. Whereas the relatively unreactive bromine reacts with 2-methylbutane to give predominantly 2-bromo-2-methylbutane, the reaction with the much more reactive chlorine results in a mixture of all four regioisomers. Another example of RSP can be found in the selectivity of the reaction of certain carbocations with azides and water. The very stable triphenylmethyl carbocation derived from solvolysis of the corresponding triphenylmethyl chloride reacts 100 times faster with the azide anion than with water. When the carbocation is the very reactive tertiary adamantane carbocation (as judged from its diminished rate of solvolysis) this difference is only a factor of 10. Constant or inverse relationships are just as frequent. For example, a group of 3- and 4-substituted pyridines, with reactivity quantified by their pKa, show the same selectivity in their reactions with a group of alkylating reagents. The reason for the early success of RSP was that the experiments involved very reactive intermediates with reactivities close to kinetic diffusion control, and as a result the more reactive intermediate appeared to react slower with the faster substrate. General relationships between reactivity and selectivity in chemical reactions can successfully be explained by Hammond's postulate. When reactivity–selectivity relationships do exist, they signify different reaction modes. 
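Because selectivity here is a ratio of rate constants, it maps directly onto a difference in activation free energies via transition-state theory, ΔΔG‡ = RT ln(k1/k2). A quick sketch using the two rate ratios from the carbocation example above (298 K is an assumed temperature, not from the source):

```python
import math

R, T = 8.314, 298.0  # gas constant J/(mol*K), assumed temperature K

def ddG_kJ(rate_ratio):
    """Activation free-energy difference implied by a ratio of rate constants, kJ/mol."""
    return R * T * math.log(rate_ratio) / 1000.0

# triphenylmethyl cation: reacts 100x faster with azide than with water;
# adamantyl cation: only a 10x preference
for name, ratio in [("triphenylmethyl", 100), ("adamantyl", 10)]:
    print(f"{name}: {ddG_kJ(ratio):.1f} kJ/mol")
```

The factor-of-100 selectivity corresponds to about 11 kJ/mol of discrimination in the transition state, the factor of 10 to roughly half that, which puts the qualitative claim about "selectivity" on an energetic scale.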
In one study the reactivity of two different free radical species (A, sulfur, B carbon) towards addition to simple alkenes such as acrylonitrile, vinyl acetate and acrylamide was examined.
{ "page_id": 4393843, "source": null, "title": "Reactivity–selectivity principle" }
The sulfur radical was found to be more reactive (6×10⁸ vs. 1×10⁷ M⁻¹·s⁻¹) and less selective (selectivity ratio 76 vs 1200) than the carbon radical. In this case, the effect can be explained by extending the Bell–Evans–Polanyi principle with a factor δ accounting for the transfer of charge from the reactants to the transition state of the reaction, which can be calculated in silico: E_a = E_o + α ΔH_r + β δ², with E_a the activation energy and ΔH_r the reaction enthalpy change. With the electrophilic sulfur radical the charge transfer is largest with electron-rich alkenes such as acrylonitrile but the resulting reduction in activation energy (β is negative) is offset by a reduced enthalpy. With the nucleophilic carbon radical on the other hand both enthalpy and polar effects have the same direction thus extending the activation energy range. == References == == External links == IUPAC Compendium of Chemical Terminology (Gold Book): Reactivity–selectivity principle
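The selectivity ratios quoted above are ratios of rate constants, and under the extended Bell–Evans–Polanyi expression a difference in activation energy translates into a rate ratio through the Boltzmann factor. A minimal sketch of that connection (the function names and all numerical inputs here are illustrative assumptions, not values from the study):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(e0, alpha, dH_r, beta, delta):
    """Extended Bell-Evans-Polanyi: Ea = E0 + alpha*dHr + beta*delta**2."""
    return e0 + alpha * dH_r + beta * delta ** 2

def selectivity(ea_1, ea_2, temp=298.15):
    """Rate-constant ratio k1/k2 from the Arrhenius exponential alone,
    assuming equal pre-exponential factors (an illustrative simplification)."""
    return math.exp((ea_2 - ea_1) / (R * temp))

# Two hypothetical substrates whose activation energies differ by 10 kJ/mol:
print(round(selectivity(40e3, 50e3)))  # ~56-fold preference at room temperature
```

Note how a modest 10 kJ/mol spread in activation energies already produces a selectivity ratio of the order seen for the sulfur radical above.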
{ "page_id": 4393843, "source": null, "title": "Reactivity–selectivity principle" }
Cetrimide agar is a type of agar used for the selective isolation of the gram-negative bacterium, Pseudomonas aeruginosa. As the name suggests, it contains cetrimide, which is the selective agent against alternate microbial flora. Cetrimide also enhances the production of Pseudomonas pigments such as pyocyanin and pyoverdine, which show a characteristic blue-green and yellow-green colour, respectively. Cetrimide agar is widely used in the examination of cosmetics, pharmaceuticals and clinical specimens to test for the presence of Pseudomonas aeruginosa. == References == == External links == Cetrimide Agar Description & Formulation
{ "page_id": 17238903, "source": null, "title": "Cetrimide agar" }
One-compartment kinetics for a chemical compound specifies that the uptake in the compartment is proportional to the concentration outside the compartment, and the elimination is proportional to the concentration inside the compartment. Both the compartment and the environment outside the compartment are considered to be homogeneous (well mixed). The compartment typically represents some organism (e.g. a fish or a daphnid). This model is used in the simplest versions of the DEBtox method for the quantification of effects of toxicants. == References == "One-compartment kinetics." British Journal of Anaesthesia. 1992 Oct;69(4):387-96.
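The model described above amounts to a single linear ODE, dC_in/dt = k_u·C_out − k_e·C_in, whose steady state is C_ss = k_u·C_out/k_e. A minimal sketch with made-up rate constants (the `simulate` helper and all parameter values are illustrative assumptions, not part of DEBtox):

```python
# One-compartment toxicokinetics sketch (hypothetical rate constants):
#   dC_in/dt = k_u * C_out - k_e * C_in
# Analytic steady state: C_ss = k_u * C_out / k_e

def simulate(c_out, k_u, k_e, t_end, dt=0.001):
    """Forward-Euler integration of the internal concentration from C_in = 0."""
    c_in = 0.0
    t = 0.0
    while t < t_end:
        c_in += dt * (k_u * c_out - k_e * c_in)
        t += dt
    return c_in

k_u, k_e, c_out = 2.0, 0.5, 1.0       # illustrative values, per unit time
c_ss = k_u * c_out / k_e              # analytic steady state = 4.0
c_sim = simulate(c_out, k_u, k_e, t_end=40.0)
print(round(c_sim, 3))                # approaches 4.0 after many half-lives
```

After a few elimination half-lives the simulated internal concentration converges on the analytic steady state, which is the behaviour the model is built around.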
{ "page_id": 5639039, "source": null, "title": "One-compartment kinetics" }
Copper carbonate may refer to: Copper(II) compounds and minerals Copper(II) carbonate proper, CuCO3 (neutral copper carbonate): a rarely seen moisture-sensitive compound. Basic copper carbonate (the "copper carbonate" of commerce), actually a copper carbonate hydroxide; which may be either Cu2CO3(OH)2: the green mineral malachite, verdigris, the pigment "green verditer" or "mountain green" Cu3(CO3)2(OH)2: the blue mineral azurite, and the pigment "blue verditer" or "mountain blue" Lapis armenus, a precious stone, a basic copper carbonate from Armenia Marklite, a hydrated copper carbonate mineral Copper(I) compounds Copper(I) carbonate, Cu2CO3
{ "page_id": 1313663, "source": null, "title": "Copper carbonate" }
In organic chemistry, amine value is a measure of the nitrogen content of an organic molecule. Specifically, it is usually used to measure the amine content of amine functional compounds. It may be defined as the number of milligrams of potassium hydroxide (KOH) equivalent to one gram of epoxy hardener resin. The units are thus mg KOH/g. == List of ASTM methods == There are a number of ASTM analytical test methods to determine amine value. A number of states in the United States have adopted their own test methods but they are based on ASTM methods. Although there are similarities with the method it is not the same as an acid value. ASTM D2073 - This is a potentiometric method. ASTM D2074-07 ASTM D2896 - potentiometric method with perchloric acid. ASTM D6979-03 == First principles == The amine value is useful in helping determine the correct stoichiometry of a two-component amine-cured epoxy resin system. It is the number of nitrogen atoms × 56.1 (the molar mass of KOH) × 1000 (to convert to milligrams) divided by the molecular mass of the amine-functional compound. Using tetraethylenepentamine (TEPA) as an example: molar mass = 189, number of nitrogen atoms = 5, so 5 × 56.1 × 1000 / 189 = 1484. The amine value of TEPA is therefore 1484. === Other amines === All numbers are in units of mg KOH/g. Ethylenediamine. Amine value = 1870 Diethylenetriamine. Amine value = 1634 Triethylenetetramine. Amine value = 1537 Aminoethylpiperazine. Amine value = 1305 Isophorone diamine. Amine value = 660 Hexamethylenediamine. Amine value = 967 1,2-Diaminocyclohexane. Amine value = 984 1,3-BAC. Amine value = 790 2-Methylpentamethylenediamine -Dytek A. Amine value = 967 m-Xylylenediamine -MXDA. Amine value = 825 == See also-related test methods == Acid value Bromine number Epoxy value Hydroxyl value Iodine value Peroxide value Saponification value
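The first-principles calculation above is easy to script. A short sketch (the function name is my own) that reproduces the TEPA worked example:

```python
# Amine value (mg KOH/g) from first principles, as described above:
#   number of N atoms * 56.1 (molar mass of KOH) * 1000 / molar mass of amine

MW_KOH = 56.1  # g/mol

def amine_value(n_nitrogens, molar_mass):
    """Theoretical amine value in mg KOH per gram of compound."""
    return n_nitrogens * MW_KOH * 1000 / molar_mass

# Tetraethylenepentamine (TEPA): 5 nitrogen atoms, molar mass ~189 g/mol
print(round(amine_value(5, 189)))  # 1484, matching the worked example
```

Plugging in the other amines listed (e.g. ethylenediamine: 2 nitrogens, ~60.1 g/mol) gives values close to the table above; small differences come from rounding of molar masses.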
{ "page_id": 68291457, "source": null, "title": "Amine value" }
== References == == Further reading == "Amines | Introduction to Chemistry". courses.lumenlearning.com. Retrieved 2021-07-22. Epoxy resin technology. Paul F. Bruins, Polytechnic Institute of Brooklyn. New York: Interscience Publishers. 1968. ISBN 0-470-11390-1. OCLC 182890.{{cite book}}: CS1 maint: others (link) Flick, Ernest W. (1993). Epoxy resins, curing agents, compounds, and modifiers : an industrial guide. Park Ridge, NJ. ISBN 978-0-8155-1708-5. OCLC 915134542.{{cite book}}: CS1 maint: location missing publisher (link) Lee, Henry (1967). Handbook of epoxy resins. Kris Neville ([2nd, expanded work] ed.). New York: McGraw-Hill. ISBN 0-07-036997-6. OCLC 311631322. == External links == The chemistry of epoxide Synthesis of amines
{ "page_id": 68291457, "source": null, "title": "Amine value" }
Hot potassium carbonate, HPC, is a method used to remove carbon dioxide from gas mixtures, in some contexts referred to as carbon scrubbing. The inorganic, basic compound potassium carbonate is mixed with a gas mixture and the liquid absorbs carbon dioxide through chemical processes. The technology is a form of chemical absorption, and was developed for natural gas sweetening (i.e., removal of acidic components from raw natural gas). Currently it is also being considered, among other options, as a post-combustion capture process, in the contexts of carbon capture and storage and carbon capture and utilization. As a post-combustion CO2 capture process, the technology is planned to be used at full scale at a heat plant in Stockholm from 2025. == References ==
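The underlying absorption chemistry is the classic carbonate/bicarbonate reaction, K2CO3 + CO2 + H2O → 2 KHCO3. As a rough sketch of what that 1:1 stoichiometry implies for solvent capacity (idealized full conversion is assumed; real solvents cycle between partial loadings, and the helper name is mine):

```python
# Stoichiometry of hot potassium carbonate absorption:
#   K2CO3 + CO2 + H2O -> 2 KHCO3
# Theoretical CO2 uptake per kilogram of K2CO3 at full conversion.

MW_K2CO3 = 138.21  # g/mol
MW_CO2 = 44.01     # g/mol

def co2_capacity_per_kg():
    """kg of CO2 absorbed per kg of K2CO3, assuming 1:1 stoichiometry."""
    return MW_CO2 / MW_K2CO3

print(round(co2_capacity_per_kg(), 3))  # ~0.318 kg CO2 per kg K2CO3
```

This is only an upper bound; practical rich/lean loading differences in an absorber-stripper loop are a fraction of the stoichiometric maximum.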
{ "page_id": 60623745, "source": null, "title": "Hot potassium carbonate" }
In physics and chemistry, a selection rule, or transition rule, formally constrains the possible transitions of a system from one quantum state to another. Selection rules have been derived for electromagnetic transitions in molecules, in atoms, in atomic nuclei, and so on. The selection rules may differ according to the technique used to observe the transition. The selection rule also plays a role in chemical reactions, where some are formally spin-forbidden reactions, that is, reactions where the spin state changes at least once from reactants to products. In the following, mainly atomic and molecular transitions are considered. == Overview == In quantum mechanics the basis for a spectroscopic selection rule is the value of the transition moment integral m₁,₂ = ∫ ψ₁* μ ψ₂ dτ, where ψ₁ and ψ₂ are the wave functions of the two states, "state 1" and "state 2", involved in the transition, and μ is the transition moment operator. This integral represents the propagator (and thus the probability) of the transition between states 1 and 2; if the value of this integral is zero then the transition is "forbidden". In practice, to determine a selection rule the integral itself does not need to be calculated: It is sufficient to determine the symmetry of the transition moment function ψ₁* μ ψ₂. If the transition moment function is symmetric over all of the totally symmetric representation of the point group to which the atom or molecule belongs, then the integral's value is (in general) not zero and the transition is allowed. Otherwise, the transition is "forbidden". The transition moment integral is zero if the
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
transition moment function, ψ₁* μ ψ₂, is anti-symmetric or odd, i.e. y(x) = −y(−x) holds. The symmetry of the transition moment function is the direct product of the parities of its three components. The symmetry characteristics of each component can be obtained from standard character tables. Rules for obtaining the symmetries of a direct product can be found in texts on character tables. == Examples == === Electronic spectra === The Laporte rule is a selection rule formally stated as follows: In a centrosymmetric environment, transitions between like atomic orbitals such as s–s, p–p, d–d, or f–f are forbidden. The Laporte rule (law) applies to electric dipole transitions, so the operator has u symmetry (meaning ungerade, odd). p orbitals also have u symmetry, so the symmetry of the transition moment function is given by the product (formally, the product is taken in the group) u×u×u, which has u symmetry. The transitions are therefore forbidden. Likewise, d orbitals have g symmetry (meaning gerade, even), so the triple product g×u×g also has u symmetry and the transition is forbidden. The wave function of a single electron is the product of a space-dependent wave function and a spin wave function. Spin is directional and can be said to have odd parity. It follows that transitions in which the spin "direction" changes are forbidden. In formal terms, only states with the same total spin quantum number are "spin-allowed". In crystal field theory, d–d transitions that are spin-forbidden are much weaker than spin-allowed transitions. Both can be observed, in spite of the Laporte rule, because the actual transitions are coupled to vibrations that are anti-symmetric and have the same symmetry as the dipole moment operator.
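The parity argument can be checked numerically. For a one-dimensional harmonic oscillator with an electric-dipole operator μ ∝ x, the integrand ψ_v'* x ψ_v is odd whenever the two states have the same parity, so the transition moment integral vanishes; this is the origin of the Δv = ±1 rule for vibrational transitions. A small sketch (unnormalized dimensionless wavefunctions and trapezoid quadrature; all names here are my own, not standard API):

```python
import math

# Unnormalized harmonic-oscillator wavefunctions in a dimensionless coordinate:
# psi_v(x) = H_v(x) * exp(-x^2/2), using the first few Hermite polynomials.
def psi(v, x):
    hermite = {0: 1.0, 1: 2 * x, 2: 4 * x * x - 2}[v]
    return hermite * math.exp(-x * x / 2)

def transition_moment(v1, v2, n=20001, half_width=8.0):
    """Trapezoid-rule estimate of <v1| x |v2> on [-half_width, half_width]."""
    dx = 2 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * dx
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * psi(v1, x) * x * psi(v2, x) * dx
    return total

print(round(transition_moment(0, 1), 3))    # 1.772 (= sqrt(pi)): nonzero, allowed
print(abs(transition_moment(0, 2)) < 1e-6)  # True: same parity, integral vanishes
```

The 0→1 moment is finite (opposite-parity states, even integrand), while the 0→2 moment integrates to zero (same-parity states, odd integrand), exactly as the symmetry rule predicts without evaluating anything.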
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
=== Vibrational spectra === In vibrational spectroscopy, transitions are observed between different vibrational states. In a fundamental vibration, the molecule is excited from its ground state (v = 0) to the first excited state (v = 1). The symmetry of the ground-state wave function is the same as that of the molecule. It is, therefore, a basis for the totally symmetric representation in the point group of the molecule. It follows that, for a vibrational transition to be allowed, the symmetry of the excited state wave function must be the same as the symmetry of the transition moment operator. In infrared spectroscopy, the transition moment operator transforms as either x and/or y and/or z. The excited state wave function must also transform as at least one of these vectors. In Raman spectroscopy, the operator transforms as one of the second-order terms in the right-most column of the character table, below. The molecule methane, CH4, may be used as an example to illustrate the application of these principles. The molecule is tetrahedral and has Td symmetry. The vibrations of methane span the representations A1 + E + 2T2. Examination of the character table shows that all four vibrations are Raman-active, but only the T2 vibrations can be seen in the infrared spectrum. In the harmonic approximation, it can be shown that overtones are forbidden in both infrared and Raman spectra. However, when anharmonicity is taken into account, the transitions are weakly allowed. In Raman and infrared spectroscopy, the selection rules predict certain vibrational modes to have zero intensities in the Raman and/or the IR. Displacements from the ideal structure can result in relaxation of the selection rules and appearance of these unexpected phonon modes in the spectra. Therefore, the appearance of new modes in the spectra can be a useful indicator
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
of symmetry breakdown. === Rotational spectra === The selection rule for rotational transitions, derived from the symmetries of the rotational wave functions in a rigid rotor, is ΔJ = ±1, where J is a rotational quantum number. === Coupled transitions === There are many types of coupled transition such as are observed in vibration–rotation spectra. The excited-state wave function is the product of two wave functions such as vibrational and rotational. The general principle is that the symmetry of the excited state is obtained as the direct product of the symmetries of the component wave functions. In rovibronic transitions, the excited states involve three wave functions. The infrared spectrum of hydrogen chloride gas shows rotational fine structure superimposed on the vibrational spectrum. This is typical of the infrared spectra of heteronuclear diatomic molecules. It shows the so-called P and R branches. The Q branch, located at the vibration frequency, is absent. Symmetric top molecules display the Q branch. This follows from the application of selection rules. Resonance Raman spectroscopy involves a kind of vibronic coupling. It results in much-increased intensity of fundamental and overtone transitions as the vibrations "steal" intensity from an allowed electronic transition. In spite of appearances, the selection rules are the same as in Raman spectroscopy. === Angular momentum === In general, electric (charge) radiation or magnetic (current, magnetic moment) radiation can be classified into multipoles Eλ (electric) or Mλ (magnetic) of order 2λ, e.g., E1 for electric dipole, E2 for quadrupole, or E3 for octupole. In transitions where the change in angular momentum between the initial and final states makes several multipole radiations possible, usually the lowest-order multipoles are overwhelmingly more likely, and dominate the transition. The emitted particle carries away angular momentum, with quantum number λ, which for the photon must be at least 1,
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
since it is a vector particle (i.e., it has J^P = 1⁻). Thus, there is no radiation from E0 (electric monopoles) or M0 (magnetic monopoles, which do not seem to exist). Since the total angular momentum has to be conserved during the transition, we have that J_i = J_f + λ (as vectors), where ‖λ‖ = √(λ(λ+1)) ħ and its z projection is given by λ_z = μħ, and where J_i and J_f are, respectively, the initial and final angular momenta of the atom. The corresponding quantum numbers λ and μ (z-axis angular momentum) must satisfy |J_i − J_f| ≤ λ ≤ J_i + J_f and μ = M_i − M_f. Parity is also preserved. For electric multipole transitions π(Eλ) = π_i π_f = (−1)^λ, while for magnetic multipoles π(Mλ) = π_i π_f = (−1)^(λ+1). Thus, parity does not change for E-even or M-odd multipoles, while it changes for E-odd or M-even multipoles. These considerations generate different sets of transition rules depending on the multipole order and type. The expression forbidden transitions is often used, but this does not mean that these transitions cannot occur, only that they are electric-dipole-forbidden. These transitions are perfectly possible; they merely occur
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
at a lower rate. If the rate for an E1 transition is non-zero, the transition is said to be permitted; if it is zero, then M1, E2, etc. transitions can still produce radiation, albeit with much lower transition rates. The transition rate decreases by a factor of about 1000 from one multipole to the next, so the lowest multipole transitions are most likely to occur. Semi-forbidden transitions (resulting in so-called intercombination lines) are electric dipole (E1) transitions for which the selection rule that the spin does not change is violated. This is a result of the failure of LS coupling. ==== Summary table ==== J = L + S is the total angular momentum, L is the azimuthal quantum number, S is the spin quantum number, and M_J is the secondary total angular momentum quantum number. Which transitions are allowed is based on the hydrogen-like atom. The symbol ↮ is used to indicate a forbidden transition. In hyperfine structure, the total angular momentum of the atom is F = I + J, where I is the nuclear spin angular momentum and J is the total angular momentum of the electron(s). Since F = I + J has a similar mathematical form as J = L + S, it obeys a selection rule table similar to the table above. === Surface === In surface vibrational spectroscopy, the surface selection rule is applied to identify the peaks observed in vibrational spectra. When a molecule is adsorbed on a substrate, the molecule induces opposite image charges in the substrate. The dipole moment of the molecule and the image charges perpendicular to the surface reinforce each other. In
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
contrast, the dipole moments of the molecule and the image charges parallel to the surface cancel out. Therefore, only molecular vibrational peaks giving rise to a dynamic dipole moment perpendicular to the surface will be observed in the vibrational spectrum. == See also == Superselection rule Spin-forbidden reactions Singlet fission == Notes == == References == Harris, D. C.; Bertolucci, M. D. (1978). Symmetry and Spectroscopy. Oxford University Press. ISBN 0-19-855152-5. Cotton, F. A. (1990). Chemical Applications of Group Theory (3rd ed.). Wiley. ISBN 978-0-471-51094-9. == Further reading == Stanton, L. (1973). "Selection rules for pure rotation and vibration-rotation hyper-Raman spectra". Journal of Raman Spectroscopy. 1 (1): 53–70. Bibcode:1973JRSp....1...53S. doi:10.1002/jrs.1250010105. Bower, D. I.; Maddams, W. F. (1989). "Section 4.1.5: Selection rules for Raman activity". The vibrational spectroscopy of polymers. Cambridge University Press. ISBN 0-521-24633-4. Sherwood, P. M. A. (1972). "Chapter 4: The interaction of radiation with a crystal". Vibrational Spectroscopy of Solids. Cambridge University Press. ISBN 0-521-08482-2. == External links == National Institute of Standards and Technology Lecture notes from The University of Sheffield
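The angular-momentum and parity constraints for multipole radiation can be wrapped into a small helper. This is a sketch of the rules as stated above (the function and its argument names are my own framing): a photon multipole of order λ ≥ 1 connects J_i to J_f only if the triangle condition |J_i − J_f| ≤ λ ≤ J_i + J_f holds, with parity change (−1)^λ for Eλ and (−1)^(λ+1) for Mλ.

```python
def allowed_multipoles(j_i, j_f, parity_change, max_lambda=3):
    """Return the multipole labels (e.g. 'E1', 'M2') allowed for a transition
    between total angular momenta j_i and j_f.  parity_change is True when
    the initial and final states have opposite parity."""
    allowed = []
    for lam in range(1, max_lambda + 1):
        # Triangle condition on the photon angular momentum:
        if not (abs(j_i - j_f) <= lam <= j_i + j_f):
            continue
        # E-lambda changes parity iff lambda is odd:
        if parity_change == ((-1) ** lam == -1):
            allowed.append(f"E{lam}")
        # M-lambda changes parity iff lambda is even:
        if parity_change == ((-1) ** (lam + 1) == -1):
            allowed.append(f"M{lam}")
    return allowed

print(allowed_multipoles(1, 0, parity_change=True))   # ['E1']: dipole-allowed
print(allowed_multipoles(1, 0, parity_change=False))  # ['M1']: E1-forbidden
```

A 2 → 0 transition with a parity change, for instance, admits only M2 under these rules, illustrating why such lines are weak: the lowest available multipole is already a high-order one.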
{ "page_id": 1313664, "source": null, "title": "Selection rule" }
In biology, a taxon (back-formation from taxonomy; pl.: taxa) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking, especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion, especially in the context of rank-based ("Linnaean") nomenclature (much less so under phylogenetic nomenclature). If a taxon is given a formal scientific name, its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping. Initial attempts at classifying and ordering organisms (plants and animals) were presumably set forth in prehistoric times by hunter-gatherers, as suggested by the fairly sophisticated folk taxonomies. Much later, Aristotle, and later still, European scientists like Magnol and Tournefort, Carl Linnaeus's system in Systema Naturae, 10th edition (1758), as well as an unpublished work by Bernard and Antoine Laurent de Jussieu, contributed to this field. The idea of a unit-based system of biological classification was first made widely available in 1805 in the introduction of Jean-Baptiste Lamarck's Flore françoise, and Augustin Pyramus de Candolle's Principes élémentaires de botanique. Lamarck set out a system for the "natural classification" of plants. Since then, systematists continue to construct accurate classifications encompassing the diversity of life; today, a "good" or "useful" taxon is commonly taken to be one that reflects evolutionary relationships. Many modern systematists, such as advocates of phylogenetic nomenclature, use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Therefore, their basic unit, the clade, is equivalent to the taxon, assuming that taxa should
{ "page_id": 199556, "source": null, "title": "Taxon" }
reflect evolutionary relationships. Similarly, among those contemporary taxonomists working with the traditional Linnean (binomial) nomenclature, few propose taxa they know to be paraphyletic. An example of a long-established taxon that is not also a clade is the class Reptilia, the reptiles; birds and mammals are the descendants of animals traditionally classed as reptiles, but neither is included in the Reptilia (birds are traditionally placed in the class Aves, and mammals in the class Mammalia). == History == The term taxon was first used in 1926 by Adolf Meyer-Abich for animal groups, as a back-formation from the word taxonomy; the word taxonomy had been coined a century before from the Greek components τάξις (táxis), meaning "arrangement", and νόμος (nómos), meaning "method". For plants, it was proposed by Herman Johannes Lam in 1948, and it was adopted at the VII International Botanical Congress, held in 1950. == Definition == The glossary of the International Code of Zoological Nomenclature (1999) defines a "taxon, (pl. taxa), n. A taxonomic unit, whether named or not: i.e. a population, or group of populations of organisms which are usually inferred to be phylogenetically related and which have characters in common which differentiate (q.v.) the unit (e.g. a geographic population, a genus, a family, an order) from other such units. A taxon encompasses all included taxa of lower rank (q.v.) and individual organisms. [...]" == Ranks == A taxon can be assigned a taxonomic rank, usually (but not necessarily) when it is given a formal name. "Phylum" applies formally to any biological domain, but traditionally it was always used for animals, whereas "division" was traditionally often used for plants, fungi, etc. A prefix is used to indicate a ranking of lesser importance. The prefix super- indicates a rank above, the prefix sub- indicates a rank below. In zoology,
{ "page_id": 199556, "source": null, "title": "Taxon" }
the prefix infra- indicates a rank below sub-. For instance, among the additional ranks of class are superclass, subclass and infraclass. Rank is relative, and restricted to a particular systematic schema. For example, liverworts have been grouped, in various systems of classification, as a family, order, class, or division (phylum). The use of a narrow set of ranks is challenged by users of cladistics; for example, the mere 10 ranks traditionally used between animal families (governed by the International Code of Zoological Nomenclature (ICZN)) and animal phyla (usually the highest relevant rank in taxonomic work) often cannot adequately represent the evolutionary history as more about a lineage's phylogeny becomes known. In addition, the class rank is quite often not an evolutionary but a phenetic or paraphyletic group and as opposed to those ranks governed by the ICZN (family-level, genus-level and species-level taxa), can usually not be made monophyletic by exchanging the taxa contained therein. This has given rise to phylogenetic taxonomy and the ongoing development of the PhyloCode, which has been proposed as a new alternative to replace Linnean classification and govern the application of names to clades. Many cladists do not see any need to depart from traditional nomenclature as governed by the ICZN, International Code of Nomenclature for algae, fungi, and plants, etc. == See also == ABCD Schema Alpha taxonomy Chresonym Cladistics Folk taxonomy Ichnotaxon International Code of Nomenclature for algae, fungi, and plants International Code of Nomenclature of Prokaryotes International Code of Phylogenetic Nomenclature International Code of Zoological Nomenclature (ICZN) List of taxa named by anagrams Rank (botany) Rank (zoology) Segregate (taxonomy) Virus classification Wastebasket taxon == Notes == == References == == External links == The dictionary definition of taxon at Wiktionary
{ "page_id": 199556, "source": null, "title": "Taxon" }
Physics of Plasmas is a peer-reviewed monthly scientific journal on plasma physics published by the American Institute of Physics, with cooperation by the American Physical Society's Division of Plasma Physics, since 1994. Until 1988, the journal topic was covered by Physics of Fluids. From 1989 until 1993, Physics of Fluids was split into Physics of Fluids A covering fluid dynamics and Physics of Fluids B dedicated to plasma physics. In 1994, Physics of Plasmas was split off as a separate journal. == External links == Official website
{ "page_id": 32246658, "source": null, "title": "Physics of Plasmas" }
The Sheth–Tormen approximation is a halo mass function. == Background == The Sheth–Tormen approximation extends the Press–Schechter formalism by assuming that halos are not necessarily spherical, but merely elliptical. The distribution of the density fluctuation is as follows: f(σ_r) = A √(2a/π) [1 + (σ_r²/(a δ_c²))^0.3] (δ_c/σ_r) exp(−a δ_c²/(2σ_r²)), where δ_c = 1.686, a = 0.707, and A = 0.3222. The parameters were empirically obtained from the five-year release of WMAP. == Discrepancies with simulations == In 2010, the Bolshoi cosmological simulation predicted that the Sheth–Tormen approximation is inaccurate for the most distant objects. Specifically, the Sheth–Tormen approximation overpredicts the abundance of haloes by a factor of 10 for objects with a redshift z > 10, but is accurate at low redshifts. == References ==
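With the quoted parameters the multiplicity function is straightforward to evaluate; a sketch (the function and variable names are mine):

```python
import math

# Sheth-Tormen multiplicity function with the parameters quoted above.
DELTA_C = 1.686
A_ST = 0.3222
a_ST = 0.707

def f_st(sigma_r):
    """f(sigma_r) = A sqrt(2a/pi) [1 + (sigma_r^2/(a dc^2))^0.3]
                    (dc/sigma_r) exp(-a dc^2 / (2 sigma_r^2))."""
    nu = DELTA_C / sigma_r
    return (A_ST * math.sqrt(2 * a_ST / math.pi)
            * (1 + (1 / (a_ST * nu ** 2)) ** 0.3)
            * nu
            * math.exp(-a_ST * nu ** 2 / 2))

print(round(f_st(1.686), 3))  # 0.32 at sigma_r = delta_c (i.e. nu = 1)
```

For small σ_r (rare, massive haloes) the exponential term dominates and f falls off steeply, which is the regime where the Bolshoi comparison found the formula overpredicting abundances.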
{ "page_id": 70912899, "source": null, "title": "Sheth–Tormen approximation" }
In chemistry, an alkali (from the Arabic word al-qāly, القالِي) is a basic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often, alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases. == Etymology == The word alkali is derived from Arabic al qalīy (or alkali), meaning 'the calcined ashes' (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes that rendered soaps from fats by saponification, a process known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali. == Common properties of alkalis and bases == Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include: Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink. Concentrated solutions
{ "page_id": 2955, "source": null, "title": "Alkali" }
are caustic (causing chemical burns). Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin. Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution. == Difference between alkali and base == The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering. There are various, more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen. A basic salt of an alkali metal or alkaline earth metal (this includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia)). Any base that is soluble in water and forms hydroxide ions or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.) The second subset of bases is also called an "Arrhenius base". == Alkali salts == Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are: Sodium hydroxide (NaOH) – often called "caustic soda" Potassium hydroxide (KOH) – commonly called "caustic potash" Lye – generic term for either of two previous salts or their mixture Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater" Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions) == Alkaline soil == Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally due to the presence of alkali salts. Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer
{ "page_id": 2955, "source": null, "title": "Alkali" }
mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems. == Alkali lakes == In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake. Examples of alkali lakes: Alkali Lake, Lake County, Oregon Baldwin Lake, San Bernardino County, California Bear Lake on the Utah–Idaho border Lake Magadi in Kenya Lake Turkana in Kenya Mono Lake, near Owens Valley in California Redberry Lake, Saskatchewan Summer Lake, Lake County, Oregon Tramping Lake, Saskatchewan == See also == Alkali manufacture Alkali metals Alkaline earth metals Alkaline magma series Base (chemistry) == References ==
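The claim earlier that moderately concentrated alkaline solutions (over 10⁻³ M) have a pH of 10 or greater follows directly from pH = 14 − pOH for a fully dissociated strong base at 25 °C. A quick sketch (the helper name is mine; activity corrections and water autoionization are deliberately neglected):

```python
import math

def ph_strong_base(concentration_molar):
    """pH of a fully dissociated strong monoprotic base (e.g. NaOH) at 25 C,
    neglecting activity corrections and water autoionization."""
    poh = -math.log10(concentration_molar)  # [OH-] equals the base concentration
    return 14.0 - poh

print(round(ph_strong_base(1e-3), 2))  # 11.0, consistent with "pH of 10 or greater"
```

At 10⁻³ M the idealized pH is 11; weaker or partially dissociated bases (such as ammonia solutions) sit below this estimate at the same nominal concentration.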
{ "page_id": 2955, "source": null, "title": "Alkali" }
Aquatic plant management involves the science and methodologies used to control invasive and non-invasive aquatic plant species in waterways. Methods used include spraying herbicide, biological controls, and mechanical removal, as well as habitat modification. Preventing the introduction of invasive species is ideal. Aquaculture has been a source of exotic and ultimately invasive species introductions, such as Oreochromis niloticus. Aquatic plants released from home fish tanks have also been an issue. == Impact == Aquatic weeds are most economically problematic where human activity and water meet. Water weeds reduce capacity for hydroelectric generation, drinking water supply, industrial water supply, agricultural water supply, and recreational use of water bodies including recreational boating. Some weeds cause harm by increasing, rather than decreasing, evaporation loss at the surface. Particular weeds also have special relationships with aquatic insects that make the plants a source of insect pests. == Organizations == In Florida the Florida Fish and Wildlife Conservation Commission (FWC) has an aquatic plant management section. The State of Washington has an Aquatic Plant Management Program. The Aquatic Plant Management Society is an organization in the U.S. and publishes the Journal of Aquatic Plant Management. The City of Winter Park, Florida has a herbicide program. == Species == Invasive aquatic species include: Eichhornia crassipes (water hyacinth), invasive outside its native habitat in the Amazon Basin Hydrilla, invasive in North America Limnobium laevigatum, invasive in the U.S. Myriophyllum spicatum, invasive in North America Myriophyllum verticillatum, invasive in North America Monochoria vaginalis, invasive outside its native habitat in Asia and the Pacific Pistia Salvinia molesta == Aquatic plant harvesting methods == Harvesting refers to anthropogenic removal of aquatic plants from their environment. 
Aquatic plant harvesting is often done to clear waters for navigation and recreation, as well as for the purpose
{ "page_id": 60099468, "source": null, "title": "Aquatic plant management" }
of ridding the environment of invasive plant species. However, this aquatic plant management style can also have negative effects on the environment, such as harming non-target plants and animals, increasing turbidity, and potentially spreading invasive plants via fragmentation. There are multiple plant removal methods available depending on the purpose of removal, the habitat of the plant, the animals surrounding the plants, as well as the density, access point, and species of the plant. Plant removal methods consist of: pulling by hand, mechanical cutting, cut and grinding, suction harvesting, rototilling, and hydro-raking. Mechanical cutting is the most common method of aquatic plant harvesting. This is an efficient method that can cover a large area. Removing large amounts of plants from the water can have a positive impact on the daily oxygen levels in shallow aquatic environments. Mechanical cutting has a short-term effect, which makes it a good method for harvesting nutrients and promoting regrowth of the plants. However, the equipment used for cutting is expensive, and this method is also nonselective, often damaging non-target plants, habitats, and animals. This method of harvesting has a tendency to remove large portions of macroinvertebrate, semi-aquatic vertebrate, and fish populations. Cutting also allows the possibility of further spreading plants that reproduce via fragmentation. Mechanical cutting is commonly used in heavily infested areas because of its speed and efficiency; however, this leaves behind large amounts of dead plants free floating in the environment. Leaving large mats of cut plants in the water can have negative effects on the aquatic environment by providing obstacles for animals, reducing sunlight for remaining plants, creating buildup on shorelines, and degrading water quality. Cutting is often performed using harvesters with a sickle-bar cutting blade on the back. Mechanical cutting is often paired with
{ "page_id": 60099468, "source": null, "title": "Aquatic plant management" }
harvesting boats that collect the dead plants or use a conveyor belt to load cut plants onto the boat. The cut and grind method is a highly efficient method of harvesting with the disposal of dead plants included. This method also mechanically cuts large amounts of plants at a time, then grinds the plants and discharges the material back into the lake. This method is best for bodies of water with chronic invasive plant problems in which plant disposal must be considered. Grinding plants minimizes the need for any extra boats or disposal methods to manage the cut plants. However, this method shares the same drawbacks as mechanical cutting. It is a nonselective, short-term solution that can resuspend sediment. Although different from standard cutting, the grinding of the plants still leaves large masses of plant material in the water, creating negative effects in the remaining environment. The rotovating method uses rotating blades to uproot plants from the sediment. Rotovating is more likely to remove the entire plant, including the roots, with an intermediate-term effect on regrowth. This method is effective but requires expensive equipment and has negative effects on the environment. Rotovating is nonselective, and it may spread plants via fragmentation and suspend excess amounts of sediment. Rotovating is an efficient process but requires a separate disposal method. Hydro-raking works similarly to rotovating. A backhoe is used to target the roots and rip the plant out of the sediment, followed by a rake to remove the vegetation. This method works best for thick, difficult-to-remove plants, and is effective for long-term removal since the roots are removed. Hydro-raking holds the same challenges as rotovating, with the potential to indirectly spread species, damage more plants than necessary, and create turbidity by suspending sediment. Pulling by hand or suction harvesting are
{ "page_id": 60099468, "source": null, "title": "Aquatic plant management" }
diver/snorkeler-operated, highly selective methods of removing aquatic plants. Individuals manually pull or vacuum the entire plant from the sediment. Vacuum suction removes the entire plant (stem, leaves, roots), including the surrounding sediment, from the floor of the aquatic environment. This provides a long-term effect with minimal regrowth of the plants. Manual removal is a slow, inefficient process that is often only performed on small vegetative communities in underdeveloped areas. Suction harvesting requires more technology and is more expensive. Pulling by hand is more cost-effective; however, it runs the risk of suspending excess sediment, a risk that suction harvesting avoids. == See also == Aquatic ecosystem GIS and aquatic science == References == == Further reading == A Guide to Aquatic Plants: Identification & Management by David F. Fink, Ecological Services Section, Minnesota Department of Natural Resources, 1997 Aquatic weeds: the ecology and management of nuisance aquatic vegetation by A. H. Pieterse, Kevin J. Murphy, Oxford University Press, Aug 9, 1990 == External links == Identifying and Managing Aquatic Plants from the Purdue Extension office
{ "page_id": 60099468, "source": null, "title": "Aquatic plant management" }
The molecular formula C26H42O3 (molar mass: 402.61 g/mol, exact mass: 402.3134 u) may refer to: Androstanolone enanthate (DHTH), also known as stanolone enanthate CP 55,244
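The quoted molar mass can be reproduced directly from standard atomic weights. The short sketch below (an illustration added here, not part of the original entry) sums the per-element contributions:

```python
# Recompute the molar mass of C26H42O3 from standard atomic weights (g/mol).
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "O": 15.9994}

def molar_mass(composition):
    """Sum atomic weights over an element -> atom-count mapping."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

mass = molar_mass({"C": 26, "H": 42, "O": 3})
print(round(mass, 2))  # 402.61, matching the value quoted above
```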
{ "page_id": 61016973, "source": null, "title": "C26H42O3" }
Cyclodipeptide synthases (CDPSs) are a newly defined family of peptide-bond-forming enzymes responsible for the ribosome-independent biosynthesis of various cyclodipeptides, which are the precursors of many natural products with important biological activities. As substrates for this synthesis, CDPSs use two amino acids activated as aminoacyl-tRNAs (aa-tRNAs), thereby diverting them from the ribosomal machinery. The first member of this family was identified in 2002 during the characterization of the albonoursin biosynthetic pathway in Streptomyces noursei. CDPSs are present in bacteria, fungi, and animal cells. == History and research == Since 2002, when the first CDPS was described, the number of reported CDPSs in databases has grown significantly (800 in June 2017). It is probable that these cyclodipeptides are implicated in numerous biosynthetic pathways. However, the diversity of their products has not been thoroughly explored. The activity of 32 new CDPSs has been described, raising the number of experimentally characterized CDPSs to approximately 100. Moreover, this research has identified several consensus sequences associated with the formation of a specific cyclodipeptide, enhancing the predictive model of CDPS specificity. This improved prediction method facilitates the deciphering of independent CDPS pathways. == Structure == CDPSs do not share a single specific structure, given that each one has its own specific function, but they still have common architectures, such as a Rossmann-fold domain. CDPSs are monomers that display a strong structural similarity to the catalytic domains of class Ic aminoacyl-tRNA synthetases: both families, CDPSs and class Ic aaRSs, have a Rossmann-fold domain, and their structures can be superimposed, showing many structural analogies. 
CDPSs characteristically feature a deep surface-accessible pocket bordered by the catalytic residues, which is where the catalysis of amide bond formation takes place. This structure is positioned similarly
{ "page_id": 65735565, "source": null, "title": "Cyclodipeptide synthases" }
to the aminoacyl-binding pocket in aaRSs, which suggests that CDPSs evolved from class Ic aaRSs. CDPSs and aaRSs nevertheless present substantial differences, such as the absence of ATP-binding motifs in CDPSs, since CDPSs, unlike aaRSs, use amino acids that have already been activated. == Catalytic reaction: How do CDPSs synthesize cyclodipeptides? == It was first thought that nonribosomal peptide synthetases (NRPSs) were responsible for the construction of CDPs, either through specific biosynthetic pathways or through the premature release of dipeptidyl intermediates during the elongation process. With the discovery of AlbC, an enzyme able to specifically create CDPs using loaded tRNAs as substrates, it became clear that a second route for cyclodipeptide production exists. CDPSs' catalytic cycle begins with the binding of the first aa-tRNA, whose aminoacyl moiety is transferred onto a conserved serine residue to form an aminoacyl-enzyme intermediate. The second aa-tRNA interacts with this intermediate so that its aminoacyl moiety is transferred to the aminoacyl-enzyme, forming a dipeptidyl-enzyme intermediate. Finally, the dipeptidyl undergoes an intramolecular cyclization, yielding the final cyclodipeptide. == Classification == CDPSs can be divided into two distinct subfamilies, named NYH and XYP, distinguished by the conserved residues within their respective active sites, which allow prediction of their aminoacyl-tRNA substrates. The two subfamilies differ mainly in the first half of their Rossmann fold; these two structures correspond to two different structural solutions for facilitating the reactivity of the catalytic serine residue. Some crystal structures of NYH CDPSs have been determined; these structures contain a Rossmann-fold domain. NYH forms a larger group than XYP; therefore, there is more information about it than about the XYP subfamily. 
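The three-step catalytic cycle just described (aminoacyl-enzyme intermediate, dipeptidyl-enzyme intermediate, cyclization) can be summarised in a short didactic sketch. This is purely illustrative pseudomodelling of the mechanism, not code from any bioinformatics library; the function name and representation are invented here:

```python
# Didactic model of the CDPS catalytic cycle described above.
def cdps_catalytic_cycle(aa1, aa2):
    # Step 1: the first aa-tRNA binds; its aminoacyl moiety is transferred
    # onto the conserved catalytic serine (aminoacyl-enzyme intermediate).
    aminoacyl_enzyme = ("Ser", aa1)
    # Step 2: the second aa-tRNA reacts with the intermediate, transferring
    # its aminoacyl moiety to form a dipeptidyl-enzyme intermediate.
    dipeptidyl_enzyme = aminoacyl_enzyme + (aa2,)
    # Step 3: intramolecular cyclization releases the cyclodipeptide
    # and regenerates the free enzyme.
    return f"cyclo({aa1}-{aa2})"

# AlbC, the first characterized CDPS, produces the albonoursin precursor:
print(cdps_catalytic_cycle("Phe", "Leu"))  # cyclo(Phe-Leu)
```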
== Biosynthesis == CDPS-encoding genes are found in genomic locations with genes encoding additional biosynthetic enzymes (CDPS DmtB1 is an example,
{ "page_id": 65735565, "source": null, "title": "Cyclodipeptide synthases" }
encoded by the gene of the dmt1 locus). These additional biosynthetic enzymes include oxidoreductases, prenyltransferases, methyltransferases, and cyclases, as well as proteins such as cytochrome P450s. == Applications == Recently, bioinformatic methods have been designed to predict the products of CDPSs in order to better understand how their catalytic process works. Moreover, research has brought to light a great deal of chemical information about CDPS pathways, and different projects can also generate chemical diversity. The production of cyclodipeptides has attracted immense attention because of their properties, not only as antifungal or antibacterial agents but also as biological targets; as a result, a number of pharmaceutical products contain CDPs. == See also == Cyclic peptide Aminoacyl tRNA synthetase == Notes == For the elongation reference, see "Part of transcription of DNA into mRNA". == References ==
{ "page_id": 65735565, "source": null, "title": "Cyclodipeptide synthases" }
Aspects of genetics including mutation, hybridisation, cloning, genetic engineering, and eugenics have appeared in fiction since the 19th century. Genetics is a young science, having started in 1900 with the rediscovery of Gregor Mendel's study on the inheritance of traits in pea plants. During the 20th century it developed to create new sciences and technologies including molecular biology, DNA sequencing, cloning, and genetic engineering. The ethical implications were brought into focus with the eugenics movement. Since then, many science fiction novels and films have used aspects of genetics as plot devices, often taking one of two routes: a genetic accident with disastrous consequences; or, the feasibility and desirability of a planned genetic alteration. The treatment of science in these stories has been uneven and often unrealistic. The film Gattaca did attempt to portray science accurately but was criticised by scientists. == Background == Modern genetics began with the work of the monk Gregor Mendel in the 19th century, on the inheritance of traits in pea plants. Mendel found that visible traits, such as whether peas were round or wrinkled, were inherited discretely, rather than by blending the attributes of the two parents. In 1900, Hugo de Vries and other scientists rediscovered Mendel's research; William Bateson coined the term "genetics" for the new science, which soon investigated a wide range of phenomena including mutation (inherited changes caused by damage to the genetic material), genetic linkage (when some traits are to some extent inherited together), and hybridisation (crosses of different species). Eugenics, the production of better human beings by selective breeding, was named and advocated by Charles Darwin's cousin, the scientist Francis Galton, in 1883. It had both a positive aspect, the breeding of more children with high intelligence and good health; and a negative aspect, aiming to suppress "race degeneration" by
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
preventing supposedly "defective" families with attributes such as profligacy, laziness, immoral behaviour and a tendency to criminality from having children. Molecular biology, the study of the interactions and regulation of genetic materials, began with the identification in 1944 of DNA as the main genetic material; the double helix structure of DNA was determined by James Watson and Francis Crick in 1953, and the genetic code was deciphered in the decade that followed. DNA sequencing, the identification of an exact sequence of genetic information in an organism, was developed in 1977 by Frederick Sanger. Genetic engineering, the modification of the genetic material of a live organism, became possible in 1972 when Paul Berg created the first recombinant DNA molecules (artificially assembled genetic material) using viruses. Cloning, the production of genetically identical organisms from some chosen starting point, was shown to be practicable in a mammal with the creation of Dolly the sheep from an ordinary body cell in 1996 at the Roslin Institute. == Genetics themes == === Mutants and hybrids === Mutation and hybridisation are widely used in fiction, starting in the 19th century with science fiction works such as Mary Shelley's 1818 novel Frankenstein and H. G. Wells's 1896 The Island of Dr. Moreau. In her 1977 Biological Themes in Modern Science Fiction, Helen Parker identified two major types of story: "genetic accident", the uncontrolled, unexpected and disastrous alteration of a species; and "planned genetic alteration", whether controlled by humans or aliens, and the question of whether that would be either feasible or desirable. In science fiction up to the 1970s, the genetic changes were brought about by radiation, breeding programmes, or manipulation with chemicals or surgery (and thus, notes Lars Schmeink, not necessarily by strictly genetic means). Examples include The Island of Dr. Moreau with its horrible manipulations; Aldous Huxley's 1932 Brave New World with a breeding programme;
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
and John Taine's 1951 Seeds of Life, using radiation to create supermen. After the discovery of the double helix and then recombinant DNA, genetic engineering became the focus for genetics in fiction, as in books like Brian Stableford's tale of a genetically modified society in his 1998 Inherit the Earth, or Michael Marshall Smith's story of Organ farming in his 1997 Spares. Comic books have imagined mutated superhumans with extraordinary powers. The DC Universe (from 1939) imagines "metahumans"; the Marvel Universe (from 1961) calls them "mutants", while the Wildstorm (from 1992) and Ultimate Marvel (2000–2015) Universes name them "posthumans". Stan Lee introduced the concept of mutants in the Marvel X-Men books in 1963; the villain Magneto declares his plan to "make Homo sapiens bow to Homo superior!", implying that mutants will be an evolutionary step up from current humanity. Later, the books speak of an X-gene that confers powers from puberty onwards. X-men powers include telepathy, telekinesis, healing, strength, flight, time travel, and the ability to emit blasts of energy. Marvel's god-like Celestials are later (1999) said to have visited Earth long ago and to have modified human DNA to enable mutant powers. James Blish's 1952 novel Titan's Daughter (in Kendell Foster Crossen's Future Tense collection) featured stimulated polyploidy (giving organisms multiple sets of genetic material, something that can create new species in a single step), based on spontaneous polyploidy in flowering plants, to create humans with more than normal height, strength, and lifespans. === Cloning === Cloning, too, is a familiar plot device. Aldous Huxley's 1931 dystopian novel Brave New World imagines the in vitro cloning of fertilised human eggs. Huxley was influenced by J. B. S. Haldane's 1924 non-fiction book Daedalus; or, Science and the Future, which used the Greek myth of Daedalus to symbolise the coming revolution
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
in genetics; Haldane predicted that humans would control their own evolution through directed mutation and in vitro fertilisation. Cloning was explored further in stories such as Poul Anderson's 1953 UN-Man. In his 1976 novel, The Boys from Brazil, Ira Levin describes the creation of 96 clones of Adolf Hitler, replicating for all of them the rearing of Hitler (including the death of his father at age 13), with the goal of resurrecting Nazism. In his 1990 novel Jurassic Park, Michael Crichton imagined the recovery of the complete genome of a dinosaur from fossil remains, followed by its use to recreate living animals of an extinct species. Cloning is a recurring theme in science fiction films like Jurassic Park (1993), Alien Resurrection (1997), The 6th Day (2000), Resident Evil (2002), Star Wars: Episode II (2002) and The Island (2005). The process of cloning is represented variously in fiction. Many works depict the artificial creation of humans by a method of growing cells from a tissue or DNA sample; the replication may be instantaneous, or take place through slow growth of human embryos in artificial wombs. In the long-running British television series Doctor Who, the Fourth Doctor and his companion Leela were cloned in a matter of seconds from DNA samples ("The Invisible Enemy", 1977) and then—in an apparent homage to the 1966 film Fantastic Voyage—shrunk to microscopic size in order to enter the Doctor's body to combat an alien virus. The clones in this story are short-lived, and can only survive a matter of minutes before they expire. Films such as The Matrix and Star Wars: Episode II – Attack of the Clones have featured human foetuses being cultured on an industrial scale in enormous tanks. Cloning humans from body parts is a common science fiction trope, one of several genetics
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
themes parodied in Woody Allen's 1973 comedy Sleeper, where an attempt is made to clone an assassinated dictator from his disembodied nose. === Genetic engineering === Genetic engineering features in many science fiction stories. Films such as The Island (2005) and Blade Runner (1982) bring the engineered creature to confront the person who created it or the being it was cloned from, a theme seen in some film versions of Frankenstein. Few films have informed audiences about genetic engineering as such, with the exception of the 1978 The Boys from Brazil and the 1993 Jurassic Park, both of which made use of a lesson, a demonstration, and a clip of scientific film. In 1982, Frank Herbert's novel The White Plague described the deliberate use of genetic engineering to create a pathogen which specifically killed women. Another of Herbert's creations, the Dune series of novels, starting with Dune in 1965, emphasises genetics. It combines selective breeding by a powerful sisterhood, the Bene Gesserit, to produce a supernormal male being, the Kwisatz Haderach, with the genetic engineering of the powerful but despised Tleilaxu. === Eugenics === Eugenics plays a central role in films such as Andrew Niccol's 1997 Gattaca, the title alluding to the letters G, A, T, C for guanine, adenine, thymine, and cytosine, the four nucleobases of DNA. In the film, genetic engineering of humans is unrestricted, resulting in genetic discrimination, loss of diversity, and adverse effects on society. The film explores the ethical implications; the production company, Sony Pictures, consulted with a gene therapy researcher, French Anderson, to ensure that the portrayal of science was realistic, and test-screened the film with the Society of Mammalian Cell Biologists and the American National Human Genome Research Institute before its release. This care did not prevent researchers from attacking the film after its release. Philip
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
Yam of Scientific American called it "science bashing"; in Nature, Kevin Davies called it a "surprisingly pedestrian affair"; and the molecular biologist Lee Silver described the film's extreme genetic determinism as "a straw man". == Myth and oversimplification == The geneticist Dan Koboldt observes that while science and technology play major roles in fiction, from fantasy and science fiction to thrillers, the representation of science in both literature and film is often unrealistic. In Koboldt's view, genetics in fiction is frequently oversimplified, and some myths are common and need to be debunked. For example, the Human Genome Project has not (he states) immediately led to a Gattaca world, as the relationship between genotype and phenotype is not straightforward. People do differ genetically, but only very rarely because they are missing a gene that other people have: people have different alleles of the same genes. Eye and hair colour are controlled not by one gene each, but by multiple genes. Mutations do occur, but they are rare: people are about 99.9% identical genetically, the roughly 3 million differences between any two people being dwarfed by the billions of DNA bases which are identical; nearly all DNA variants are inherited, not acquired afresh by mutation. And, Koboldt writes, believable scientists in fiction should know their knowledge is limited. == See also == Evolution in fiction Category:Fiction about genetic engineering Parasites in fiction == References ==
{ "page_id": 57936783, "source": null, "title": "Genetics in fiction" }
The Luigi G. Napolitano Award is presented every year at the International Astronautical Congress. Luigi Gerardo Napolitano was an engineer, scientist and professor. The award has been presented annually since 1993 to a young scientist, below 30 years of age, who has contributed significantly to the advancement of aerospace science and has presented a paper on that contribution at the International Astronautical Congress. The Luigi G. Napolitano Award is donated by the Napolitano family; it consists of the Napolitano commemorative medal and a certificate of citation, and is presented by the Education Committee of the IAF. The International Academy of Astronautics awards the Luigi Napolitano Book Award annually. == Winners ==
1993 Shin-ichi Nishizawa, Japan
1994 Ralph D. Lorenz, United Kingdom
1995 O. G. Liepack, Germany
1996 W. Tang, China
1997 G. W. R. Frenken, Netherlands
1998 Michael Donald Ingham
1999 Chris Blanksby, Australia
2000 Frederic Monnaie, France
2001 Noboru Takeichi, Japan
2002 Stefano Ferreti, Italy
2003 Veronica de Micco, Italy
2004 Julie Bellerose, United States
2005 Nicola Baggio, Italy
2006 Carlo Menon, Italy
2007 Paul Williams, Australia
2008 Giuseppe Del Gaudio, Italy
2009 Daniel Kwom, United States
2010 Andrew Flasch, Japan
2011 Nishchay Mhatre, India
2012 Valerio Carandente, Italy
2013 Sreeja Nag, United States
2014 Alessandro Golkar, Italy
2015 Koki Ho, Japan
2016 Melissa Mirino, Italy
2017 Akshata Krishnamurthy, United States
2018 Peter Z. Schulte, United States
2019 Hao Chen, China
2020 Elizabeth Barrios, United States
2021 Federica Angeletti, Italy
2022 Julia Briden, United States
== See also == List of engineering awards List of physics awards List of space technology awards == External links == International Astronautical Federation Award winners IAA 2020 Award winner 2021 Award winner 2022 Award winner
{ "page_id": 5639053, "source": null, "title": "Luigi G. Napolitano Award" }
Q methodology is a research method used in psychology and in social sciences to study people's "subjectivity"—that is, their viewpoint. Q was developed by psychologist William Stephenson. It has been used both in clinical settings for assessing a patient's progress over time (intra-rater comparison), as well as in research settings to examine how people think about a specific topic (inter-rater comparisons). == Technical overview == The name "Q" comes from the form of factor analysis that is used to analyze the data. Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking. It is sometimes said that Q factor analysis is R factor analysis with the data table turned sideways. While helpful as a heuristic for understanding Q, this explanation may be misleading, as most Q methodologists argue that for mathematical reasons no one data matrix would be suitable for analysis with both Q and R. The data for Q factor analysis come from a series of "Q sorts" performed by one or more subjects. A Q sort is a ranking of variables—typically presented as statements printed on small cards—according to some "condition of instruction." For example, in a Q study of people's views of a celebrity, a subject might be given statements like "He is a deeply religious man" and "He is a liar," and asked to sort them from "most like how I think about this celebrity" to "least like how I think about this celebrity." The use of ranking, rather than asking subjects to rate their agreement
{ "page_id": 5442449, "source": null, "title": "Q methodology" }
with statements individually, is meant to capture the idea that people think about ideas in relation to other ideas, rather than in isolation. Usually, this ranking is done on a score sheet with the most salient options at the extreme ends of the sheet, such as "most agree" and "most disagree". This score sheet is usually in the form of a bell curve where a respondent can place most Q sorts at the middle and the fewest Q sorts at the far ends of the score sheets. The sample of statements for a Q sort is drawn from and claimed by the researcher to be representative of a "concourse"—the sum of all things people say or think about the issue being investigated. Commonly, Q methodologists use a structured sampling approach in order to try and represent the full breadth of the concourse. One salient difference between Q and other social science research methodologies, such as surveys, is that it typically uses many fewer subjects. This can be a strength, as Q is sometimes used with a single subject, and it makes research far less expensive. In such cases, a person will rank the same set of statements under different conditions of instruction. For example, someone might be given a set of statements about personality traits and then asked to rank them according to how well they describe herself, her ideal self, her father, her mother, etc. Working with a single individual is particularly relevant in the study of how an individual's rankings change over time and this was the first use of Q methodology. As Q methodology works with a small non-representative sample, conclusions are limited to those who participated in the study. In studies of intelligence, Q factor analysis can generate consensus based assessment (CBA) scores as direct measures.
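The Q-versus-R distinction comes down to which axis of the data matrix is correlated. The sketch below (pure Python, invented illustrative data) shows the Q direction: given each subject's forced-distribution ranking of the same statements, subjects are correlated with each other across statements, so that subjects holding similar viewpoints correlate highly and would load on the same factor:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length rankings."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Each row: one subject's forced Q sort of five statements,
# from "most agree" (+2) down to "most disagree" (-2).
sorts = {
    "subject_1": [2, 1, 0, -1, -2],
    "subject_2": [1, 2, 0, -2, -1],   # similar viewpoint
    "subject_3": [-2, -1, 0, 1, 2],   # opposite viewpoint
}

# Q method: correlate SUBJECTS across statements
# (R method would instead correlate statements across subjects).
print(round(pearson(sorts["subject_1"], sorts["subject_2"]), 2))  # 0.8
print(round(pearson(sorts["subject_1"], sorts["subject_3"]), 2))  # -1.0
```

Q factor analysis then extracts a few factors from this subject-by-subject correlation matrix, each factor representing a shared way of thinking.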
{ "page_id": 5442449, "source": null, "title": "Q methodology" }
Alternatively, the unit of measurement of a person in this context is his or her factor loading for a Q-sort he or she performs. Factors represent norms with respect to schemata. The individual who gains the highest factor loading on an operant factor is the person most able to conceive the norm for the factor. What the norm means is always a matter for conjecture and refutation (Popper). It may be indicative of the wisest solution, or the most responsible, the most important, or an optimized-balanced solution. These are all untested hypotheses that require future study. An alternative method that determines the similarity among subjects somewhat as Q methodology does, as well as the cultural "truth" of the statements used in the test, is Cultural Consensus Theory. The "Q sort" data collection procedure is traditionally done using a paper template and the sample of statements or other stimuli printed on individual cards. However, there are also computer software applications for conducting online Q sorts. For example, UC Riverside's Riverside Situational Q-sort (RSQ) claims to measure the psychological properties of situations. Their International Situations Project is using this university-developed web-based application to explore the psychologically salient aspects of situations and how those aspects may differ across cultures. To date there has been no study of differences in sorts produced by computer-based vs. physical sorting. One Q-sort should produce two sets of data. The first is the physical distribution of sorted objects. The second is either an ongoing 'think-out-loud' narrative or a discussion that immediately follows the sorting exercise. The purpose of these narratives was, in the first instance, to elicit discussion of the reasons for particular placements. While the relevance of this qualitative data is often suppressed in current uses of Q-methodology, the modes of reasoning behind placement
of an item can be more analytically relevant than the absolute placement of cards. == Application == Q-methodology has been used as a research tool in a wide variety of disciplines including nursing, veterinary medicine, public health, transportation, education, rural sociology, hydrology, mobile communication, and even robotics. The methodology is particularly useful when researchers wish to understand and describe the variety of subjective viewpoints on an issue. == Validation == Some information on validation of the method is available. Furthermore, the issue of validity concerning the Q-method has been discussed variously. However, Lundberg et al. point out that "[s]ince participants’ Q sorts are neither right nor wrong, but constructed through respondents’ rank-ordering of self-referent items, validity in line with quantitative tenets of research is of no concern in Q". == Criticism of Q methodology == In 2013, Jarl K. Kampen & Peter Tamás published an article titled "Overly ambitious: contributions and current status of Q methodology". Kampen & Tamás state that "Q methodology neither delivers its promised insight into human subjectivity nor accounts adequately for threats to the validity of the claims it can legitimately make". This, according to the authors, makes the method "inappropriate for its declared purpose". In response to Kampen & Tamás's criticism, Steven R. Brown, Stentor Danielson & Job van Exel published the reply "Overly ambitious critics and the Medici Effect: a reply to Kampen and Tamás". Brown et al. state that since its inception, Q methodology "has been a recurring target of hastily assembled critiques", which have served no purpose other than to misinform other researchers and readers. Given the volume of criticism directed at Q methodology, Brown et al. gather these criticisms under the term the Medici Effect, named after the famed family that denied
Galileo Galilei's evidence while refusing to look through his telescope. Brown et al. continue by responding to certain points of Kampen & Tamás's criticisms: On the nature of subjectivity Concourse and Q samples Factor analysis and interpretation The forced Q-sort distribution Items:persons ratio Researcher bias Miscellany One point the authors address in section 3 is Kampen & Tamás's claim that "the limits in the number of factors that Q can produce" defy logic because "Q can identify no more factors than there are Q statements". This argument, however, Brown et al. "have never before encountered". The authors respond that "the data of Q methodology are not responses to individual statements alone, but more importantly in their relationships, as when they are rank-ordered". In their conclusion, Brown et al. point out that, much like the Medici family's refusal to look through Galileo's telescope, "these critics have failed to engage personally with Q in order to see if their abstract critiques hold up in practice". == See also == Card sorting Factor analysis Group concept mapping Validation and verification Varimax rotation == References ==
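A toy sketch of the by-person correlation and factor loading at the heart of Q analysis (the three-person data below are fabricated for illustration; real Q studies use dedicated software and factor rotation):

```python
import numpy as np

# Toy Q analysis: three participants rank-order nine statements.
# In Q methodology, persons (not items) are the variables, so we
# correlate whole Q-sorts with one another.
sorts = np.array([
    [-2, -1, -1, 0, 0, 0, 1, 1, 2],    # person A
    [-2, -1, -1, 0, 0, 0, 1, 2, 1],    # person B (similar view to A)
    [ 2,  1,  1, 0, 0, 0, -1, -1, -2], # person C (opposed view)
]).T                                    # rows = statements, cols = persons

corr = np.corrcoef(sorts, rowvar=False)  # 3x3 person-by-person correlations

# Loadings on the first (largest) principal factor of the correlation matrix:
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending eigenvalues
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])

print(np.round(corr, 2))
print(np.round(loadings, 2))  # A and B load together; C loads with opposite sign
```

The person with the largest absolute loading on a factor is, in the text's terms, the one "most able to conceive the norm" that factor represents.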
In chemistry, a one-pot synthesis is a strategy to improve the efficiency of a chemical reaction whereby a reactant is subjected to successive chemical reactions in just one reactor. This is much desired by chemists because avoiding a lengthy separation and purification of the intermediate chemical compounds saves time and resources while increasing chemical yield. Examples of one-pot syntheses are the total synthesis of tropinone and the Gassman indole synthesis. Sequential one-pot syntheses can be used to generate even complex targets with multiple stereocentres, such as oseltamivir, which may significantly reduce the number of steps required overall and have important commercial implications. A sequential one-pot synthesis with reagents added to a reactor one at a time and without work-up is also called a telescoping synthesis. In one such procedure, the reaction of 3-N-tosylaminophenol I with acrolein II affords a hydroxyl-substituted quinoline III through four sequential steps without workup of the intermediate products (see image). The addition of acrolein (blue) is a Michael reaction catalyzed by N,N-diisopropylamine; the presence of ethanol converts the aldehyde group to an acetal, but this process is reversed when hydrochloric acid is introduced (red). The enolate reacts as an electrophile in a Friedel-Crafts reaction with ring-closure. The alcohol group is eliminated in the presence of potassium hydroxide (green), and when, in the final step, the reaction medium is neutralized to pH 7 (magenta), the tosyl group is eliminated as well. == References ==
{ "page_id": 1706901, "source": null, "title": "One-pot synthesis" }
In molecular biology, gel extraction or gel isolation is a technique used to isolate a desired fragment of intact DNA from an agarose gel following agarose gel electrophoresis. After extraction, fragments of interest can be mixed, precipitated, and enzymatically ligated together in several simple steps. This process, usually performed on plasmids, is the basis for rudimentary genetic engineering. After DNA samples are run on an agarose gel, extraction involves four basic steps: identifying the fragments of interest, isolating the corresponding bands, isolating the DNA from those bands, and removing the accompanying salts and stain. To begin, UV light is shone on the gel in order to illuminate all the ethidium bromide-stained DNA. Care must be taken to avoid exposing the DNA to mutagenic radiation for longer than absolutely necessary. The desired band is identified and physically removed with a cover slip or razor blade. The removed slice of gel should contain the desired DNA inside. An alternative method, utilizing SYBR Safe DNA gel stain and blue-light illumination, avoids the DNA damage associated with ethidium bromide and UV light. Several strategies for isolating and cleaning the DNA fragment of interest exist. == Spin Column Extraction == Gel extraction kits are available from several major biotech manufacturers for a final cost of approximately 1–2 US$ per sample. Protocols included in these kits generally call for the dissolution of the gel-slice in 3 volumes of chaotropic agent at 50 °C, followed by application of the solution to a spin-column (the DNA remains in the column), a 70% ethanol wash (the DNA remains in the column, salt and impurities are washed out), and elution of the DNA in a small volume (30 μL) of water or buffer. == Dialysis == The gel fragment is placed in a dialysis tube that is permeable to fluids
{ "page_id": 2165654, "source": null, "title": "Gel extraction" }
but impermeable to molecules the size of DNA, thus preventing the DNA from passing through the membrane when soaked in TE buffer. An electric field is established around the tubing (in a way similar to gel electrophoresis) long enough that the DNA is removed from the gel but remains in the tube. The tube solution can then be pipetted out and will contain the desired DNA with minimal background. == Traditional == The traditional method of gel extraction involves creating a folded pocket of Parafilm and placing the agarose fragment inside. The agarose is physically compressed with a finger into a corner of the pocket, partially liquefying the gel and its contents. The liquid droplets can then be directed out of the pocket onto an exterior piece of Parafilm, where they are pipetted into a small tube. A butanol extraction removes the ethidium bromide stain, followed by a phenol/chloroform extraction of the cleaned DNA fragment. The disadvantage of gel isolation is that background can only be removed if it can be physically identified under UV light. If two bands are very close together, it can be hard to separate them without some contamination. In order to clearly identify the band of interest, further restriction digests may be necessary. Restriction sites unique to unwanted bands of similar size can aid in breaking up these potential contaminants. == References ==
The molecular formula C12H16O7 may refer to: α-Arbutin β-Arbutin
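As a small illustration, the molar mass shared by these two anomers can be computed from the molecular formula (a minimal sketch using standard atomic weights):

```python
# Molar mass of C12H16O7 from standard atomic weights (rounded IUPAC
# values); both arbutin anomers share this formula and therefore this mass.
WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}
FORMULA = {"C": 12, "H": 16, "O": 7}

molar_mass = sum(WEIGHTS[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # 272.25 g/mol
```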
{ "page_id": 24185750, "source": null, "title": "C12H16O7" }
Born–von Karman boundary conditions are periodic boundary conditions which impose the restriction that a wave function must be periodic on a certain Bravais lattice. Named after Max Born and Theodore von Kármán, this condition is often applied in solid state physics to model an ideal crystal. Born and von Kármán published a series of articles in 1912 and 1913 that presented one of the first theories of specific heat of solids based on the crystalline hypothesis and included these boundary conditions. The condition can be stated as {\displaystyle \psi (\mathbf {r} +N_{i}\mathbf {a} _{i})=\psi (\mathbf {r} ),} where i runs over the dimensions of the Bravais lattice, the ai are the primitive vectors of the lattice, and the Ni are integers (assuming the lattice has N cells, where N = N1N2N3). This definition can be used to show that {\displaystyle \psi (\mathbf {r} +\mathbf {T} )=\psi (\mathbf {r} )} for any lattice translation vector T such that {\displaystyle \mathbf {T} =\sum _{i}N_{i}\mathbf {a} _{i}.} Note, however, that the Born–von Karman boundary conditions are most useful when the Ni are large (effectively infinite). The Born–von Karman boundary condition is important in solid state physics for analyzing many features of crystals, such as diffraction and the band gap. Modeling the potential of a crystal as a periodic function with the Born–von Karman boundary condition and plugging it into the Schrödinger equation results in a proof of Bloch's theorem, which is particularly important in understanding the band structure of crystals. == References == Ashcroft, Neil W.; Mermin, N. David (1976). Solid State Physics. New York: Holt, Rinehart and Winston. p. 135. ISBN 978-0-03-083993-1. Leighton, Robert B.
{ "page_id": 3476372, "source": null, "title": "Born–von Karman boundary condition" }
(1948). "The Vibrational Spectrum and Specific Heat of a Face-Centered Cubic Crystal" (PDF). Reviews of Modern Physics. 20 (1): 165–174. Bibcode:1948RvMP...20..165L. doi:10.1103/RevModPhys.20.165. Ren, Shang Yuan (2017). Electronic States in Crystals of Finite Size: Quantum Confinement of Bloch Waves (2 ed.). Singapore: Springer.
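As a minimal numerical illustration of the condition stated above (the lattice constant and cell count below are arbitrary choices, not values from the references): in one dimension, requiring psi(x + N*a) = psi(x) restricts a plane wave exp(ikx) to the discrete wavevectors k = 2*pi*m/(N*a).

```python
import numpy as np

# 1D Born-von Karman check: psi(x + N*a) = psi(x) holds for a plane wave
# exp(i k x) exactly when k = 2*pi*m/(N*a) for integer m.
a = 1.0          # lattice constant (illustrative)
N = 8            # number of cells in the periodic "ring" (illustrative)
L = N * a

allowed_k = [2 * np.pi * m / L for m in range(N)]  # N distinct k-values

x = np.linspace(0.0, a, 50)
for k in allowed_k:
    psi = np.exp(1j * k * x)
    psi_shifted = np.exp(1j * k * (x + L))  # translate by N*a
    assert np.allclose(psi, psi_shifted)    # periodicity holds

# A generic k not of this quantized form violates the condition:
k_bad = 0.37
assert not np.allclose(np.exp(1j * k_bad * x),
                       np.exp(1j * k_bad * (x + L)))
print(len(allowed_k), "allowed k-values")
```

This quantization of k (one allowed wavevector per cell) is exactly the counting used when the condition is combined with Bloch's theorem to enumerate states in a band.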
The database of three-dimensional interacting domains (3did) is a biological database containing a catalogue of protein-protein interactions for which a high-resolution 3D structure is known. 3did collects and classifies all structural models of domain-domain interactions in the Protein Data Bank, providing molecular details for such interactions. 3did uses the Pfam database to define the position of protein domains in the protein structures. 3did was first published in 2005. The current version also includes a pipeline for the discovery and annotation of novel domain-motif interactions. For every interaction 3did identifies and groups different binding modes by clustering similar interfaces into “interaction topologies”. By maintaining a constantly updated collection of domain-based structural interaction templates, 3did is a reference source of information for the structural characterization of protein interaction networks. 3did is updated every six months and is available for bulk download and for browsing at http://3did.irbbarcelona.org. == See also == protein interaction three-dimensional structures == References == == External links == http://3did.irbbarcelona.org
{ "page_id": 31656862, "source": null, "title": "3did" }
NmsRA and NmsRB (Neisseria metabolic switch regulator), RcoF1 and RcoF2 (RNA regulating colonization factor) as well as NgncR_162 and NgncR_163 (Neisseria gonorrhoeae non-coding RNA) are all names of neisserial sibling small regulatory RNAs described and independently named in three publications. NmsRB/RcoF1/NgncR_163 was shown to be the predominant sibling. The sRNAs are tandemly arranged, structurally nearly identical and share 70% sequence identity. They translationally down-regulate genes involved in basic metabolic processes including tricarboxylic acid cycle enzymes and amino acid uptake and degradation. The target genes include: fumC, sdhC, gltA, sucC, prpB and prpC. The expression of the sRNAs is presumably under the control of RelA, as shown for N. meningitidis. Furthermore, the sRNAs interact with Hfq protein and target repression of putative colonization factor of the human nasopharynx PrpB mRNA, hence one of the proposed names is RNA regulating colonization factor. == See also == NrrF RNA Neisseria sigma-E sRNA Neisseria RNA thermometers == References ==
{ "page_id": 56691617, "source": null, "title": "Neisseria sibling sRNAs NmsR/RcoF" }
Iodine-125 (125I) is a radioisotope of iodine which has uses in biological assays, nuclear medicine imaging and in radiation therapy as brachytherapy to treat a number of conditions, including prostate cancer, uveal melanomas, and brain tumors. It is the second longest-lived radioisotope of iodine, after iodine-129. Its half-life is 59.392 days and it decays by electron capture to an excited state of tellurium-125. This state is not the metastable 125mTe, but rather a lower-energy state. The excited 125Te may (7% chance) undergo gamma decay with a maximum energy of 35 keV. More often (93% chance), the excited 125Te undergoes internal conversion and ejects an electron (< 35 keV). The resulting electron vacancy leads to emission of characteristic X-rays (27–32 keV) and a total of 21 Auger electrons (50 to 500 eV). Eventually, stable ground-state 125Te is produced as the final decay product. In medical applications, the internal conversion and Auger electrons cause little damage outside the cell which contains the isotope atom. The X-rays and gamma rays are of low enough energy to deliver a higher radiation dose selectively to nearby tissues, in "permanent" brachytherapy where the isotope capsules are left in place (125I competes with palladium-103 in such uses). Because of its relatively long half-life and emission of low-energy photons which can be detected by gamma-counter crystal detectors, 125I is a preferred isotope for tagging antibodies in radioimmunoassay and other gamma-counting procedures involving proteins outside the body. The same properties of the isotope make it useful for brachytherapy, and for certain nuclear medicine scanning procedures, in which it is attached to proteins (albumin or fibrinogen), and where a half-life longer than that provided by 123I is required for diagnostic or lab tests lasting several days. Iodine-125 can be used in scanning/imaging the thyroid, but iodine-123 is preferred
{ "page_id": 3345314, "source": null, "title": "Iodine-125" }
for this purpose, due to better radiation penetration and shorter half-life (13 hours). 125I is useful for glomerular filtration rate (GFR) testing in the diagnosis or monitoring of patients with kidney disease. Iodine-125 is used therapeutically in brachytherapy treatments of tumors. For radiotherapy ablation of tissues that absorb iodine (such as the thyroid), or that absorb an iodine-containing radiopharmaceutical, the beta-emitter iodine-131 is the preferred isotope. When studying plant immunity, 125I is used as the radiolabel in tracking ligands to determine which plant pattern recognition receptors (PRRs) they bind to. 125I is produced by the electron capture decay of 125Xe, an artificial isotope of xenon that is itself created by neutron capture of 124Xe. 124Xe makes up around 0.1% of naturally occurring xenon and is nearly stable: it undergoes double electron capture with a half-life orders of magnitude longer than the age of the universe. Because of the artificial production route of 125I and its short half-life, its natural abundance on Earth is effectively zero. == Production == 125I is a reactor-produced radionuclide and is available in large quantities. Its production involves the two following nuclear reactions: 124Xe (n,γ) → 125mXe (57 s) → 125I (t½ = 59.4 d) 124Xe (n,γ) → 125gXe (19.9 h) → 125I (t½ = 59.4 d) The irradiation target is the primordial nuclide 124Xe, from which 125I is made by neutron capture. It is loaded into irradiation capsules of the zirconium alloy zircaloy-2 (a corrosion-resisting alloy transparent to neutrons) to a pressure of about 100 bar (~ 100 atm). Upon irradiation with slow neutrons in a nuclear reactor, several radioisotopes of xenon are produced. However, only the decay of 125Xe leads to a radioiodine: 125I. The other xenon radioisotopes decay either to stable xenon, or to various caesium isotopes, some of them radioactive
(among others, the long-lived 135Cs (t½ = 1.33 Ma) and 137Cs (t½ = 30 a)). Long irradiation times are disadvantageous. Iodine-125 itself has a neutron capture cross section of 900 barns, and consequently during a long irradiation, part of the 125I formed will be converted to 126I, a beta-emitter and positron-emitter with a half-life of 12.93 days, which is not medically useful. In practice, the most useful irradiation time in the reactor amounts to a few days. Thereafter, the irradiated gas is allowed to decay for three or four days to eliminate short-lived unwanted radioisotopes, and to allow the newly produced xenon-125 (t½ = 17 hours) to decay to iodine-125. To isolate radioiodine, the irradiated capsule is first cooled at low temperature (to condense the free iodine gas onto the capsule inner wall) and the remaining Xe gas is vented in a controlled way and recovered for further use. The inner walls of the capsule are then rinsed with a dilute NaOH solution to collect iodine as soluble iodide (I−) and hypoiodite (IO−), according to the standard disproportionation reaction of halogens in alkaline solution. Any caesium atom present immediately oxidizes and passes into the water as Cs+. In order to eliminate any long-lived 135Cs and 137Cs which may be present in small amounts, the solution is passed through a cation-exchange column, which exchanges Cs+ for another non-radioactive cation (e.g., Na+). The radioiodine (as anion I− or IO−) remains in solution as a mixture of iodide and hypoiodite. == Availability and purity == Iodine-125 is commercially available in dilute NaOH solution as 125I-iodide (or the hypohalite sodium hypoiodite, NaIO). The radioactive concentration lies at 4 to 11 GBq/mL and the specific radioactivity is > 75 GBq/μmol (7.5 × 1016 Bq/mol). The chemical and radiochemical purity is high. The radionuclidic purity is also high; some 126I
(t1/2 = 12.93 d) is unavoidable due to the neutron capture noted above. The tolerable 126I content (set by the unwanted isotope's interference with dose calculations in brachytherapy) lies at about 0.2 atom % (atom fraction) of the total iodine (the rest being 125I). == Producers == As of October 2019, there were two producers of iodine-125: the McMaster Nuclear Reactor in Hamilton, Ontario, Canada, and a VVR-SM research reactor in Uzbekistan. The McMaster reactor is presently the largest producer of iodine-125, producing approximately 60 per cent of the global supply in 2018, with the remaining global supply produced at the reactor based in Uzbekistan. Annually, the McMaster reactor produces enough iodine-125 to treat approximately 70,000 patients. In November 2019, the research reactor in Uzbekistan shut down temporarily in order to facilitate repairs. The temporary shutdown threatened the global supply of the radioisotope by leaving the McMaster reactor as the sole producer of iodine-125 during the period. Prior to 2018, the National Research Universal (NRU) reactor at Chalk River Laboratories in Deep River, Ontario, was one of three reactors to produce iodine-125. However, on March 31, 2018, the NRU reactor was permanently shut down ahead of its scheduled decommissioning in 2028, as a result of a government order. The Russian nuclear reactor equipped to produce iodine-125 was offline as of December 2019. == Decay properties == The detailed decay mechanism to form the stable daughter nuclide tellurium-125 is a multi-step process that begins with electron capture, which produces a tellurium-125 nucleus in an excited state with a half-life of 1.6 ns. The excited tellurium-125 nucleus may undergo gamma decay, emitting a gamma photon at 35.5 keV, or undergo internal conversion to emit an electron. The electron vacancy from internal conversion results in a cascade of electron relaxation as
the core electron hole moves toward the valence orbitals. The cascade involves many characteristic X-rays and Auger transitions. If the excited tellurium-125 nucleus instead undergoes gamma decay, a different electron relaxation cascade follows before the nuclide comes to rest. Throughout the entire process, an average of 13.3 electrons are emitted (10.3 of which are Auger electrons), most with energies less than 400 eV (79% of yield). The internal conversion and Auger electrons from the radioisotope have been found in one study to do little cellular damage, unless the radionuclide is directly incorporated chemically into cellular DNA, which is not the case for present radiopharmaceuticals that use 125I as the radioactive label nuclide. Rather, cellular damage results from the gamma and characteristic X-ray photons. As with other radioisotopes of iodine, accidental iodine-125 uptake in the body (mostly by the thyroid gland) can be blocked by the prompt administration of stable iodine-127 in the form of an iodide salt. Potassium iodide (KI) is typically used for this purpose. However, preventive self-administration of stable KI without medical justification is not recommended, as it may disturb normal thyroid function. Such a treatment must be carefully dosed and requires an appropriate KI amount prescribed by a specialised physician. == See also == Isotopes of iodine Iodine-123 Iodine-129 Iodine-131 Iodine in biology 2,5-Dimethoxy-4-iodoamphetamine (DOI), often labeled with 125I == Notes and references ==
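The cool-down arithmetic in the production section follows directly from the half-lives quoted in the text (17 h for 125Xe, 59.392 d for 125I). A sketch using the standard two-member Bateman decay equation, with illustrative initial amounts:

```python
import math

# Decay arithmetic for the 125Xe -> 125I chain, using half-lives from the
# text. Initial amounts are illustrative (normalized to 1 atom).
T_XE = 17.0 / 24.0   # 125Xe half-life, days
T_I = 59.392         # 125I half-life, days
LAM_XE = math.log(2) / T_XE
LAM_I = math.log(2) / T_I

def iodine_from_xenon(n_xe0, t):
    """Bateman solution: 125I atoms grown in from n_xe0 atoms of 125Xe
    after t days, assuming no 125I initially."""
    return (n_xe0 * LAM_XE / (LAM_I - LAM_XE)
            * (math.exp(-LAM_XE * t) - math.exp(-LAM_I * t)))

def iodine_remaining(n_i0, t):
    """Simple exponential decay of 125I over t days."""
    return n_i0 * math.exp(-LAM_I * t)

# After the 3-4 day cool-down mentioned above, nearly all 125Xe
# (half-life only 17 h) has decayed into 125I:
print(iodine_from_xenon(1.0, 4.0))    # close to 1 (only slight 125I decay)
# One 125I half-life later, half of the product remains:
print(iodine_remaining(1.0, 59.392))  # 0.5
```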
The ion channel hypothesis of Alzheimer's disease (AD), also known as the channel hypothesis or the amyloid beta ion channel hypothesis, is a more recent variant of the amyloid hypothesis of AD, which identifies amyloid beta (Aβ) as the underlying cause of neurotoxicity seen in AD. While the traditional formulation of the amyloid hypothesis pinpoints insoluble, fibrillar aggregates of Aβ as the basis of disruption of calcium ion homeostasis and subsequent apoptosis in AD, the ion channel hypothesis in 1993 introduced the possibility of an ion-channel-forming oligomer of soluble, non-fibrillar Aβ as the cytotoxic species allowing unregulated calcium influx into neurons in AD. The ion channel hypothesis is broadly supported as an explanation for the calcium ion influx that disrupts calcium ion homeostasis and induces apoptosis in neurons. Because the extracellular deposition of Aβ fibrils in senile plaques is not sufficient to predict risk or onset of AD, and clinical trials of drugs that target the Aβ fibrillization process have largely failed, the ion channel hypothesis provides novel molecular targets for continued development of AD therapies and for better understanding of the mechanism underlying onset and progression of AD. == History == The ion channel hypothesis was first proposed by Arispe and colleagues in 1993 upon discovery that Aβ could form unregulated cation-selective ion channels when incorporated into planar lipid bilayers. Further research showed that a particular fragment of Aβ, Aβ (25-35), spontaneously inserts into planar lipid bilayers to form weakly selective ion channels and that membrane insertion occurs non-specifically, irreversibly, and with a broad range of oligomer conformations. Though more recent studies have found that Aβ channels can be blocked by small molecules, the broad variety of Aβ ion channel conformations and chemistries make it difficult to design a channel blocker specific to Aβ without compromising other ion channels
{ "page_id": 50662306, "source": null, "title": "Ion channel hypothesis of Alzheimer's disease" }
in the cell membrane. == Structure == The Aβ monomer generally assumes an α-helical formation in aqueous solution, but can reversibly transition between α-helix and β-sheet structures at varying polarities. Atomic force microscopy captured images of Aβ channel structures that facilitated calcium uptake and subsequent neuritic degeneration. Molecular dynamics simulations of Aβ in lipid bilayers suggest that Aβ adopts a β-sheet-rich structure within lipid bilayers that gradually evolves to result in a wide variety of relaxed channel conformations. In particular, data support the organization of Aβ channels in β-barrels, structural formations commonly seen in transmembrane pore-forming toxins including anthrax. == Properties == Aβ channels are selective for cations over anions, voltage-independent, and display a long channel lifetime, from minutes to hours. They can be extremely large, up to 5 nS in size, and can insert into the cell membrane from aqueous solution. Aβ channels are heterogeneous and allow flow of physiologically relevant ions such as Ca2+, Na+, K+, Cs+, and Li+ across the cell membrane. == Mechanism of action == === Channel formation === Cytotoxicity caused by ion channel formation is commonly seen in the world of bacteria. While eukaryotic cells are generally less vulnerable to channel-forming toxins because of their larger volume and stiffer, sterol-containing membranes, several eukaryotic channel-forming toxins have been seen to sidestep these obstacles by forming especially large, stable ion channels or anchoring to sterols in the cell membrane. Neurons are particularly vulnerable to channel-forming toxins because of their reliance on maintenance of strict Na+, K+, and Ca2+ concentration gradients and membrane potential for proper functioning and action potential propagation. 
Leakage caused by insertion of an ion channel such as Aβ rapidly alters intracellular ionic concentrations, resulting in energetic stress, failure of signaling, and cell death. === Ionic leakage === The large, poorly selective, and long-lived
nature of Aβ channels allows rapid degradation of membrane potential in neurons. A single Aβ channel 4 nS in size can cause Na+ concentration to change as much as 10 μM/s. Degradation of membrane potential in this manner also generates additional Ca2+ influx through voltage-sensitive Ca2+ channels in the plasma membrane. Ionic leakage alone has been demonstrated to be sufficient to rapidly disrupt cellular homeostasis and induce cell necrosis. === Mitochondrial pathway of apoptosis === Aβ channels may also trigger apoptosis through insertion in mitochondrial membranes. Aβ injection in rats has been shown to damage mitochondrial structure in neurons, decrease mitochondrial membrane potential, and increase intracellular Ca2+ concentration. Additionally, Aβ accumulation increases expression of genes associated with the mitochondrial permeability transition pore (MPTP), a non-selective, high conductance channel spanning the inner and outer mitochondrial membrane. Ca2+ influx into mitochondria can collapse mitochondrial membrane potential, causing MPTP opening, which then induces mitochondrial swelling, further dissipation of membrane potential, generation of mitochondrial reactive oxygen species (ROS), rupture of the outer mitochondrial membrane, and release of apoptogenic factors such as cytochrome c. == Therapeutic potential == === Current treatments === The only treatments currently approved for AD are either cholinesterase inhibitors (such as donepezil) or glutamate receptor antagonists (such as memantine), which show limited efficacy in treating symptoms or halting progression of AD. The slight improvement in cognitive function brought about by these drugs is only seen in patients with mild to moderate AD, and is confined to the first year of treatment, as efficacy progressively declines, completely disappearing by 2 or 3 years of treatment. 
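The single-channel leakage figures quoted earlier (a 4 nS Aβ channel degrading ionic gradients) can be sanity-checked with a back-of-the-envelope calculation. The driving force and cell size below are illustrative assumptions, not values from the cited studies:

```python
import math

# Ion flux through a single large Abeta channel, treated as an ohmic
# conductor. Conductance is from the text; driving force and cell
# radius are illustrative assumptions.
G = 4e-9               # channel conductance, 4 nS
V = 0.07               # assumed driving force, ~70 mV
E_CHARGE = 1.602e-19   # elementary charge, C
N_A = 6.022e23         # Avogadro's number

current = G * V                      # amperes
ions_per_second = current / E_CHARGE
print(f"{ions_per_second:.2e} monovalent ions/s")  # on the order of 1e9

# Rate of concentration change in an assumed spherical cell of radius r.
r = 10e-6                                   # 10 micrometres (assumption)
volume_L = (4 / 3) * math.pi * r**3 * 1000  # m^3 -> litres
rate_M_per_s = ions_per_second / N_A / volume_L
print(f"{rate_M_per_s * 1e6:.0f} uM/s")
# For this small illustrative cell the rate comes out well above the
# ~10 uM/s quoted in the text; larger cell volumes and smaller driving
# forces bring it down, but the qualitative point stands: a single
# always-open nS-scale channel overwhelms cellular ion homeostasis.
```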
Extensive research has gone into the design of potential AD treatments to reduce Aβ production or aggregation, but these therapeutics have historically failed in Phase III clinical trials. The ion channel hypothesis of AD provides a novel
avenue for development of AD therapies that may more directly target the underlying pathophysiology of AD. === Channel blockers === Nonspecific Aβ channel blockers including tromethamine (Tris) and Zn2+ have successfully inhibited Aβ cytotoxicity. Least-energy molecular models of the Aβ channel have been used to create polypeptide segments to target the mouth of the Aβ pore, and these selective Aβ channel blockers have also been shown to inhibit Aβ cytotoxicity. Structural modeling of Aβ channels, however, suggests that the channels are highly polymorphic, with the ability to move and change size and shape within the lipid membrane. The broad range of conformations adopted by the Aβ channel makes design of a specific, highly effective Aβ channel blocker difficult. === Membrane hyperpolarization === Indirect methods such as membrane hyperpolarization may help limit the cytotoxic depolarizing effects of Aβ channels. Potassium ATP channel activation has been demonstrated to attenuate Ca2+ influx and reduce oxidative stress in neurons, as well as to improve memory and reduce Aβ and tau pathology in a transgenic AD mouse model. Similarly, drugs that block voltage-gated Ca2+ channels have also been shown to protect neurons from Aβ toxicity. == Other amyloid channels == Several other classes of amyloid proteins also form ion channels, including proteins implicated in type II diabetes mellitus, prion diseases, Parkinson's disease, and Huntington's disease. Consistent with Aβ channels, other amyloid channels have also been reported to be large, non-selective, voltage-independent, heterogeneous, and irreversible. These distinct properties set amyloid channels apart from other ion channels in neurons and facilitate unregulated ionic leakage resulting in cell depolarization, disruption of ion homeostasis, and cell death. 
Further investigation of amyloid proteins and the cytotoxic effects of amyloid channel formation is necessary for development of drug candidates that are able to selectively block amyloid channels or bind them prior
to membrane insertion, an area of research that may prove highly relevant to not just AD but a wide variety of other diseases. == References ==
Cytolysis, or osmotic lysis, occurs when a cell bursts due to an osmotic imbalance that has caused excess water to diffuse into the cell. Water can enter the cell by diffusion through the cell membrane or through selective membrane channels called aquaporins, which greatly facilitate the flow of water. Cytolysis occurs in a hypotonic environment, where water moves into the cell by osmosis and causes its volume to increase to the point where it exceeds the membrane's capacity, and the cell bursts. The presence of a cell wall prevents the membrane from bursting, so cytolysis only occurs in animal and protozoan cells, which do not have cell walls. The reverse process is plasmolysis. == In bacteria == Osmotic lysis would be expected to occur when bacterial cells are treated with a hypotonic solution with added lysozyme, which destroys the bacteria's cell walls. == Prevention == Different cells and organisms have adapted different ways of preventing cytolysis from occurring. For example, the paramecium uses a contractile vacuole, which rapidly pumps out excess water to prevent the build-up of water and the otherwise subsequent lysis. == See also == Cell disruption Crenation Lysis Osmotic pressure Plasmolysis Water intoxication == References == === General references === McClendon, Jesse Francis (1917). Physical Chemistry of Vital Phenomena. University of California: Princeton University Press. p. 240. === Inline citations ===
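The osmotic driving force behind cytolysis can be estimated with the van 't Hoff relation Π = cRT; the intracellular osmolarity below is a typical textbook figure used here as an assumption:

```python
# van 't Hoff estimate of the osmotic pressure difference driving water
# into a cell placed in pure water. Concentration is an assumed typical
# value (~300 mOsm/L), not a measured one.
R = 8.314    # gas constant, J/(mol*K)
T = 310.0    # body temperature, K
c = 300.0    # assumed intracellular osmolarity, mol/m^3 (300 mOsm/L)

pi_pascal = c * R * T           # osmotic pressure, Pa
pi_atm = pi_pascal / 101325.0
print(f"{pi_atm:.1f} atm")      # several atmospheres
```

A pressure difference of several atmospheres is far more than a bare lipid membrane can withstand, which is why wall-less cells in hypotonic media swell until they lyse.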
{ "page_id": 986020, "source": null, "title": "Cytolysis" }
A neochromosome is a chromosome that is not normally found in nature. Cancer-associated neochromosomes are found in some cancer cells. Neochromosomes have also been created using genetic engineering techniques. == Cancer-associated neochromosomes == Cancer-associated neochromosomes are giant supernumerary chromosomes. They harbor the mutations that drive certain cancers (highly amplified copies of key oncogenes, such as MDM2, CDK4, HMGA2). They may be circular or linear chromosomes. They have functional centromeres, and telomeres when linear. They are rare overall, being found in about 3% of cancers, but are common in certain rare cancers. For example, they are found in 90% of parosteal osteosarcomas. Neochromosomes from well- and de-differentiated liposarcoma have been studied at high resolution by isolation (using flow sorting) and sequencing, as well as microscopy. They consist of hundreds of fragments of DNA, often derived from multiple normal chromosomes, stitched together randomly, and contain high levels of DNA amplification (~30-60 copies of some genes). Using statistical inference and mathematical modelling, the process of how neochromosomes initially form and evolve has been made clearer. Fragments of DNA produced following chromothriptic shattering of chromosome 12 undergo DNA repair to form a circular (ring) chromosome. This undergoes hundreds of circular breakage-fusion-bridge cycles, causing random amplification and deletion of DNA with selection for the amplification of key oncogenes. DNA from additional chromosomes is somehow added during this process. Erosion of centromeres can lead to the formation of neocentromeres or the capture of new native centromeres from other chromosomes. The process ends when the neochromosome forms a linear chromosome following the capture of telomeric caps, which can be chromothriptically derived. == References ==
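The interplay of random amplification/deletion and selection described above can be illustrated with a deliberately simplified toy model (not the published inference method): each breakage-fusion-bridge cycle either duplicates or deletes a tracked "oncogene" segment, and selection removes lineages that lose it entirely.

```python
import random

# Toy model of breakage-fusion-bridge (BFB) amplification: a hypothetical
# illustration only, tracking the copy number of one oncogene segment.
# Probabilities, cycle counts, and the copy-number cap are assumptions
# chosen to echo the ~30-60x amplification mentioned in the text.

def bfb_cycles(n_cycles=300, seed=1):
    random.seed(seed)
    copies = 1
    for _ in range(n_cycles):
        if random.random() < 0.5:
            copies *= 2                    # bridge resolves as a duplication
        else:
            copies = copies // 2           # ...or as a deletion
        if copies == 0:
            copies = 1                     # selection: lineages losing the oncogene die out
        copies = min(copies, 60)           # cap near the observed amplification range
    return copies

print(bfb_cycles())  # final oncogene copy number, somewhere in 1..60
```

The point of the sketch is qualitative: unbiased duplication/deletion plus a survival constraint is enough to keep copy number drifting upward, which is the intuition behind selection-driven amplification in BFB cycles.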
{ "page_id": 44370851, "source": null, "title": "Neochromosome" }
Surface science is the study of physical and chemical phenomena that occur at the interface of two phases, including solid–liquid interfaces, solid–gas interfaces, solid–vacuum interfaces, and liquid–gas interfaces. It includes the fields of surface chemistry and surface physics. Some related practical applications are classed as surface engineering. The science encompasses concepts such as heterogeneous catalysis, semiconductor device fabrication, fuel cells, self-assembled monolayers, and adhesives. Surface science is closely related to interface and colloid science. Interfacial chemistry and physics are common subjects for both. The methods are different. In addition, interface and colloid science studies macroscopic phenomena that occur in heterogeneous systems due to peculiarities of interfaces. == History == The field of surface chemistry started with heterogeneous catalysis pioneered by Paul Sabatier on hydrogenation and Fritz Haber on the Haber process. Irving Langmuir was also one of the founders of this field, and the scientific journal on surface science, Langmuir, bears his name. The Langmuir adsorption equation is used to model monolayer adsorption where all surface adsorption sites have the same affinity for the adsorbing species and do not interact with each other. Gerhard Ertl in 1974 described for the first time the adsorption of hydrogen on a palladium surface using a novel technique, low-energy electron diffraction (LEED). Similar studies with platinum, nickel, and iron followed. More recent developments in surface science include the work of Gerhard Ertl, winner of the 2007 Nobel Prize in Chemistry, specifically his investigation of the interaction between carbon monoxide molecules and platinum surfaces. == Chemistry == Surface chemistry can be roughly defined as the study of chemical reactions at interfaces. 
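The Langmuir adsorption equation mentioned above gives the fractional surface coverage as θ = KP / (1 + KP) under its assumptions (identical, non-interacting sites, monolayer-only adsorption). A minimal numerical sketch; the equilibrium constant K and the pressures are illustrative values, not data for any particular adsorbate–surface pair:

```python
# Langmuir isotherm: theta = K*P / (1 + K*P).
# K = 1.0 (in reciprocal pressure units) is an arbitrary illustrative choice.

def langmuir_coverage(pressure, K):
    """Fractional monolayer coverage theta at a given pressure."""
    return K * pressure / (1.0 + K * pressure)

for p in (0.1, 1.0, 10.0, 100.0):
    print(f"P = {p:6.1f}  theta = {langmuir_coverage(p, K=1.0):.3f}")
# Coverage rises linearly at low pressure and saturates toward
# theta -> 1 at high pressure (the monolayer limit).
```

At P = 1/K the surface is exactly half covered, which is a convenient way to read K off experimental isotherm data.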
It is closely related to surface engineering, which aims at modifying the chemical composition of a surface by incorporation of selected elements or functional groups that produce various desired effects or improvements in the properties of
{ "page_id": 68513, "source": null, "title": "Surface science" }
the surface or interface. Surface science is of particular importance to the fields of heterogeneous catalysis, electrochemistry, and geochemistry. === Catalysis === The adhesion of gas or liquid molecules to the surface is known as adsorption. This can be due to either chemisorption or physisorption, and the strength of molecular adsorption to a catalyst surface is critically important to the catalyst's performance (see Sabatier principle). However, it is difficult to study these phenomena in real catalyst particles, which have complex structures. Instead, well-defined single crystal surfaces of catalytically active materials such as platinum are often used as model catalysts. Multi-component materials systems are used to study interactions between catalytically active metal particles and supporting oxides; these are produced by growing ultra-thin films or particles on a single crystal surface. Relationships between the composition, structure, and chemical behavior of these surfaces are studied using ultra-high vacuum techniques, including adsorption and temperature-programmed desorption of molecules, scanning tunneling microscopy, low energy electron diffraction, and Auger electron spectroscopy. Results can be fed into chemical models or used toward the rational design of new catalysts. Reaction mechanisms can also be clarified due to the atomic-scale precision of surface science measurements. === Electrochemistry === Electrochemistry is the study of processes driven through an applied potential at a solid–liquid or liquid–liquid interface. The behavior of an electrode–electrolyte interface is affected by the distribution of ions in the liquid phase next to the interface forming the electrical double layer. Adsorption and desorption events can be studied at atomically flat single-crystal surfaces as a function of applied potential, time and solution conditions using spectroscopy, scanning probe microscopy and surface X-ray scattering. 
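The thickness of the electrical double layer just described is commonly characterized by the Debye screening length. A hedged sketch for a symmetric 1:1 electrolyte (the relative permittivity 78.5 is for water near 25 °C; concentrations are illustrative):

```python
import math

# Debye length for a 1:1 electrolyte:
#   lambda_D = sqrt( eps_r * eps0 * kB * T / (2 * NA * e^2 * c) )
# with concentration c converted from mol/L to mol/m^3.
# Constants are CODATA values.

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB   = 1.380649e-23       # Boltzmann constant, J/K
NA   = 6.02214076e23      # Avogadro constant, 1/mol
E    = 1.602176634e-19    # elementary charge, C

def debye_length_m(ionic_strength_molar, temp_k=298.15, eps_r=78.5):
    """Debye screening length in metres for a symmetric 1:1 electrolyte."""
    n = 2.0 * NA * (ionic_strength_molar * 1000.0)  # summed ion number density
    return math.sqrt(eps_r * EPS0 * KB * temp_k / (n * E * E))

for c in (0.001, 0.01, 0.1):
    print(f"{c:5.3f} M -> lambda_D = {debye_length_m(c) * 1e9:.2f} nm")
# The double layer is thicker in dilute solutions (~9.6 nm at 1 mM,
# ~0.96 nm at 100 mM), consistent with the 0.304/sqrt(I) nm rule of
# thumb for 1:1 electrolytes in water at 25 C.
```

This is why dilute electrolytes are used when long-range double-layer effects are to be probed, while concentrated ones compress the double layer to sub-nanometre scales.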
These studies link traditional electrochemical techniques such as cyclic voltammetry to direct observations of interfacial processes. === Geochemistry === Geological phenomena such as iron cycling and soil contamination
{ "page_id": 68513, "source": null, "title": "Surface science" }