id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
73,481,178 | https://en.wikipedia.org/wiki/Immunosequencing | Immunosequencing, sometimes referred to as repertoire sequencing or Rep-Seq, is a method for analyzing the genetic makeup of an individual's immune system.
Background
In most areas of biology, a single gene codes for one or a few possible proteins. Through V(D)J recombination, a number of organisms take a relatively small number of genes coding for antibodies and T-cell receptors (TCRs) and produce a huge diversity of slightly different antibodies and TCRs. This diversity allows for the recognition of a wide array of antigens. As an immune system reacts to infections and other events, the number of different antibodies and TCRs it contains changes. The makeup and quantity of these proteins are sometimes referred to as an immune repertoire.
Immunosequencing is a technique utilizing multiplex polymerase chain reaction that allows for the sequencing and quantification of the large diversity of antibody and TCR genes composing an individual's immune repertoire.
History
Immunosequencing in its modern context started being discussed in scientific literature in the early 2010s with the advent of more powerful gene sequencing techniques.
References
Molecular biology
Laboratory techniques
DNA profiling techniques | Immunosequencing | [
"Chemistry",
"Biology"
] | 237 | [
"Genetics techniques",
"DNA profiling techniques",
"Biotechnology stubs",
"nan",
"Molecular biology",
"Biochemistry"
] |
67,665,947 | https://en.wikipedia.org/wiki/Crystal%20plasticity | Crystal plasticity is a mesoscale computational technique that takes into account crystallographic anisotropy in modelling the mechanical behaviour of polycrystalline materials. The technique has typically been used to study deformation through the process of slip, however, there are some flavors of crystal plasticity that can incorporate other deformation mechanisms like twinning and phase transformations. Crystal plasticity is used to obtain the relationship between stress and strain that also captures the underlying physics at the crystal level. Hence, it can be used to predict not just the stress-strain response of a material, but also the texture evolution, micromechanical field distributions, and regions of strain localisation. The two widely used formulations of crystal plasticity are the one based on the finite element method known as Crystal Plasticity Finite Element Method (CPFEM), which is developed based on the finite strain formulation for the mechanics, and a spectral formulation which is more computationally efficient due to the fast Fourier transform, but is based on the small strain formulation for the mechanics.
Basic concepts
Crystal plasticity assumes that any deformation applied to a material is accommodated by the process of slip, where dislocation motion occurs on a slip system. Further, Schmid's law is assumed to be valid, where a given slip system is said to be active when the resolved shear stress along the slip system exceeds the critical resolved shear stress of the slip system. Since the applied deformation occurs in the macroscopic sample reference frame and slip occurs in the single crystal reference frame, in order to consistently apply the constitutive relations, an orientation map (e.g. using Bunge Euler angles) is required for each grain in the polycrystal. This orientation information can be used to transform the relevant tensors between the crystal frame of reference and the sample frame of reference. The slip systems are described by the Schmid tensor, which is the tensor product of the slip direction (parallel to the Burgers vector) and the slip plane normal, and which is used to obtain the resolved shear stress in each slip system. Each slip system can undergo a different amount of shearing, and obtaining these shear rates lies at the crux of crystal plasticity. Further, by keeping track of the accumulated strain, the critical resolved shear stress is updated according to various hardening models (e.g. the Voce hardening law), and this recovers the observed macroscopic stress-strain response of the material. The texture evolution is captured by updating the crystallographic orientation of the grains based on how much each grain deforms.
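As a rough illustration of Schmid's law, the following NumPy sketch computes the resolved shear stress on a single slip system from the Schmid tensor; the slip system, stress state, and critical resolved shear stress are illustrative values rather than a calibrated model:

```python
import numpy as np

def resolved_shear_stress(stress, slip_dir, plane_normal):
    """Resolved shear stress on one slip system via the Schmid tensor.

    stress       : 3x3 Cauchy stress tensor in the sample frame
    slip_dir     : slip direction (parallel to the Burgers vector)
    plane_normal : slip plane normal
    """
    s = slip_dir / np.linalg.norm(slip_dir)
    n = plane_normal / np.linalg.norm(plane_normal)
    schmid = np.outer(s, n)              # Schmid tensor  m = s (x) n
    return np.tensordot(stress, schmid)  # tau = sigma : m (double contraction)

# Example: 100 MPa uniaxial tension along z, FCC slip system (111)[-101]
sigma = np.diag([0.0, 0.0, 100.0])
tau = resolved_shear_stress(sigma, np.array([-1.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0]))
crss = 30.0  # assumed critical resolved shear stress, MPa
print(f"tau = {tau:.1f} MPa -> slip system active: {abs(tau) > crss}")  # tau ~ 40.8 MPa
```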
Further reading
References
Continuum mechanics
Deformation (mechanics) | Crystal plasticity | [
"Physics",
"Materials_science",
"Engineering"
] | 536 | [
"Continuum mechanics",
"Deformation (mechanics)",
"Classical mechanics stubs",
"Materials science",
"Classical mechanics"
] |
67,672,745 | https://en.wikipedia.org/wiki/Anaritide | Anaritide (also known as human atrial natriuretic peptide [102-126]) is a synthetic analogue of atrial natriuretic peptide (ANP).
Structure
Anaritide has the following primary structure:
RSSCFGGRMDRIGAQSGLGCNSFRY
or
H-Arg-Ser-Ser-Cys-Phe-Gly-Gly-Arg-Met-Asp-Arg-Ile-Gly-Ala-Gln-Ser-Gly-Leu-Gly-Cys-Asn-Ser-Phe-Arg-Tyr-OH
This structure is identical to residues 102-126 of human preproANP. In comparison, active human ANP comprises residues 99-126 of human preproANP.
Medical uses
Anaritide has been investigated as a potential therapy for acute tubular necrosis but was shown not to improve the dialysis-free survival of these patients. It also appears to exacerbate proteinuria and natriuresis in patients with nephrotic syndrome.
References
Peptides | Anaritide | [
"Chemistry"
] | 231 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |
70,562,226 | https://en.wikipedia.org/wiki/Shabnam%20Akhtari | Shabnam Akhtari is a Canadian-Iranian mathematician specializing in number theory, and in particular in Diophantine equations, Thue equations, and the geometry of numbers. She is a professor of mathematics at Pennsylvania State University.
Education and career
Akhtari graduated from the Sharif University of Technology, in Tehran, in 2002 with a bachelor's degree in mathematics. She went to the University of British Columbia for graduate study in mathematics, completing her Ph.D. there in 2008. Her dissertation, Thue Equations and Related Topics, was supervised by Mike Bennett.
She was a postdoctoral researcher at Queen's University at Kingston in Canada, the Max Planck Institute for Mathematics in Germany and the Centre de Recherches Mathématiques in Canada before joining the University of Oregon faculty as an assistant professor of mathematics in 2012. She was tenured as an associate professor there in 2018.
Recognition
Akhtari is the 2021–2022 winner of the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics.
References
External links
Home page
Year of birth missing (living people)
Living people
21st-century Iranian mathematicians
Iranian women mathematicians
Sharif University of Technology alumni
University of British Columbia alumni
University of Oregon faculty
Number theorists
21st-century Iranian women scientists | Shabnam Akhtari | [
"Mathematics"
] | 255 | [
"Number theorists",
"Number theory"
] |
70,564,555 | https://en.wikipedia.org/wiki/Ultrasound-switchable%20fluorescence%20imaging | Ultrasound-switchable fluorescence (USF) imaging is a deep optics imaging technique. In last few decades, fluorescence microscopy has been highly developed to image biological samples and live tissues. However, due to light scattering, fluorescence microscopy is limited to shallow tissues (about 1 mm). Since fluorescence is characterized by high contrast, high sensitivity, and low cost which is crucial to investigate deep tissue information, developing fluorescence imaging technique with high depth-to-resolution ratio would be promising.. Recently, ultrasound-switchable fluorescence imaging has been developed to achieve high signal-to-noise ratio (SNR) and high spatial resolution imaging without sacrificing image depth.
Basic principle
The theoretical model was first proposed by Yuan in 2009, who developed an ultrasound-modulated fluorescence technique based on a fluorophore-quencher-labeled microbubble system that can confine the fluorescent emission to the ultrasound focal zone, increasing the spatial resolution and SNR of the image. In terms of the USF imaging principle, a short ultrasound pulse is applied to activate fluorescent emission inside the ultrasound focal volume without triggering fluorescence outside of it. Thus, the fluorophore distribution in the ultrasound focal zone can be distinguished and imaged by scanning the target. Two basic elements are required in the USF imaging technique: the first is a unique USF contrast agent whose fluorescence emission can be controlled by a focused ultrasound wave; the second is a sensitive USF imaging system that can detect the signal and suppress the background noise.
Imaging contrast agents
At present, two types of contrast agents have been developed.
Fluorophore-quencher-labeled microbubble
The first type is the fluorophore-quencher-labeled microbubble, first proposed by Yuan in 2009 and further developed by Liu et al. in 2014. The basic principle of this type of contrast agent is to change the fluorophore concentration on the microbubble surface. In 2000, Morgan et al. found that the negative pressure phase of an ultrasound wave can make a microbubble several times bigger. As a result, the distance between the quenchers and fluorophores on the microbubble surface becomes larger (the surface concentration of the fluorophore is reduced), which means the quenching efficiency is greatly decreased and the fluorophore shows high emission efficiency (ON state). Microbubbles outside the ultrasound focal zone keep the same small size during the whole process, so the quenching efficiency there remains high enough to suppress the fluorophore emission (OFF state).
Fluorophore-labeled thermosensitive polymers or fluorophore-encapsulated nanoparticles (NPs)
The second type of contrast agent is the fluorophore-labeled thermosensitive polymer or fluorophore-encapsulated nanoparticle (NP). The critical part of this kind of agent is the combination of a thermosensitive carrier and an environment-sensitive (usually polarity-sensitive) fluorophore labeled on it. When the environment temperature is below a certain threshold (Tth1), the polarity of the carrier is such that the fluorophore shows quite low emission efficiency (OFF state). When focused ultrasound is applied, the focal zone is heated above a second temperature threshold (Tth2) and the structure of the thermosensitive carrier changes, which changes its polarity too; therefore, the polarity-sensitive fluorophore is switched ON. During the whole process, fluorophores outside of the ultrasound focal zone remain switched OFF because the temperature there stays below Tth1.
USF imaging system
The purpose of the USF imaging system is to sensitively detect the USF signal and dramatically suppress the background noise.
The imaging system first dramatically increases sensitivity by adopting a lock-in amplifier and a cooled photomultiplier tube (PMT);
Then the system uses a correlation algorithm to distinguish the USF signal from the background noise;
Also, it detects only the change of the fluorescence signal caused by the ultrasound. The intensity-modulated excitation laser keeps running all the time, and the ultrasound-induced temperature rise changes the amplitude of the fluorescence signal at the modulation frequency; after mixing with a phase-locked reference signal, the lock-in amplifier reports the USF signal (a minimal sketch of this lock-in scheme follows this list);
The system can also reduce laser leakage by using several emission filters.
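A minimal sketch of the lock-in detection idea referenced above, with invented numbers (the modulation frequency, noise level, and signal amplitude are not from any published system): a weak amplitude at the modulation frequency is recovered from noise far larger than the signal.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_mod, t_total = 100_000.0, 1_000.0, 1.0     # sample rate (Hz), modulation (Hz), duration (s)
t = np.arange(0.0, t_total, 1.0 / fs)

usf_amplitude = 0.02                             # tiny ultrasound-induced modulation
measured = usf_amplitude * np.sin(2 * np.pi * f_mod * t) \
           + 0.5 * rng.standard_normal(t.size)   # buried in broadband background

# Lock-in: multiply by in-phase and quadrature references, then low-pass (here: mean)
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
x = 2 * np.mean(measured * ref_i)
y = 2 * np.mean(measured * ref_q)
print(f"recovered amplitude ~ {np.hypot(x, y):.3f} (true value {usf_amplitude})")
```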
Signal to Noise ratio
USF imaging can increase the SNR by differentiating signal photons from background photons. The background photons may come from autofluorescence, light scattering, imperfect contrast agents and laser leakage. To reduce autofluorescence, an NIR fluorophore can be adopted, since biological tissue components produce the least autofluorescence in the NIR region. According to Rayleigh scattering theory:
$I(r,\theta) \propto \frac{1}{\lambda^4}$
Light with a longer wavelength scatters less (for example, doubling the wavelength reduces Rayleigh scattering by a factor of 16), so the part of the background noise that results from light scattering can be reduced. Also, by adopting ultrasound to control the fluorescent emission, the signal fluorophores can be easily differentiated from the background fluorophores. As mentioned above, the laser leakage can be minimized by emission filters.
Spatial resolution
When using the second type of contrast agent (fluorophore-labeled thermosensitive NPs), the spatial resolution can be further improved based on two mechanisms.
Nonlinear acoustic effect
Acoustic diffraction is the main obstacle to increasing the spatial resolution. By controlling the ultrasound exposure power, a nonlinear acoustic effect can occur: part of the acoustic energy at the fundamental frequency is transferred to higher harmonic frequency components in the focal volume, which can be more tightly focused. This is the major reason the nonlinear acoustic effect can reduce the size of the ultrasound-induced temperature focus.
Thermal confinement
The spatial resolution of the USF technique is determined by the size of the region where the fluorophores can be switched ON. Only where the temperature is above the threshold can the fluorophores be switched ON. Due to thermal diffusion and conduction, the ultrasound-induced thermal energy needs to be confined within the focal volume by limiting the ultrasound exposure time; the region where the fluorophores can be switched ON is then usually smaller than the actual focal size of the ultrasound.
Applications
The USF technique can be combined with a light-pulse-delay technique and a photon counting technique to achieve high-resolution imaging in a deep turbid medium. In 2016, Cheng et al. achieved high-resolution fluorescence imaging in centimeter-deep tissue phantoms with high SNR and high sensitivity; they synthesized and characterized an NIR, extremely environment-sensitive fluorophore, ADP(CA)2, and a family of USF contrast agents based on this dye. In a more recent study, in 2019, Yao et al. first achieved in vivo ultrasound-switchable fluorescence imaging in mice with high resolution. ICG-encapsulated PNIPAM nanoparticles, which are quite stable in biological environments, were adopted as contrast agents. Compared with CT imaging results, they found USF imaging maintained high sensitivity and specificity in deep tissue.
References
Ultrasound
Optical microscopy techniques
Fluorescence | Ultrasound-switchable fluorescence imaging | [
"Chemistry"
] | 1,505 | [
"Luminescence",
"Fluorescence"
] |
70,569,824 | https://en.wikipedia.org/wiki/Structured%20illumination%20light%20sheet%20microscopy | Structured illumination light sheet microscopy (SI-LSM) is an optical imaging technique used for achieving volumetric imaging with high temporal and spatial resolution in all three dimensions. It combines the ability of light sheet microscopy to maintain spatial resolution throughout relatively thick samples with the higher axial and spatial resolution characteristic of structured illumination microscopy. SI-LSM can achieve lateral resolution below 100 nm in biological samples hundreds of micrometers thick.
SI-LSM is most often used for fluorescent imaging of living biological samples, such as cell cultures. It is particularly useful for longitudinal studies, where high-rate imaging must be performed over long periods of time without damaging the sample. The two methods most used for fluorescent imaging of 3D samples – confocal microscopy and widefield microscopy – both have significant drawbacks for this type of application. In widefield microscopy, both in-focus light from the plane of interest and out-of-focus light from the rest of the sample are acquired together, creating the “missing cone problem” which makes high resolution imaging difficult. Although confocal microscopy largely solves this problem by using a pinhole to block unfocused light, this technique also inevitably blocks useful signal, which is particularly detrimental in fluorescent imaging when the signal is already very weak. In addition, both widefield and confocal microscopy illuminate the entirety of the sample throughout imaging, which leads to problems with photobleaching and phototoxicity in some samples. While light-sheet microscopy alone can address most of these issues, its achieved resolution is still fundamentally limited by the diffraction of light and it is unable to achieve super-resolution.
SI-LSM works by using a patterned rather than uniform light sheet to illuminate a single plane of a volume being imaged. In this way, it maintains the many benefits of light-sheet microscopy while achieving the high resolution of structured illumination microscopy.
Background and Theory
The theory behind SI-LSM is best understood by considering the separate development of structured illumination and light sheet microscopy.
Structured Illumination Microscopy
Structured illumination microscopy (SIM) is a method of super-resolution microscopy which is performed by acquiring multiple images of the same sample under different patterns of illumination, then computationally combining these images to achieve a single reconstruction with up to 2x improvement over the diffraction limited lateral resolution. The theory was first proposed and implemented in a 1995 paper by John M. Guerra in which a silicon grating with 50 nm lines and spaces was resolved with 650 nm wavelength (in air) illumination structured by a transparent replica proximal to said grating. The name “structured illumination microscopy” was coined in 2000 by M.G.L. Gustafsson. SIM takes advantage of the “Moiré Effect”, which occurs when two patterns are multiplicatively superimposed. The superimposition causes “Moiré Fringes” to appear, which are coarser than either original pattern but still contain information about the high frequency patterns which would otherwise not be visible.
The theory behind SIM is best understood in the Fourier or frequency domain. In general, imaging systems can only resolve frequencies below the diffraction limit. Thus, in the Fourier domain, all recorded frequencies from the imaged sample reside within a circle of a fixed radius; any frequencies outside this limit cannot be resolved. However, the frequency spectrum can be shifted by imaging the sample with patterned illumination. Most often, the pattern is a 1D sinusoidal gradient. Because the Fourier transform of a sinusoid is a pair of shifted delta functions, the transform of this pattern consists of three delta functions: one at zero frequency and two corresponding to the positive and negative frequency components of the sinusoid. When the target is illuminated using this pattern, the target and illumination pattern are multiplicatively superimposed, which means the Fourier transform of the resulting image is the convolution of the individual transforms of the target and the illumination pattern. Convolving any function with a delta function shifts the center of the original function to the location of the delta function. Thus, in this situation, the frequency spectrum of the target is shifted, and frequencies that were previously too high to resolve now lie within the circle of resolvable frequencies. The result is that for a single image acquisition with SIM, the frequency components from three separate regions in the Fourier domain (corresponding to the center and the positive and negative shifts) are all captured together. Finally, because rotation in the spatial domain results in the same rotation in the Fourier domain, high frequencies over the full 360° can be captured by rotating the illumination pattern; for example, by acquiring four separate images and rotating the illumination pattern by 45° in between each acquisition.
Once all images have been captured, a single final image can be computationally reconstructed. Using this technique, resolution can be improved up to 2x over the diffraction limit. This 2x limit is imposed because the illumination pattern itself is still diffraction limited.
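The frequency-shifting argument can be checked numerically. The following sketch (illustrative, not from the article) multiplies a random image by a 1D sinusoidal pattern and verifies that the spectrum of the product is the sum of three shifted copies of the image spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
sample = rng.random((n, n))                      # stand-in for a fluorescent sample

fx = 32                                          # pattern frequency (cycles per image)
pattern = 0.5 * (1 + np.cos(2 * np.pi * fx * np.arange(n) / n))[None, :]

spec_sample = np.fft.fft2(sample)
spec_moire = np.fft.fft2(sample * pattern)       # multiplicative superimposition

# The cosine transforms to delta functions at 0 and +/- fx, so convolution with it
# produces a weighted sum of three (circularly) shifted copies of the spectrum:
expected = (0.5 * spec_sample
            + 0.25 * np.roll(spec_sample, fx, axis=1)
            + 0.25 * np.roll(spec_sample, -fx, axis=1))
print(np.allclose(spec_moire, expected))         # True
```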
The concepts behind 2D SIM can be expanded to 3D volumetric imaging. By using three mutually coherent beams of excitation light, interference patterns with multiple frequency components can be created in the imaged sample. This ultimately makes it possible to perform 3D reconstructions with up to 2x improved resolution along all three axes. However, due to the strong scattering coefficient of biological tissues, this theoretical resolution can only be achieved in samples thinner than about 10 um. Beyond that, the scattering leads to an excess of background signal which makes accurate reconstruction impossible.
Light Sheet Microscopy
Light sheet microscopy (LSM) was developed to allow for fine optical sectioning of thick biological samples without the need for physical sectioning or clearing, which are both time consuming and detrimental to in-vivo imaging. While most fluorescent imaging techniques use aligned illumination and detection axes, LSM utilizes orthogonal axes. A focused light sheet is used to illuminate the sample from the side, while the fluorescent signal is detected from above. This both eliminates the “cone problem” of widefield microscopy by eliminating out-of-focus contributions from planes not being actively imaged and reduces the impact of photobleaching since the entire sample is not illuminated throughout imaging. In addition, because the sample is illuminated from the side, the focus of the illumination light is not depth-dependent, making volumetric imaging of biological samples far more feasible. A major ongoing challenge in LSM is in shaping the light sheet. In general, there is a tradeoff between the thickness of the light sheet at the optical axis (which largely determines axial resolution) and the field-of-view over which the light sheet maintains adequate thickness. This problem can be partially addressed by the added resolution from SI-LSM.
Techniques
SI-LSM can be divided into two main categories. Optical Sectioning SI-LSM is the most common approach and improves axial resolution by further reducing the impact of un-focused background signal. Super-resolution SI-LSM uses the illumination and reconstruction techniques of 2D SIM to achieve super-resolution in 3D samples.
Optical Sectioning SI-LSM
Optical sectioning SI-LSM (OS-SI-LSM) was first described in a 1997 paper by M.A. Neil et al. Rather than achieving super-resolution, this technique uses the ideas behind structured illumination to improve axial resolution by removing background haze from layers other than where the illuminating light sheet is most focused. While there are several approaches for achieving this, the most common approach is known as “three-phase” SIM, which will be described here.
It is shown in the Neil paper that the signal acquired by imaging a target with a grid illumination pattern of phase $\phi$ can be represented by the following equation:

$I = I_0 + I_c \cos\phi + I_s \sin\phi$

Here, $I_0$ is the background signal, while $I_c$ and $I_s$ are signals from the region of the target illuminated by the cosine and sine components of the grid. It is also shown that an in-focus image of the plane of interest can be reconstructed using the equation:

$I_p = \sqrt{I_c^2 + I_s^2}$

This can be achieved by acquiring three separate images $I_1$, $I_2$, $I_3$ under the grid illumination conditions, shifting the phase of the grid by 120° between each acquisition. The desired 2D image can then be reconstructed using the equation:

$I_p = \frac{\sqrt{2}}{3}\sqrt{(I_1 - I_2)^2 + (I_1 - I_3)^2 + (I_2 - I_3)^2}$
This creates a 2D image containing only information from the most focused region of the grid illumination pattern. If this pattern is created using a light sheet, the sheet can then be scanned in the axial direction to generate a full 3D reconstruction of a sample. The primary drawback of using this approach for reducing background signal is that it ultimately relies on subtracting out the shared background signal between images. Some in-focus signal will inevitably be subtracted alongside the background haze. This results in an overall reduction of signal, which can be detrimental in low-signal fluorescent imaging. Nevertheless, this technique is the most common use of SI-LSM and has shown improved axial resolution over LSM alone.
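A minimal sketch of the three-phase reconstruction under the assumptions above (the grid frequency, haze level, and test image are invented for the example):

```python
import numpy as np

def three_phase_section(i1, i2, i3):
    """Optically sectioned image from three acquisitions with 120-degree phase steps."""
    return np.sqrt((i1 - i2) ** 2 + (i1 - i3) ** 2 + (i2 - i3) ** 2) * np.sqrt(2) / 3

# Synthetic check: grid-modulated in-focus signal plus a phase-independent haze
n = 128
rng = np.random.default_rng(2)
in_focus = rng.random((n, n))
haze = 0.7                                       # out-of-focus background
x = np.arange(n)[None, :]
images = [haze + in_focus * (1 + np.cos(2 * np.pi * 8 * x / n + phase)) / 2
          for phase in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]

recovered = three_phase_section(*images)
print(np.allclose(recovered, in_focus / 2))      # True: modulated part kept, haze removed
```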
Super-resolution SI-LSM
Super-resolution SI-LSM (SR-SI-LSM) uses the techniques of 2D or 3D SIM while using a light sheet as the illumination source, achieving the spatial resolution of SIM alongside the depth-independent imaging and low photobleaching of LSM. In the most common application, a light sheet is used to create a 1D sinusoidal pattern at a single plane of the 3D target sample. The pattern is then rotated multiple times at this single plane to acquire enough images for a high-resolution 2D reconstruction. The light sheet is scanned in the axial direction and the process is repeated until there are enough 2D images for a full 3D reconstruction. In general, this approach demonstrates not only improved resolution but also improved SNR over OS-SI-LSM, because no information is discarded in the reconstruction. In addition, although the theoretical resolution of SR-SI-LSM is slightly lower than that of 3D SIM, at depths >10 um this technique shows improved performance over 3D SIM due to the depth-independent focusing of illumination light characteristic of LSM.
Implementation
A major challenge in SI-LSM is engineering systems which are physically capable of generating structured patterns in light sheets. The three main approaches for accomplishing this are using interfering light sheets, digital LSM, and spatial light modulators.
With interfering light sheets, two coherent counterpropagating sheets are sent into the sample. The interference pattern between these sheets creates the desired illumination pattern, which can be rotated and scanned using rotating mirrors to deflect the sheets. Additional flexibility can be added by using digital light-sheet microscopy to generate the illumination patterns. In digital LSM, the light sheet is created by rapidly scanning a laser beam through the sample. This allows for fine control over the specific illumination pattern by modulating the intensity of the laser as it scans. This technique has been used to create systems capable of multiple types of light sheet microscopy in addition to SI-LSM. Finally, spatial light modulators can be used to electronically control the light patterns, which has the advantage of allowing for very fine control of and fast switching between patterns.
In addition, much of the recent work around SI-LSM focuses on combining the approach with other techniques for deep imaging in biological tissues. For instance, a 2021 paper demonstrated the use of SI-LSM with NIR-II illumination to improve resolution of transcranial mouse brain imaging by ~1.7x with a penetration depth of ~750 um and almost 16x improvement in the signal to background ratio. Other promising directions include combining SIM with other techniques for shaping the light sheets in LSM, combining SI-LSM with two-photon excitation, or using non-linear fluorescence to further push the resolution limits.
References
Microscopy
Optical imaging
Volumetric instruments | Structured illumination light sheet microscopy | [
"Chemistry",
"Technology",
"Engineering"
] | 2,391 | [
"Volumetric instruments",
"Measuring instruments",
"Microscopy"
] |
70,571,122 | https://en.wikipedia.org/wiki/Isolume | Isolumes are the preferred light zone of an organism in the ocean in the preferendum hypothesis. The preferendum hypothesis suggests that some organisms living in the mesopelagic zone, change their depth as light levels change in order to remain in their isolume. Organisms prefer to remain within a certain light level for a variety of reason. Some organisms, like Sergestidae, Euphausiid, and Palinuridae, use bioluminescence to camouflage their existence from predators and they change their depth as conditions change to stay in their isolume. Zooplankton in Arctic and Antarctic regions will remain at the same depth for months at a time due to the long winters with little to no daylight.
Organisms of the same species do not always exist in the same isolume and numerous factors can change what light levels an organism prefers to live within including age, sex, and competition for food.
References
Oceanography | Isolume | [
"Physics",
"Environmental_science"
] | 194 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
70,571,129 | https://en.wikipedia.org/wiki/Finite%20sphere%20packing | In mathematics, the theory of finite sphere packing concerns the question of how a finite number of equally-sized spheres can be most efficiently packed. The question of packing finitely many spheres has only been investigated in detail in recent decades, with much of the groundwork being laid by László Fejes Tóth.
The similar problem for infinitely many spheres has a longer history of investigation, of which the Kepler conjecture is the best known result. Atoms in crystal structures can be simplistically viewed as closely-packed spheres and treated as infinite sphere packings thanks to their large number.
Sphere packing problems are distinguished between packings in given containers and free packings. This article primarily discusses free packings.
Packing and convex hulls
In general, a packing refers to any arrangement of a set of spatially-connected, possibly differently-sized or differently-shaped objects in space such that none of them overlap. In the case of the finite sphere packing problem, these objects are restricted to equally-sized spheres. Such a packing of spheres determines a specific volume known as the convex hull of the packing, defined as the smallest convex set that includes all the spheres.
Packing shapes
There are many possible ways to arrange spheres, which can be classified into three basic groups: sausage, pizza, and cluster packing.
Sausage packing
An arrangement in which the midpoint of all the spheres lie on a single straight line is called a sausage packing, as the convex hull has a sausage-like shape. An approximate example in real life is the packing of tennis balls in a tube, though the ends must be rounded for the tube to coincide with the actual convex hull.
Pizza packing
If all the midpoints lie on a plane, the packing is a pizza packing. Approximate real-life examples of this kind of packing include billiard balls being packed in a triangle as they are set up. This holds for packings in three-dimensional Euclidean space.
Cluster packing
If the midpoints of the spheres are arranged throughout 3D space, the packing is termed a cluster packing. Real-life approximations include fruit being packed in multiple layers in a box.
Relationships between types of packing
By the given definitions, any sausage packing is technically also a pizza packing, and any pizza packing is technically also a cluster packing. In the more general case of $d$ dimensions, "sausages" refer to one-dimensional arrangements, "clusters" to full $d$-dimensional arrangements, and "pizzas" to arrangements with an in-between number of dimensions.
One or two spheres always make a sausage. With three, a pizza packing (that is not also a sausage) becomes possible, and with four or more, clusters (that are not also pizzas) become possible.
Optimal packing
The empty space between spheres varies depending on the type of packing. The amount of empty space is measured in the packing density, which is defined as the ratio of the volume of the spheres to the volume of the total convex hull. The higher the packing density, the less empty space there is in the packing and thus the smaller the volume of the hull (in comparison to other packings with the same number and size of spheres).
To pack the spheres efficiently, it might be asked which packing has the highest possible density. It is easy to see that such a packing should have the property that the spheres lie next to each other, that is, each sphere should touch another on the surface. A more exact phrasing is to form a graph which assigns a vertex to each sphere and connects vertices with edges whenever the surfaces of the corresponding spheres touch. Then the highest-density packing must satisfy the property that the corresponding graph is connected.
Sausage catastrophe
With three or four spheres, the sausage packing is optimal. It is believed that this holds true for any number of spheres up to 55, along with 57, 58, 63, and 64. For 56 spheres and for 59 or more (except 63 and 64), a cluster packing exists that is more efficient than the sausage packing, as shown in 1992 by Jörg Wills and Pier Mario Gandini. It remains unknown what these most efficient cluster packings look like. For example, in the case of 56 spheres, it is known that the optimal packing is not a tetrahedral packing like the classical packing of cannon balls, but is likely some kind of octahedral shape.
The sudden transition in optimal packing shape is jokingly known by some mathematicians as the sausage catastrophe (Wills, 1985). The designation catastrophe comes from the fact that the optimal packing shape suddenly shifts from the orderly sausage packing to the relatively unordered cluster packing and vice versa as one goes from one number to another, without a satisfying explanation as to why this happens. Even so, the transition in three dimensions is relatively tame; in four dimensions the sudden transition is conjectured to happen at around 377,000 spheres.
In three dimensions, the optimal packing is always either a sausage or a cluster, and never a pizza. It is an open problem whether this holds true for all dimensions. This result only concerns spheres and not other convex bodies; in fact, Gritzmann and Arhelger observed that for any dimension there exists a convex shape for which the closest packing is a pizza.
Example of the sausage packing being non-optimal
In the following section it is shown that for 455 spheres the sausage packing is non-optimal, and that there instead exists a special cluster packing that occupies a smaller volume.
The volume of the convex hull of a sausage packing with $n$ spheres of radius $r$ is calculable with elementary geometry. The middle part of the hull is a cylinder of radius $r$ and length $2r(n-1)$, while the caps at the ends are half-spheres of radius $r$. The total volume is therefore given by

$V_S = \pi r^2 \cdot 2r(n-1) + \tfrac{4}{3}\pi r^3 = \pi r^3\left(2(n-1) + \tfrac{4}{3}\right)$.
Similarly, it is possible to find the volume of the convex hull of a tetrahedral packing, in which the spheres are arranged so that they form a tetrahedral shape, which only leads to completely filled tetrahedra for specific numbers of spheres. If there are $k$ spheres along one edge of the tetrahedron, the total number of spheres is given by

$n = \frac{k(k+1)(k+2)}{6}$.
Now the inradius $\rho$ of a tetrahedron with side length $a$ is

$\rho = \frac{a}{2\sqrt{6}}$.
From this we have

$a = 2\sqrt{6}\,\rho$.
The volume of the tetrahedron is then given by the formula

$V_T = \frac{\sqrt{2}}{12}a^3$.
In the case of many spheres being arranged inside a tetrahedron, the length of an edge increases by twice the radius of a sphere for each new layer, meaning that for $k$ layers the side length becomes

$a_k = 2r(k-1) + 2\sqrt{6}\,r$

(the constant term arises because each face of the enclosing tetrahedron must lie a distance $r$ outside the outermost sphere centers, which increases the inradius by $r$ and therefore the side length by $2\sqrt{6}\,r$).
Substituting this value into the volume formula for the tetrahedron, and noting that the convex hull of the spheres lies inside the tetrahedron, the volume of the convex hull must be smaller than that of the tetrahedron itself, so that

$V < \frac{\sqrt{2}}{12}\left(2r(k-1) + 2\sqrt{6}\,r\right)^3$.
Taking the number of spheres in a tetrahedron of $k$ layers and substituting it into the earlier expression, the volume of the convex hull of a sausage packing with the same number of spheres is

$V_S = \pi r^3\left(\frac{k(k+1)(k+2)}{3} - \frac{2}{3}\right)$.
For $k = 13$, which translates to $n = 455$ spheres, the coefficient in front of $r^3$ is about 2845 for the tetrahedral packing and about 2856 for the sausage packing, which implies that for this number of spheres the tetrahedron is more closely packed.
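The comparison can be checked numerically; a minimal sketch, using the formulas above with the radius factored out:

```python
import math

# Verify the coefficient comparison above (unit sphere radius, r = 1, so the
# printed volumes are the coefficients in front of r^3).
def n_spheres(k):
    """Tetrahedral number: spheres in a tetrahedral packing with k layers."""
    return k * (k + 1) * (k + 2) // 6

def sausage_hull_volume(n):
    """Convex hull of a sausage of n unit spheres: cylinder plus two half-sphere caps."""
    return math.pi * (2 * (n - 1) + 4 / 3)

def tetrahedron_volume(k):
    """Enclosing tetrahedron for k layers, side length 2(k-1) + 2*sqrt(6)."""
    a = 2 * (k - 1) + 2 * math.sqrt(6)
    return math.sqrt(2) / 12 * a ** 3

k = 13
n = n_spheres(k)  # 455
print(f"n = {n}: tetrahedron {tetrahedron_volume(k):.1f} < sausage {sausage_hull_volume(n):.1f}")
# -> n = 455: tetrahedron 2844.3 < sausage 2856.7
```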
It is also possible, with some more effort, to derive the exact formula for the volume of the tetrahedral convex hull, which involves subtracting the excess volume at the corners and edges of the tetrahedron. This allows the sausage packing to be proved non-optimal for smaller values of $k$ and therefore of $n$.
Sausage conjecture
The term sausage comes from the mathematician László Fejes Tóth, who posited the sausage conjecture in 1975, concerning a generalized version of the problem for spheres, convex hulls, and volumes in higher dimensions. A generalized sphere in $d$ dimensions is a $d$-dimensional body in which every boundary point lies equally far away from the midpoint. Fejes Tóth's sausage conjecture states that from dimension $d = 5$ upwards it is always optimal to arrange the spheres along a straight line; that is, the sausage catastrophe no longer occurs once we go above four dimensions. The overall conjecture remains open. The best results so far are those of Ulrich Betke and Martin Henk, who proved the conjecture for dimensions 42 and above.
Parametric density and related methods
While it may be proved that the sausage packing is not optimal for 56 spheres, and that there must be some other packing that is optimal, it is not known what the optimal packing looks like. It is difficult to find the optimal packing as there is no "simple" formula for the volume of an arbitrarily shaped cluster. Optimality (and non-optimality) is shown through appropriate estimates of the volume, using methods from convex geometry, such as the Brunn-Minkowski inequality, mixed Minkowski volumes and Steiner's formula. A crucial step towards a unified theory of both finite and infinite (lattice and non-lattice) sphere packings was the introduction of parametric densities by Jörg Wills in 1992. The parametric density takes into account the influence of the edges of the packing.
The definition of density used earlier concerns the volume of the convex hull of the spheres (or convex bodies) $K$:

$\delta(K, C_n) = \frac{n\,V(K)}{V(C_n + K)}$

where $C_n$ is the convex hull of the midpoints of the $n$ spheres (instead of the sphere, an arbitrary convex body can also be taken for $K$). For a linear arrangement (sausage), this convex hull is a line segment through all the midpoints of the spheres. The plus sign in the formula refers to Minkowski addition of sets, so that $V(C_n + K)$ is the volume of the convex hull of the spheres.
This definition works in two dimensions, where László Fejes Tóth, Claude Rogers and others used it to formulate a unified theory of finite and infinite packings. In three dimensions, Wills gives a simple argument that such a unified theory is not possible based on this definition: the densest finite arrangement of coins (flat cylinders) in three dimensions is the sausage, a stack of coins that fills its convex hull completely, with $\delta = 1$. However, the optimal infinite arrangement is a hexagonal arrangement with $\delta = \pi/\sqrt{12} \approx 0.9069$, so the infinite value cannot be obtained as a limit of finite values. To solve this issue, Wills introduces a modification to the definition by adding a positive parameter $\rho$:
$\delta_\rho(K, C_n) = \frac{n\,V(K)}{V(C_n + \rho K)}$

The parameter $\rho$ allows the influence of the edges to be considered (giving the convex hull a certain thickness). This is then combined with methods from the theory of mixed volumes and the geometry of numbers of Hermann Minkowski.
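As an illustration, for a sausage of $n$ spheres of radius $r$ in three dimensions the Minkowski sum $C_n + \rho K$ is a cylinder of radius $\rho r$ and length $2r(n-1)$ with two half-ball caps, so the parametric density has a closed form. A minimal sketch, with invented parameter values:

```python
import math

# Parametric density of a 3D sausage of n spheres of radius r, following the
# definition above: delta_rho = n*V(K) / V(C_n + rho*K).
def sausage_parametric_density(n, rho, r=1.0):
    hull = (math.pi * (rho * r) ** 2 * 2 * r * (n - 1)  # cylinder part
            + 4 / 3 * math.pi * (rho * r) ** 3)         # two half-ball caps
    return n * (4 / 3) * math.pi * r ** 3 / hull

for rho in (0.5, 1.0, 2.0):  # illustrative parameter values
    print(rho, round(sausage_parametric_density(100, rho), 3))
# rho = 1 recovers the ordinary density of the sausage; small rho inflates it.
```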
For each dimension there are parameter values $\rho_s$ and $\rho_c$ such that for $\rho < \rho_s$ the sausage is the densest packing (for all numbers $n$ of spheres), while for $\rho > \rho_c$ and sufficiently large $n$ a cluster is densest. These parameters are dimension-specific. In two dimensions $\rho_s = \rho_c$, so that there is a sharp transition from sausages to clusters (sausage catastrophe) as the parameter passes this value.
There holds an inequality:
where the volume of the unit ball in dimensions is . For , we have and it is predicted that this holds for all dimensions, in which case the value of can be found from that of .
References
Euclidean solid geometry | Finite sphere packing | [
"Physics"
] | 2,140 | [
"Spacetime",
"Space",
"Euclidean solid geometry"
] |
70,573,417 | https://en.wikipedia.org/wiki/Leray%E2%80%93Schauder%20degree | In mathematics, the Leray–Schauder degree is an extension of the degree of a base point preserving continuous map between spheres or equivalently to boundary-sphere-preserving continuous maps between balls to boundary-sphere-preserving maps between balls in a Banach space , assuming that the map is of the form where is the identity map and is some compact map (i.e. mapping bounded sets to sets whose closure is compact).
The degree was invented by Jean Leray and Juliusz Schauder to prove existence results for partial differential equations.
References
Topology | Leray–Schauder degree | [
"Physics",
"Mathematics"
] | 115 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
70,576,470 | https://en.wikipedia.org/wiki/Colored%20Coins | Colored Coins is an open-source protocol that allows users to represent and manipulate immutable digital resources on top of Bitcoin transactions. They are a class of methods for representing and maintaining real-world assets on the Bitcoin blockchain, which may be used to establish asset ownership. Colored coins are bitcoins with a mark on them that specifies what they may be used for. Colored coins have also been considered a precursor to NFTs.
Although bitcoins are fungible on the protocol level, they can be marked to be distinguished from other bitcoins. These marked coins have specific features that correspond to physical assets like vehicles and stocks, and owners may use them to establish their ownership of physical assets. Colored coins aim to lower transaction costs and complexity so that an asset's owner may transfer ownership as quickly as a Bitcoin transaction.
Colored coins are commonly referred to as meta coins because this imaginative coloring is the addition of metadata. This enables a portion of a digital representation of a physical item to be encoded into a Bitcoin address. The value of the colored coins is independent of the current prices of the bitcoin; instead, it is determined by the value of the underlying actual asset/service and the issuer's desire and capacity to redeem the colored coins in return for the equivalent actual asset or service.
History
Colored coins arose due to the necessity to generate new tokens and move assets on the Bitcoin network. These tokens can be used to represent any asset in the world, including equities, commodities, real estate, fiat currency, and even other cryptocurrencies.
Yoni Assia, the CEO of eToro, was the first to suggest colored coins, in an article published on March 27, 2012. In the article, titled bitcoin 2.X (aka Colored bitcoin), Assia claimed that under the initial specifications, bitcoins transmitted using the "Genesis Transaction" protocol are recognizable, distinctive, and trackable on the ledger. The idea grew, and on forums such as Bitcointalk the concept of colored coins started to take form and gain traction. This culminated in Meni Rosenfeld releasing a whitepaper detailing colored coins on December 4, 2012.
The next year, in 2013, Assia collaborated with Vitalik Buterin, Lior Hakim, Meni Rosenfeld, Amos Meiri, Alex Mizrahi and Rotem Lev to write Color Coins — BitcoinX, which explored the potential possibilities of colored coins.
In 2013, the New Scientist magazine first acknowledged colored coins, with Meiri describing for the first time the actual issuance of a share or a gold bar on the blockchain. In 2014, Colu was the first company to raise venture capital money to develop the Colored Coins protocol.
Development
Colored coins originated as an afterthought by Bitcoin miners. The blockchain's data space had been utilized to encode numerous metadata values, and this unexpected data caused processing issues that slowed the network down. The Bitcoin team addressed the problem by including a 40-byte area for storing data in a transaction, as well as an encrypted ledger of transactions and information about the coin's genesis.
While bitcoin was developed to be a cryptocurrency, its scripting language makes it possible to associate metadata with individual transactions. By precisely tracing the origin of a particular bitcoin, it is possible to distinguish a group of bitcoins from the others, a process known as bitcoin coloring (a term that served as a basis to the name of the Colored Coins protocol).
Through the oversight of an issuing agent or a public agreement, special properties can be associated with colored bitcoins, giving them value beyond the currency's value. One way of looking at this is from the abstraction that there are two distinct layers on top of bitcoin: the lower layer referring to the transaction network based on cryptographic technology and an upper layer that constitutes a distribution network of values encapsulated in the design of colored coins.
Because colored coins are implemented on top of the Bitcoin infrastructure, allow atomic transactions (exchanges of one token for another in a single transaction) and can be transferred without the involvement of a third party, they enable the decentralized exchange of items that would not be possible through traditional means.
To create colored coins, "colored" addresses must be created and stored in "colored" wallets controlled by color-aware clients such as Coinprism, Coloredcoins (through Colu), or CoinSpark. The "coloring" process is an abstract idea that attaches an asset description, some general instructions, a symbol, and a unique hash to the Bitcoin addresses.
In 2013, Flavien Charlon, the CEO of Coinprism, developed a Colored Coin Protocol that permitted the generation of colored currencies by employing specified settings in transaction inputs and outputs. This was Bitcoin's first working Colored Coin Protocol. This protocol, also known as the Open Assets Protocol, is open source and may be integrated into existing systems by anyone.
On July 3, 2014, ChromaWay developed the Enhanced Padded-Order-Based Coloring protocol (EPOBC), which simplified the process of creating colored coins for developers, and was one of the first protocols to employ Bitcoin Script's new OP_RETURN function.
In January 2014, Colu created the ColoredCoins platforms and Colored Coins protocol allowing users to build digital assets on top of the Bitcoin blockchain using the Bitcoin 2.0 protocol. In 2016, Colu announced integration to Lightning Network expanding its Bitcoin L2 capabilities.
Layers of Colored Coins
Colored coins function by adding a 4th layer to the Bitcoin blockchain:
1st Layer: Network
2nd Layer: Consensus
3rd Layer: Transaction
4th Layer: Watermark (color)
Before ERC token standards were created, the concept of using tokens to represent and monitor real-world items already existed; colored coins were the original notion for representing assets on the blockchain. They are not widely used because the transaction structure required to represent colored coins relies on unspent transaction outputs, which Ethereum-based blockchain systems do not support. The primary concept is to add an attribute (the color) to native transactions that specifies the asset it symbolizes. For example, on the Bitcoin blockchain, each satoshi (the smallest possible unit of bitcoin) might represent a separate item. This notion is mostly used to monitor ownership of tokens and, by extension, assets. There is promise in using colored coins as an effective way of tracing in production situations, since transactions can be merged or divided into new transactions and the color can be readily altered after each transaction. Finally, existing tools, such as blockchain explorers, make it simple to view and analyze transactions.
The nature of colored coins makes them the first non-fungible tokens to be created on the Bitcoin blockchain, albeit with limited features. Colored coins are transferrable in what is known as atomic transactions. Atomic transactions are transactions that permit the direct peer-to-peer exchange of one token for another in a single transaction. In this way, colored coins allow traditional assets to be decentralized.
Transactions
Colored coins use an open-source, decentralized peer-to-peer transaction protocol built on top of Web 2.0. Although Bitcoin was created as a protocol for monetary transactions, one of its advantages is a secure transaction protocol not controlled by a central authority. This is possible through the use of the blockchain, which maintains a record of all Bitcoin transactions worldwide.
A transaction consists of the following (a schematic sketch is given after the list):
A set of inputs such that each input has (a) a Transaction Hash and Output Index of a previous transaction carried out on that bitcoin and (b) a digital signature that serves as cryptographic proof that that input address authorizes the transaction.
An output set such that each output has (a) the bitcoin value to be transferred to that output and (b) a script that maps a single address to that output.
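The sketch below models this structure with hypothetical field names; these are illustrative, not the actual Bitcoin wire format:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TxInput:
    prev_tx_hash: str        # (a) hash of the previous transaction...
    prev_output_index: int   # ...and the index of the output being spent
    signature: str           # (b) proof that the input address authorizes the spend

@dataclass
class TxOutput:
    value_satoshi: int       # (a) bitcoin value transferred to this output
    address_script: str      # (b) script mapping a single address to the output
    color: Optional[str] = None  # colored-coins metadata riding on top

@dataclass
class Transaction:
    inputs: list[TxInput] = field(default_factory=list)
    outputs: list[TxOutput] = field(default_factory=list)
```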
Staining and transferring
The manipulation of colored coins can be performed through several algorithms, which create a set of rules to be applied to the inputs and outputs of Bitcoin transactions:
At a given moment, a digital resource is associated with the output of a Bitcoin transaction, called a genesis transaction. The output of this transaction (the coin) belongs to the initial owner recorded in the system (in the case of a jewelry store associating its jewelry with digital resources, the newly colored coins will belong to the store).
When the resource is transferred or sold, the currency that belongs to the previous owner is consumed, while a new colored currency is created at the outgoing address of the transfer transaction.
When it is necessary to identify the owner of a coin, it is enough to evaluate the transaction history of that coin from its genesis transaction to the last transaction with unconsumed output (see the sketch below). The Bitcoin blockchain tracks the public keys associated with each address, so the owner of the coin can prove ownership by sending a message signed with the private key associated with that address.
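A minimal sketch of this ownership trace over a hypothetical in-memory model (real implementations walk actual blockchain data):

```python
from typing import Optional

transactions = {}  # tx_id -> {"input_tx": Optional[str], "owner": str}

def register(tx_id: str, owner: str, input_tx: Optional[str] = None) -> None:
    """Record a genesis or transfer transaction; input_tx is None for genesis."""
    transactions[tx_id] = {"input_tx": input_tx, "owner": owner}

def trace_to_genesis(tx_id: str) -> list:
    """Follow the spend chain from a coin's latest transaction back to its genesis."""
    chain = [tx_id]
    while transactions[tx_id]["input_tx"] is not None:
        tx_id = transactions[tx_id]["input_tx"]
        chain.append(tx_id)
    return chain

register("tx-genesis", owner="jewelry-store")               # store issues the coin
register("tx-sale", owner="alice", input_tx="tx-genesis")   # sold to Alice
register("tx-resale", owner="bob", input_tx="tx-sale")      # Alice transfers to Bob

print(trace_to_genesis("tx-resale"))       # ['tx-resale', 'tx-sale', 'tx-genesis']
print(transactions["tx-resale"]["owner"])  # current owner: bob
```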
Among these algorithms, the best known is EPOBC. The EPOBC algorithm colors coins by inserting a mark in the nSequence field of the first input of the transaction. The nSequence field is always present in Bitcoin transactions but is not otherwise used, so the coloring process generates no overhead. Examples of companies driving EPOBC are ChromaWallet, Cuber, LHV and Funderbeam.
Genesis transactions
To issue new colors, it is necessary to release coins of that color through genesis transactions. In general, there are two cases to consider about genesis transactions:
Non-reissuable colors: In this case, the transaction inputs are irrelevant to the algorithm, since once the transaction is executed, the coin issuer has no power over them. So all that matters is the genesis transaction itself.
Reissuable colors: In this scenario, the issuer must choose a secure address to be the "issuing address" and set transaction input 0 to come from that address. Later, the issuer will be able to issue new units of that color through genesis transactions with the same secure address. It is important to note that an address can only be associated with a single color. Once an address emits a reissuable color, it will no longer be able to participate in coloring coins of other colors, not even non-reissuable colors.
Transfer transactions
Transfer transactions are used to send colored coins from one address to another. It is also possible to transfer coins of multiple colors in a single transfer transaction. Tagging-based coloring is the most well-known algorithm for this operation.
If colored coins are used as input for transactions that do not follow the transfer protocol, the value associated with their color is lost. Furthermore, their value can also be lost in a malformed transaction.
There are one or more colored inputs in a transfer transaction. Inputs do not need to be of the same color; e.g. "gold" and "silver" can be transferred within one transaction, which is beneficial for peer-to-peer trade. The order of inputs and outputs within a transaction matters, as it is used for non-ambiguous decoding.
Alternative coloring algorithms
Determining a way to transfer colored coins from one Bitcoin address to another is the most complex part of the colored coins protocol. For transactions with only one input and one output, it is easy to determine that the color of the output coins is the same color that was received by the input address, since a Bitcoin address can only handle a single color value. However, in transactions with multiple inputs and outputs, determining which colored coins of inputs correspond to which outputs become a more complex task. For that, there are several algorithms that propose to solve this problem, each one with its peculiarities.
Order-based coloring is the first and simplest coloring algorithm. An intuitive way to understand this algorithm is to consider that the transaction has a width proportional to its total input amount. On the left side are the inputs, each with a width proportional to its value; on the right side are the outputs, with widths proportional to their bitcoin values. Colored water then flows in straight lines from left to right. The color of an output will be the color of the water arriving at it, or colorless if water of multiple colors arrives at that output, since a single Bitcoin address cannot handle coins of different colors.
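A small sketch of this "water flow" rule, assuming for simplicity that total input value equals total output value and ignoring fees; it is illustrative, not a normative implementation:

```python
def order_based_coloring(inputs, outputs):
    """inputs: list of (value, color) pairs; outputs: list of values.
    Returns the color assigned to each output (None = colorless)."""
    intervals, pos = [], 0
    for value, color in inputs:             # lay input intervals left to right
        intervals.append((pos, pos + value, color))
        pos += value

    colors, pos = [], 0
    for value in outputs:                   # lay output intervals the same way
        start, end = pos, pos + value
        arriving = {c for s, e, c in intervals if s < end and e > start}
        colors.append(arriving.pop() if len(arriving) == 1 else None)
        pos += value
    return colors

# Two colored inputs ("gold", "silver") transferred in one transaction:
print(order_based_coloring([(5, "gold"), (3, "silver")], [5, 3]))  # ['gold', 'silver']
print(order_based_coloring([(5, "gold"), (3, "silver")], [4, 4]))  # ['gold', None]
```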
Padded-order-based coloring is a slightly more complex algorithm than the OBC (order-based coloring) algorithm. In essence, the algorithm follows the same principle as OBC, but treats each output as containing a pad of a certain number of colorless bitcoins, with the colored coins following them.
Applications
The Bitcoin network's decentralized nature means that its security does not depend on trusted parties and that its players may operate anonymously provided adequate safeguards are adopted. Adoption of Colored Coins protocols enables the integration into Bitcoin of decentralized stock exchanges and other financial functionality, such as certifying credentials (like academic certificates) or establishing the existence of digital documents.
Smart property: For example, a product rental company can release a colored coin to represent its products, such as a car. Through an application, the company could configure the car to respond only to control messages signed by the private key that currently holds the colored coin. In this way, its users could transfer the vehicle's digital key to each other by transferring the currency. This protocol feature may be used in land management by indicating ownership of a piece of land with a single token or several tokens. The token's information may be used to maintain public registry parameters such as size, GPS locations, year created, and so on. The land administrator may encrypt ownership details such as titles or identification so that only individuals with the right private key can see the information. Anyone with an internet connection can publicly verify and trace the ownership of each token using block explorer software.
Issue of shares: A company can issue its shares through colored coins, taking advantage of the Bitcoin infrastructure to manage activities such as voting, exchange and payment of dividends. Colored coins may also be used to form Distributed Collaborative Organizations (DCOs) and Decentralized Autonomous Organizations (DAOs), which act as virtual corporations with shareholders. In such cases, the blockchain may assist in keeping track of a company's ownership structure as well as creating and distributing DCO shares in a transparent and safe manner. Examples: community currency or corporate currency, deposit representation, access and subscription services.
Issue of coupons: A company can issue promotional coupons or loyalty points among its customers in the form of colored coins.
Digital collectibles: Decentralized management of digital resources. Similar to how collectors acquire and sell paintings, colored coins enable managing digital resources in a similar way, such as e-books, music, digital games and software, guaranteeing ownership of the resource to the owner of the coin.
As long as the provider's identity is protected by the legal framework, colored coins may be used to transfer any digitally transferable right. The circulation is based on a cryptographic signature. The contract and any payments linked to it are recorded on the blockchain using a unique cryptographic key that identifies the rightful owner of the currency. Parties may use an alias to sign up for the protocol under legally permissible circumstances. In reality, the secret cryptographic key enables the system to validate subscribers' digital identities without disclosing any personal information.
Private key holders might then transfer the asset directly to other persons or corporations through a public blockchain.
Users may trade and manage all asset classes in a somewhat decentralized framework with a minute amount of colored Bitcoin, according to marketing literature, rather than needing to send hundreds or even thousands of bitcoins in return for an item or service.
Deterministic contracts: A person or company can issue contracts by pre-scheduling a payment, such as stock options.
Bonds: A special case of a deterministic contract, bonds can be issued with a down payment amount and an installment schedule in bitcoin, another currency or commodity.
Decentralized digital representation of physical resources: This means tying physical resources, such as physical objects, commodities, or traditional currencies, to digital resources and proving ownership of those objects in that way. NFT tokens use this approach, selling ownership of artworks and even real-world properties.
Colored coin wallet
Colored coins can be handled through wallets in the same manner as Bitcoin monetary resources can be managed through bitcoin wallets. Wallets are used to manage the addresses associated with each pair of keys (public and private) of a Bitcoin user, as well as the transactions associated with their set of addresses. Rather than dealing only with cryptocurrencies, colored coin wallets add a layer of abstraction, managing digital assets created on the blockchain, such as stocks, altcoins, intellectual property and other resources.
While bitcoin wallets are required to use a unique Bitcoin address for each transaction, colored coin wallets frequently reuse their addresses in order to re-issue coins of the same color.
To issue colored coins, colored addresses must be generated and stored in colored wallets administered by a color-aware client such as Colu or Coinprism.
Protocol implementation
Protocol implementations are associated with wallet software, so that the end user does not have to be concerned about transaction structuring or manual resource manipulation. There is, however, some concern about the interoperability of the existing implementations, as colored coins transactions are operationalized using a variety of different algorithms. Transactions between unsupported wallets may result in the loss of the coins' coloring features.
Colored coins require a unified wallet that can distinguish colored coins from ordinary bitcoins. In June 2015, a Torrent-based version of Colored Coins was developed by Colu to support the protocol's use while Bitcoin had not yet been widely adopted by the market. Making the protocol compatible amongst different Bitcoin implementations is one approach to increasing the usage of Bitcoin for digital asset management.
Legal aspects
A smart property or an item with an off-chain identifier that is transferred via blockchain remains subject to legal interpretation. Colored coins and other virtual currencies are presently not recognized as evidence of ownership by any government agency in the United States. For financial institutions, the lack of an identifiable identity across on- and off-chain settings is still a barrier.
There is a legal challenge with regard to the transfer of common stock ownership using blockchain. Because the rights to receive notifications, vote, receive dividends, and exercise appraisal rights are restricted to registered owners, establishing ownership is likely even more critical for blockchain stock.
Due to the extralegal nature of colored coin transactions such as NFTs, they frequently result in an informal exchange of ownership over the item with no legal basis for enforcement, often conferring nothing more than usage as a status symbol.
Limitations
As virtual tokens, colored coins cannot compel the real world to meet the obligations made when they were issued. They can represent something external in the actual world, such as a corporate action or a debt repayment obligation. This means they are issued by a person or entity, which carries some level of risk: the issuer may fail to comply with the related obligations, there may even be fraud, and the coins may not represent anything actual.
They are unable to prevent a user from spending the underlying cryptocurrency in a way that destroys the extra information. Using virtual tokens in a transaction that does not conform to the rules of colored coins (which are stricter than the rules of blockchain transactions and not enforced by the blockchain) destroys the additional meaning, leaving only the token's monetary worth on the blockchain.
The semantics indicating what a token represents cannot be stored on the blockchain. For instance, the blockchain can record the number of concert tickets that have been issued and the addresses of their owners, but it cannot encode the fact that they represent permitted access to a specific concert at a specific time. Metadata storage and processing require an external system, such as Open-Transactions. Open-Transactions is a free software library that implements cryptographically secure financial transactions using financial cryptographic primitives. It can be utilized to issue stocks, pay dividends, purchase and sell shares, etc.
The speed of transactions and the capabilities of the smart contract procedures utilized by virtual tokens are equivalent to those of the blockchain they are based on.
Due to the nature of the Bitcoin host network, adding an additional layer is neither simple nor scalable. Additionally, it inherits all of the information security and safety concerns of the host blockchain. Developing a comprehensive protocol that incorporates asset issuance and native tracking may be a more rigorous and scalable method for creating a blockchain-based asset-tracking system.
Concerns
Opposition to the use of Colored Coins for the treatment of abstracted resources on Bitcoin mainly originates in the financial and banking sectors. It is argued that the proof-of-work blockchain-based security system cannot be exported to a regulated financial resolution environment. As a result, there is no legal framework for Colored Coins' transactions. Finally, there are some regulatory concerns with the coin coloring method. According to institutions that criticize the decentralized transaction system, the legal effect of an individual or entity transferring ownership of a given object to another individual or entity through Bitcoin abstractions is still uncertain.
See also
Bitcoin
Blockchain
Digital currencies
Non-fungible token
Bitcoin network
Smart contract
Altcoins
References
Cryptocurrencies
Bitcoin
Blockchains
Cryptocurrency projects
Cryptography
Payment systems
Currencies introduced in 2012
2012 establishments
Cross-platform software
Decentralization
Application layer protocols
Free and open-source software
Free software programmed in C++ | Colored Coins | [
"Mathematics",
"Engineering"
] | 4,585 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
74,836,769 | https://en.wikipedia.org/wiki/Julie%20Diani | Julie Diani is a French academic specialised in the characterization and simulation of polymeric materials. She is the CNRS Research Director at École Polytechnique’s Solid Mechanics Laboratory, and holder of the Arkema Design and Modeling of Innovative Materials Chair.
Education
Diani earned a B.S. in applied mathematics and an S.M. degree in mechanical engineering at Pierre and Marie Curie University. She completed her doctoral degree in materials science and engineering at the École Normale Supérieure de Cachan.
Career
She joined CNRS in 2000. From 2004 to 2006, she was a visiting researcher at the University of Colorado, Boulder.
Diani's most cited works include a review of the Mullins effect and a constitutive model for shape-memory polymers.
Personal life
Diani is the daughter of two mathematics teachers, and she practices judo and cycling.
Awards and recognition
2015 - Sparks–Thomas award from the ACS Rubber Division
References
Polymer scientists and engineers
Women materials scientists and engineers
Year of birth missing (living people)
Living people
Pierre and Marie Curie University alumni
École Normale Supérieure alumni
Research directors of the French National Centre for Scientific Research | Julie Diani | [
"Chemistry",
"Materials_science",
"Technology"
] | 237 | [
"Polymer scientists and engineers",
"Physical chemists",
"Materials scientists and engineers",
"Polymer chemistry",
"Women materials scientists and engineers",
"Women in science and technology"
] |
74,850,217 | https://en.wikipedia.org/wiki/Collective%E2%80%93amoeboid%20transition | The collective–amoeboid transition (CMT) is a process by which collective multicellular groups dissociate into amoeboid single cells following the down-regulation of integrins. CMTs contrast with epithelial–mesenchymal transitions (EMT) which occur following a loss of E-cadherin. Like EMTs, CATs are involved in the invasion of tumor cells into surrounding tissues, with amoeboid movement more likely to occur in soft extracellular matrix (ECM) and mesenchymal movement in stiff ECM. Although once differentiated, cells typically do not change their migration mode, EMTs and CMTs are highly plastic with cells capable of interconverting between them depending on intracelluar regulatory signals and the surrounding ECM.
CATs are the least common transition type in invading tumor cells, although they are noted in melanoma explants.
See also
Collective cell migration
Dedifferentiation
Invasion (cancer)
References
Animal developmental biology
Cancer research
Cell movement
Cellular processes
Tissue engineering | Collective–amoeboid transition | [
"Chemistry",
"Engineering",
"Biology"
] | 216 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Cellular processes",
"Medical technology"
] |
69,086,804 | https://en.wikipedia.org/wiki/Strepsilin | Strepsilin is a chemical found in lichens. It produces an emerald green colour in the C test. It is a dibenzofuran dimer, with hydroxy, oxy and methyl side groups. It is named after Cladonia strepsilis. Strepsilin was discovered by Wilhelm Zopf in 1903. The structure of strepsilin was determined by Shoji Shibata.
Properties
Strepsilin is degraded in alkali to 1-methyl-3,7-dihydroxydibenzofuran.
Strepsilin melts at 324 °C.
Occurrence
Strepsilin is found in some Cladonia species. It is also found in Siphula and Stereocaulon azoreum.
References
Dibenzofurans
Furanones
Isobenzofurans
Hydroxyarenes
Lichen products
Heterocyclic compounds with 4 rings
Diols | Strepsilin | [
"Chemistry"
] | 196 | [
"Natural products",
"Lichen products"
] |
69,088,046 | https://en.wikipedia.org/wiki/Probabilistic%20numerics | Probabilistic numerics is an active field of study at the intersection of applied mathematics, statistics, and machine learning centering on the concept of uncertainty in computation. In probabilistic numerics, tasks in numerical analysis such as finding numerical solutions for integration, linear algebra, optimization and simulation and differential equations are seen as problems of statistical, probabilistic, or Bayesian inference.
Introduction
A numerical method is an algorithm that approximates the solution to a mathematical problem (examples below include the solution to a linear system of equations, the value of an integral, the solution of a differential equation, the minimum of a multivariate function). In a probabilistic numerical algorithm, this process of approximation is thought of as a problem of estimation, inference or learning and realised in the framework of probabilistic inference (often, but not always, Bayesian inference).
Formally, this means casting the setup of the computational problem in terms of a prior distribution, formulating the relationship between numbers computed by the computer (e.g. matrix-vector multiplications in linear algebra, gradients in optimization, values of the integrand or the vector field defining a differential equation) and the quantity in question (the solution of the linear problem, the minimum, the integral, the solution curve) in a likelihood function, and returning a posterior distribution as the output. In most cases, numerical algorithms also take internal adaptive decisions about which numbers to compute, which form an active learning problem.
Many of the most popular classic numerical algorithms can be re-interpreted in the probabilistic framework. This includes the method of conjugate gradients, Nordsieck methods, Gaussian quadrature rules, and quasi-Newton methods. In all these cases, the classic method is based on a regularized least-squares estimate that can be associated with the posterior mean arising from a Gaussian prior and likelihood. In such cases, the variance of the Gaussian posterior is then associated with a worst-case estimate for the squared error.
Probabilistic numerical methods promise several conceptual advantages over classic, point-estimate based approximation techniques:
They return structured error estimates (in particular, the ability to return joint posterior samples, i.e. multiple realistic hypotheses for the true unknown solution of the problem)
Hierarchical Bayesian inference can be used to set and control internal hyperparameters in such methods in a generic fashion, rather than having to re-invent novel methods for each parameter
Since they use and allow for an explicit likelihood describing the relationship between computed numbers and target quantity, probabilistic numerical methods can use the results of even highly imprecise, biased and stochastic computations. Conversely, probabilistic numerical methods can also provide a likelihood in computations often considered "likelihood-free" elsewhere
Because all probabilistic numerical methods use essentially the same data type – probability measures – to quantify uncertainty over both inputs and outputs they can be chained together to propagate uncertainty across large-scale, composite computations
Information from multiple sources (e.g. algebraic, mechanistic knowledge about the form of a differential equation, and observations of the trajectory of the system collected in the physical world) can be combined naturally and inside the inner loop of the algorithm, removing otherwise necessary nested loops in computation, e.g. in inverse problems.
These advantages are essentially the equivalent of similar functional advantages that Bayesian methods enjoy over point-estimates in machine learning, applied or transferred to the computational domain.
Numerical tasks
Integration
Probabilistic numerical methods have been developed for the problem of numerical integration, with the most popular method called Bayesian quadrature.
In numerical integration, function evaluations f(x₁), …, f(xₙ) at a number of points x₁, …, xₙ are used to estimate the integral of a function f against some measure ν. Bayesian quadrature consists of specifying a prior distribution over f and conditioning this prior on f(x₁), …, f(xₙ) to obtain a posterior distribution over f, then computing the implied posterior distribution on the integral ν(f). The most common choice of prior is a Gaussian process, as this allows us to obtain a closed-form posterior distribution on the integral, which is a univariate Gaussian distribution. Bayesian quadrature is particularly useful when the function f is expensive to evaluate and the dimension of the data is small to moderate.
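The sketch below illustrates this construction for the uniform measure on [0, 1], using a zero-mean Gaussian process prior with Brownian-motion covariance k(x, y) = min(x, y), for which the required kernel integrals have closed forms. The kernel choice, node placement, and integrand are illustrative assumptions; practical Bayesian quadrature typically uses smoother kernels.

```python
import numpy as np

# Bayesian quadrature sketch for I = integral of f over [0, 1] under a
# zero-mean GP prior with Brownian-motion covariance k(x, y) = min(x, y)
# (which implicitly assumes f(0) = 0).

def bayesian_quadrature(x, fx):
    K = np.minimum.outer(x, x)      # Gram matrix K_ij = min(x_i, x_j)
    z = x - 0.5 * x**2              # z_i = integral of min(x_i, t) over [0, 1]
    w = np.linalg.solve(K, z)       # quadrature weights w = K^{-1} z
    mean = w @ fx                   # posterior mean of the integral
    var = 1.0 / 3.0 - z @ w         # double integral of min(s, t) equals 1/3
    return mean, var

x = np.array([0.2, 0.5, 0.8])
mean, var = bayesian_quadrature(x, np.sin(np.pi * x))
print(mean, var, 2 / np.pi)         # estimate, its variance, true value
```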
Optimization
Probabilistic numerics have also been studied for mathematical optimization, which consists of finding the minimum or maximum of some objective function given (possibly noisy or indirect) evaluations of that function at a set of points.
Perhaps the most notable effort in this direction is Bayesian optimization, a general approach to optimization grounded in Bayesian inference. Bayesian optimization algorithms operate by maintaining a probabilistic belief about throughout the optimization procedure; this often takes the form of a Gaussian process prior conditioned on observations. This belief then guides the algorithm in obtaining observations that are likely to advance the optimization process. Bayesian optimization policies are usually realized by transforming the objective function posterior into an inexpensive, differentiable acquisition function that is maximized to select each successive observation location. One prominent approach is to model optimization via Bayesian sequential experimental design, seeking to obtain a sequence of observations yielding the most optimization progress as evaluated by an appropriate utility function. A welcome side effect from this approach is that uncertainty in the objective function, as measured by the underlying probabilistic belief, can guide an optimization policy in addressing the classic exploration vs. exploitation tradeoff.
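A minimal sketch of one such step appears below, assuming a one-dimensional minimization problem, a squared-exponential Gaussian process with fixed hyperparameters, and the expected-improvement acquisition maximized over a grid. These choices, and the names gp_posterior and expected_improvement, are illustrative rather than any particular library's API.

```python
import numpy as np
from scipy.stats import norm

# One step of Bayesian optimization (minimization): condition a GP with a
# squared-exponential kernel on the data, then maximize expected improvement.

def gp_posterior(X, y, Xs, ell=0.2, noise=1e-8):
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
    Ks = k(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mu = Ks.T @ alpha                                     # posterior mean
    var = np.maximum(1.0 - np.sum(v**2, axis=0), 1e-12)   # posterior variance
    return mu, var

def expected_improvement(mu, var, best):
    s = np.sqrt(var)
    z = (best - mu) / s               # standardized improvement below best
    return s * (z * norm.cdf(z) + norm.pdf(z))

X = np.array([0.1, 0.4, 0.9])         # evaluated points
y = np.sin(3 * X)                     # noise-free toy objective
Xs = np.linspace(0.0, 1.0, 501)       # candidate grid
mu, var = gp_posterior(X, y, Xs)
ei = expected_improvement(mu, var, y.min())
print("next evaluation at x =", Xs[np.argmax(ei)])
```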
Local optimization
Probabilistic numerical methods have been developed in the context of stochastic optimization for deep learning, in particular to address main issues such as
learning rate tuning and line searches,
batch-size selection, early stopping,
pruning, and first- and second-order search directions.
In this setting, the optimization objective is often an empirical risk of the form L(θ) = (1/N) Σₙ ℓ(θ; xₙ, yₙ), defined by a dataset D = {(x₁, y₁), …, (x_N, y_N)} and a loss ℓ that quantifies how well a predictive model parameterized by θ performs on predicting the target yₙ from its corresponding input xₙ.
Epistemic uncertainty arises when the dataset size N is large and the data cannot be processed at once, meaning that local quantities (given some parameter value θ) such as the loss function itself or its gradient cannot be computed in reasonable time.
Hence, mini-batching is generally used to construct estimators of these quantities on a random subset of the data. Probabilistic numerical methods model this uncertainty explicitly and allow for automated decisions and parameter tuning.
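As an illustration, the sketch below (a hypothetical least-squares setup) computes a mini-batch gradient together with an empirical estimate of its sampling variance, which is the kind of uncertainty signal a probabilistic optimizer could use for step-size or batch-size decisions.

```python
import numpy as np

# Mini-batch gradient of a least-squares empirical risk, plus the sampling
# variance of that estimate (the variance of the mean shrinks like 1/batch).

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=10_000)

def minibatch_gradient_stats(w, idx):
    Xb, yb = X[idx], y[idx]
    per_example = 2.0 * Xb * (Xb @ w - yb)[:, None]      # one gradient per row
    g = per_example.mean(axis=0)                         # mini-batch gradient
    g_var = per_example.var(axis=0, ddof=1) / len(idx)   # variance of the mean
    return g, g_var

batch = rng.choice(len(X), size=128, replace=False)
g, g_var = minibatch_gradient_stats(np.zeros(5), batch)
print("gradient estimate:", g)
print("standard error:   ", np.sqrt(g_var))
```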
Linear algebra
Probabilistic numerical methods for linear algebra have primarily focused on solving systems of linear equations of the form Ax = b and the computation of determinants det(A).
A large class of methods are iterative in nature and collect information about the linear system to be solved via repeated matrix–vector multiplication with the system matrix A applied to different vectors v.
Such methods can be roughly split into a solution-based and a matrix-based perspective, depending on whether belief is expressed over the solution x of the linear system or the (pseudo-)inverse of the matrix A.
The belief update uses the fact that the inferred object is linked to the computed matrix–vector products via Ax = b and x = A⁻¹b.
Methods typically assume a Gaussian distribution, due to its closedness under linear observations of the problem. While conceptually different, these two views are computationally equivalent and inherently connected via the right-hand side through x = A⁻¹b.
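A minimal solution-based sketch under these assumptions places a Gaussian prior on x and conditions on a few noiseless projections sᵢᵀAx = sᵢᵀb. The random choice of actions and the dense linear algebra are illustrative simplifications; practical solvers pick actions adaptively (e.g. conjugate directions) and exploit structure.

```python
import numpy as np

# Solution-based probabilistic linear solver sketch for A x = b:
# Gaussian prior on x, conditioned on three projections s_i^T A x = s_i^T b.

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
A = A @ A.T + n * np.eye(n)                  # symmetric positive definite
x_true = rng.normal(size=n)
b = A @ x_true

mean, cov = np.zeros(n), np.eye(n)           # prior belief over x
S = rng.normal(size=(n, 3))                  # three random actions s_i
V = A.T @ S                                  # observation model: y = V^T x
G = V.T @ cov @ V                            # covariance of the observations
gain = cov @ V @ np.linalg.inv(G)
mean = mean + gain @ (S.T @ b - V.T @ mean)  # posterior mean
cov = cov - gain @ V.T @ cov                 # posterior covariance

print("posterior error:", np.linalg.norm(mean - x_true))
print("marginal stds:  ", np.sqrt(np.diag(cov)))
```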
Probabilistic numerical linear algebra routines have been successfully applied to scale Gaussian processes to large datasets. In particular, they enable exact propagation of the approximation error to a combined Gaussian process posterior, which quantifies the uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
Ordinary differential equations
Probabilistic numerical methods for ordinary differential equations have been developed for initial and boundary value problems. Many different probabilistic numerical methods designed for ordinary differential equations have been proposed, and these can broadly be grouped into the two following categories:
Randomisation-based methods are defined through random perturbations of standard deterministic numerical methods for ordinary differential equations. For example, this has been achieved by adding Gaussian perturbations on the solution of one-step integrators or by perturbing randomly their time-step. This defines a probability measure on the solution of the differential equation that can be sampled.
Gaussian process regression methods are based on posing the problem of solving the differential equation at hand as a Gaussian process regression problem, interpreting evaluations of the right-hand side as data on the derivative. These techniques resemble Bayesian cubature, but employ different and often non-linear observation models. In its infancy, this class of methods was based on naive Gaussian process regression. This was later improved (in terms of efficient computation) in favor of Gauss–Markov priors modeled by a linear stochastic differential equation dX(t) = A X(t) dt + B dW(t), where X(t) is a (q+1)-dimensional vector modeling y(t) and its first q derivatives, and where W is a Brownian motion. Inference can thus be implemented efficiently with Kalman filtering based methods (a minimal sketch follows this list).
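The sketch below implements a toy solver of this kind for y' = f(t, y): a once-integrated Wiener process prior on the state (y, y') with Kalman prediction, and an EK0-style update that enforces the ODE as a noiseless observation. The step size h, the prior scale q, and the fixed grid are illustrative assumptions.

```python
import numpy as np

# Toy probabilistic ODE filter (EK0 flavor) for y' = f(t, y), y(t0) = y0.

def ek0_solve(f, y0, t0, t1, h=0.05, q=1.0):
    mu = np.array([y0, f(t0, y0)])                  # state mean (y, y')
    P = np.zeros((2, 2))                            # state covariance
    Phi = np.array([[1.0, h], [0.0, 1.0]])          # transition matrix
    Q = q * np.array([[h**3 / 3, h**2 / 2],
                      [h**2 / 2, h]])               # process noise
    H = np.array([0.0, 1.0])                        # observe the y' component
    for t in np.arange(t0 + h, t1 + 1e-12, h):
        mu, P = Phi @ mu, Phi @ P @ Phi.T + Q       # predict
        resid = f(t, mu[0]) - mu[1]                 # ODE residual at the mean
        S = H @ P @ H                               # innovation variance
        K = P @ H / S                               # Kalman gain
        mu, P = mu + K * resid, P - np.outer(K, H @ P)  # update
    return mu[0], np.sqrt(P[0, 0])

y, sd = ek0_solve(lambda t, y: -y, y0=1.0, t0=0.0, t1=2.0)
print(f"y(2) = {y:.4f} +/- {sd:.1e}   (exact {np.exp(-2.0):.4f})")
```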
The boundary between these two categories is not sharp; indeed, a Gaussian process regression approach based on randomised data was developed as well. These methods have been applied to problems in computational Riemannian geometry, inverse problems, latent force models, and to differential equations with a geometric structure such as symplecticity.
Partial differential equations
A number of probabilistic numerical methods have also been proposed for partial differential equations. As with ordinary differential equations, the approaches can broadly be divided into those based on randomisation, generally of some underlying finite-element mesh and those based on Gaussian process regression.
Probabilistic numerical PDE solvers based on Gaussian process regression recover classical methods on linear PDEs for certain priors, in particular methods of mean weighted residuals, which include Galerkin methods, finite element methods, as well as spectral methods.
History and related fields
The interplay between numerical analysis and probability is touched upon by a number of other areas of mathematics, including average-case analysis of numerical methods, information-based complexity, game theory, and statistical decision theory. Precursors to what is now being called "probabilistic numerics" can be found as early as the late 19th and early 20th century.
The origins of probabilistic numerics can be traced to a discussion of probabilistic approaches to polynomial interpolation by Henri Poincaré in his Calcul des Probabilités.
In modern terminology, Poincaré considered a Gaussian prior distribution on a function f, expressed as a formal power series with random coefficients, and asked for "probable values" of f(x) given this prior and observations f(aᵢ) = Bᵢ for i = 1, …, n.
A later seminal contribution to the interplay of numerical analysis and probability was provided by Albert Suldin in the context of univariate quadrature. The statistical problem considered by Suldin was the approximation of the definite integral of a function u, under a Brownian motion prior on u, given access to pointwise evaluation of u at nodes t₁, …, tₙ. Suldin showed that, for given quadrature nodes, the quadrature rule with minimal mean squared error is the trapezoidal rule; furthermore, this minimal error is proportional to the sum of cubes of the inter-node spacings. As a result, one can see the trapezoidal rule with equally-spaced nodes as statistically optimal in some sense — an early example of the average-case analysis of a numerical method.
Suldin's point of view was later extended by Mike Larkin.
Note that Suldin's Brownian motion prior on the integrand u is a Gaussian measure and that the operations of integration and of pointwise evaluation of u are both linear maps.
Thus, the definite integral of u is a real-valued Gaussian random variable.
In particular, after conditioning on the observed pointwise values of u, it follows a normal distribution with mean equal to the trapezoidal rule and variance proportional to the sum of cubes of the inter-node spacings.
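This correspondence can be verified numerically. The sketch below conditions a Brownian-motion prior on a few point values on [0, 1] (the interval, nodes, and values are illustrative choices) and confirms that the posterior mean of the integral reproduces the trapezoidal rule when the right endpoint is among the nodes.

```python
import numpy as np

# Check of Suldin's result on [0, 1]: under k(s, t) = min(s, t) (so u(0) = 0),
# the posterior mean of the integral of u given u(t_1), ..., u(t_n) equals the
# trapezoidal rule, provided the endpoint t_n = 1 is among the nodes.

t = np.array([0.3, 0.6, 1.0])
u = np.array([0.1, -0.4, 0.25])
K = np.minimum.outer(t, t)             # prior covariance at the nodes
z = t - 0.5 * t**2                     # cov(integral, u(t_i))
posterior_mean = z @ np.linalg.solve(K, u)
trapezoid = np.trapz(np.concatenate(([0.0], u)),
                     np.concatenate(([0.0], t)))
print(posterior_mean, trapezoid)       # agree up to rounding
```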
This viewpoint is very close to that of Bayesian quadrature, seeing the output of a quadrature method not just as a point estimate but as a probability distribution in its own right.
As noted by Houman Owhadi and collaborators, interplays between numerical approximation and statistical inference can also be traced back to Palasti and Renyi, Sard, Kimeldorf and Wahba (on the correspondence between Bayesian estimation and spline smoothing/interpolation) and Larkin (on the correspondence between Gaussian process regression and numerical approximation). Although the approach of modelling a perfectly known function as a sample from a random process may seem counterintuitive, a natural framework for understanding it can be found in information-based complexity (IBC), the branch of computational complexity founded on the observation that numerical implementation requires computation with partial information and limited resources. In IBC, the performance of an algorithm operating on incomplete information can be analyzed in the worst-case or the average-case (randomized) setting with respect to the missing information. Moreover, as Packel observed, the average case setting could be interpreted as a mixed strategy in an adversarial game obtained by lifting a (worst-case) minmax problem to a minmax problem over mixed (randomized) strategies. This observation leads to a natural connection between numerical approximation and Wald's decision theory, evidently influenced by von Neumann's theory of games. To describe this connection consider the optimal recovery setting of Micchelli and Rivlin in which one tries to approximate an unknown function from a finite number of linear measurements on that function. Interpreting this optimal recovery problem as a zero-sum game where Player I selects the unknown function and Player II selects its approximation, and using relative errors in a quadratic norm to define losses, Gaussian priors emerge as optimal mixed strategies for such games, and the covariance operator of the optimal Gaussian prior is determined by the quadratic norm used to define the relative error of the recovery.
Software
ProbNum: Probabilistic Numerics in Python.
ProbNumDiffEq.jl: Probabilistic numerical ODE solvers based on filtering implemented in Julia.
Emukit: Adaptable Python toolbox for decision-making under uncertainty.
BackPACK: Built on top of PyTorch. It efficiently computes quantities other than the gradient.
See also
Average-case analysis
Information-based complexity
Uncertainty quantification
References
Applied statistics
Machine learning
Applied mathematics | Probabilistic numerics | [
"Mathematics",
"Engineering"
] | 2,931 | [
"Artificial intelligence engineering",
"Applied mathematics",
"Applied statistics",
"Machine learning"
] |
69,092,857 | https://en.wikipedia.org/wiki/Plasmonic%20catalysis | In chemistry, plasmonic catalysis is a type of catalysis that uses plasmons to increase the rate of a chemical reaction. A plasmonic catalyst is made up of a metal nanoparticle surface (usually gold, silver, or a combination of the two) which generates localized surface plasmon resonances (LSPRs) when excited by light. These plasmon oscillations create an electron-rich region near the surface of the nanoparticle, which can be used to excite the electrons of nearby molecules.
Similar to photocatalysts, plasmonic catalysts can transfer their excitation energy to reactant molecules through resonance energy transfer (RET). Unlike photocatalysts, plasmonic catalysts can also excite reactant molecules by the release of hot carrier electrons which have a high enough energy to completely dissociate from the metal surface. The energy of these hot carrier electrons can be altered by changing the wavelength of light striking the surface and the size of the nanoparticles present, which allows the hot electrons to take on the excitation state needed to catalyze multiple different reactions.
Although the field of plasmonic catalysis is still in its infancy, there are clear advantages to utilizing a plasmon-active surface over traditional photocatalysts. Their ability to utilize energy from near-infrared, visible, and ultraviolet light gives plasmon surfaces higher light-capturing efficiency than photocatalysts, which can only utilize ultraviolet light, and the larger possible energy range of the electromagnetic field and emitted electrons make the resulting catalytic effects both broadly applicable and highly tunable.
Mechanism
Broadly speaking, plasmonic catalysis increases the reaction rate through two major pathways. The first of these is through the generation of an electromagnetic field during plasmon oscillations. This field lowers the activation energy of the reaction through excitation of the reactant's electrons by resonance energy transfer. It can also provide localized transition state stabilization, further increasing the rate of reaction.
The second pathway is through the generation of hot carrier electron/hole pairs. When a plasmon is generated, some electrons may have the energy to break completely free of the nanoparticle's electron shells. These highly excited electrons can then excite reactant electrons in the highest occupied molecular orbital or fill the lowest unoccupied molecular orbital, raising the energy of the molecule and allowing for a lower energy transition state. In most cases, these hot electrons do not find a reactant molecule to excite and instead recombine with the hole and return to a ground state energy. The excess energy from the process is released as thermal energy, creating a localized temperature increase which can also increase the rate of reaction.
Examples
The photocatalytic electrolysis of water has been shown to be up to 66 times more efficient when using a gold nanoparticle surface.
The rate of demethylation of methylene blue by a titanium dioxide photocatalyst has been increased sevenfold in the presence of silver nanoparticles.
The plasmonically catalyzed oxidation of several common gases, including carbon monoxide, ammonia, and oxygen, can occur at far lower temperatures than are normally required, due to the strong catalytic effects of plasmonic surfaces when excited by visible light.
Recently, hybrid plasmonic nanomaterials have begun to be explored for organic synthesis and the production of solar fuels.
References
Nanotechnology
Catalysis | Plasmonic catalysis | [
"Chemistry",
"Materials_science",
"Engineering"
] | 722 | [
"Catalysis",
"Nanotechnology",
"Chemical kinetics",
"Materials science"
] |
62,446,290 | https://en.wikipedia.org/wiki/Guignardia%20festiva | Guignardia festiva is a plant pathogen that has been recorded on Sumbaviopsis albicans.
References
Fungal plant pathogens and diseases
Botryosphaeriales
Fungi described in 1912
Taxa named by Hans Sydow
Taxa named by Paul Sydow
Fungus species | Guignardia festiva | [
"Biology"
] | 58 | [
"Fungi",
"Fungus species"
] |
62,457,740 | https://en.wikipedia.org/wiki/Structural%20Ramsey%20theory | In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is noting that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below).
Structural Ramsey theory began in the 1970s with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics.
History
Leeb is given credit for inventing the idea of a Ramsey property in the early 70s. The first publication of this idea appears to be Graham, Leeb and Rothschild's 1972 paper on the subject. Key development of these ideas was done by Nešetřil and Rödl in their series of 1977 and 1983 papers, including the famous Nešetřil–Rödl theorem. This result was reproved independently by Abramson and Harrington, and further generalised by Prömel. More recently, Mašulović and Solecki have done some pioneering work in the field.
Motivation
This article will use the set theory convention that each natural number n can be considered as the set of all natural numbers less than it: i.e. n = {0, 1, …, n−1}. For any set A, an r-colouring of A is an assignment of one of r labels to each element of A. This can be represented as a function Δ : A → r mapping each element to its label in r (which this article will use), or equivalently as a partition of A into r pieces.
Here are some of the classic results of Ramsey theory:
(Finite) Ramsey's theorem: for every k, m, r, there exists n, such that for every r-colouring Δ of all the k-element subsets of n, there exists a subset A ⊆ n, with |A| = m, such that the set of k-element subsets of A is Δ-monochromatic (a brute-force check of the smallest nontrivial case appears after this list).
(Finite) van der Waerden's theorem: for every k, r, there exists n, such that for every r-colouring Δ of n, there exists a Δ-monochromatic arithmetic progression of length k.
Graham–Rothschild theorem: fix a finite alphabet A. A k-parameter word of length n over A is an element w ∈ (A ∪ {x₁, …, xₖ})ⁿ, such that all of the xᵢ appear in w, and their first appearances are in increasing order. The set of all k-parameter words of length n over A is denoted by [A](k, n). Given w ∈ [A](m, n) and v ∈ [A](k, m), we form their composition w ∘ v ∈ [A](k, n) by replacing every occurrence of xᵢ in w with the ith entry of v. Then, the Graham–Rothschild theorem states that for every k, m, r, there exists n, such that for every r-colouring of all the k-parameter words of length n, there exists w ∈ [A](m, n), such that w ∘ [A](k, m) (i.e. all the k-parameter subwords of w) is monochromatic.
(Finite) Folkman's theorem: for every m, r, there exists n, such that for every r-colouring Δ of n, there exists a subset S ⊆ n, with |S| = m, such that FS(S) ⊆ n (where FS(S) denotes the set of sums of nonempty subsets of S), and FS(S) is Δ-monochromatic.
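As promised above, the smallest nontrivial case of Ramsey's theorem (k = 2, m = 3, r = 2, i.e. the statement that R(3,3) = 6) can be verified by brute force, with edges of a complete graph standing in for 2-element subsets:

```python
from itertools import combinations, product

# Every 2-colouring of the edges of a 6-element set contains a
# monochromatic triangle, while a 5-element set does not suffice.

def has_mono_triangle(colour, n):
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_colouring_has_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(dict(zip(edges, cols)), n)
               for cols in product((0, 1), repeat=len(edges)))

print(every_colouring_has_triangle(5), every_colouring_has_triangle(6))
# -> False True
```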
These "Ramsey-type" theorems all have a similar idea: we fix two integers and , and a set of colours . Then, we want to show there is some large enough, such that for every -colouring of the "substructures" of size inside , we can find a suitable "structure" inside , of size , such that all the "substructures" of with size have the same colour.
What types of structures are allowed depends on the theorem in question, and this turns out to be virtually the only difference between them. This idea of a "Ramsey-type theorem" lends itself to the more precise notion of the Ramsey property (below).
The Ramsey property
Let C be a category. C has the Ramsey property if for every natural number r, and all objects A, B in C, there exists another object D in C, such that for every r-colouring c : Hom(A, D) → r, there exists a morphism f : B → D which is c-monochromatic, i.e. the set
f ∘ Hom(A, B) = {f ∘ g : g ∈ Hom(A, B)}
is c-monochromatic.
Often, C is taken to be a class of finite L-structures over some fixed language L, with embeddings as morphisms. In this case, instead of colouring morphisms, one can think of colouring "copies" of A in D, and then finding a copy of B in D, such that all copies of A in this copy of B are monochromatic. This may lend itself more intuitively to the earlier idea of a "Ramsey-type theorem".
There is also a notion of a dual Ramsey property; C has the dual Ramsey property if its dual category Cᵒᵖ has the Ramsey property as above. More concretely, C has the dual Ramsey property if for every natural number r, and all objects A, B in C, there exists another object D in C, such that for every r-colouring c : Hom(D, A) → r, there exists a morphism f : D → B for which Hom(B, A) ∘ f is c-monochromatic.
Examples
Ramsey's theorem: the class of all finite chains, with order-preserving maps as morphisms, has the Ramsey property.
van der Waerden's theorem: in the category whose objects are finite ordinals, and whose morphisms are affine maps x ↦ ax + b for a, b ∈ ℕ, a ≠ 0, the Ramsey property holds for A = 1.
Hales–Jewett theorem: let A be a finite alphabet, and for each k, let Xₖ be a set of k variables. Let C be the category whose objects are A ∪ Xₖ for each k, and whose morphisms (A ∪ Xₙ) → (A ∪ Xₘ), for n ≥ m, are functions which are rigid and surjective on Xₘ. Then, C has the dual Ramsey property for A ∪ X₀ (and A ∪ X₁, depending on the formulation).
Graham–Rothschild theorem: the category defined above has the dual Ramsey property.
The Kechris–Pestov–Todorčević correspondence
In 2005, Kechris, Pestov and Todorčević discovered the following correspondence (hereafter called the KPT correspondence) between structural Ramsey theory, Fraïssé theory, and ideas from topological dynamics.
Let G be a topological group. For a topological space X, a G-flow (denoted G ↷ X) is a continuous action of G on X. We say that G is extremely amenable if any G-flow on a compact space X admits a fixed point x ∈ X, i.e. the stabiliser of x is G itself.
For a Fraïssé structure F, its automorphism group Aut(F) can be considered a topological group, given the topology of pointwise convergence, or equivalently, the subspace topology induced on Aut(F) by the space F^F with the product topology. The following theorem illustrates the KPT correspondence: Theorem (KPT). For a Fraïssé structure F, the following are equivalent:
The group Aut(F) of automorphisms of F is extremely amenable.
The class Age(F) has the Ramsey property.
See also
Ramsey theory
Fraïssé's theorem
Age (model theory)
References
Category theory
Ramsey theory
Model theory | Structural Ramsey theory | [
"Mathematics"
] | 1,337 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical logic",
"Mathematical objects",
"Combinatorics",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations",
"Model theory",
"Ramsey theory"
] |
62,459,034 | https://en.wikipedia.org/wiki/Wac%C5%82aw%20Marzantowicz | Wacław Bolesław Marzantowicz is a Polish mathematician known for his contributions in number theory and topology. He was President of the Polish Mathematical Society from 2014 to 2019.
Biography
In 1967 he became a finalist of the 18th Mathematical Olympiad. In 1972, he graduated in mathematics from Adam Mickiewicz University in Poznań. He obtained his doctorate at the Institute of Mathematics of the Polish Academy of Sciences in 1977, based on the work Lefschetz Numbers of Maps Commuting with an Action of a Group, written under the direction of Kazimierz Gęba. He obtained his habilitation there in 1991, based on the work Invariant topology methods used in variational problems.
From 1993 to 1996, he was the director of the Institute of Mathematics at the University of Gdańsk. Since 1996, he has been working at the Faculty of Mathematics and Computer Science of the Adam Mickiewicz University in Poznań, where he heads the Department of Geometry and Topology. In 2002 he received the title of professor of mathematics. References to his papers can be found in mathematical databases.
From 1993 to 1996, he was the president of the Gdańsk Branch of the Polish Mathematical Society (PMS) and then its vice president (2011–2013). Since 2014, he has been the president of the Polish Mathematical Society.
He became the joint recipient of the Stefan Banach Prize of the Polish Mathematical Society (alongside Jerzy Jezierski).
Further reading
Jerzy Jezierski; Wacław Marzantowicz, Homotopy methods in topological fixed and periodic points theory. Topological Fixed Point Theory and Its Applications, 3. Springer, Dordrecht, 2006. xii+319 pp.
Złota księga nauk ekonomicznych, prawnych i ścisłych 2005, wyd. Gliwice 2005, p. 205
References
1950 births
Living people
Number theorists
Polish mathematicians | Wacław Marzantowicz | [
"Mathematics"
] | 384 | [
"Number theorists",
"Number theory"
] |
62,459,433 | https://en.wikipedia.org/wiki/Thomas%20Arthur%20Rickard | T. A. Rickard (1864–1953), formally known as Thomas Arthur Rickard, was born on 29 August 1864 in Italy. Rickard's parents were British, and he became a mining engineer practising in the United States, Europe and Australia. He was also a publisher and author on mine engineering subjects.
Biography
Family and education
Thomas Arthur Rickard was born in Crotone, Italy, the son of Thomas Rickard, a Cornish mining engineer. His grandfather was a Cornish miner, Captain James Rickard. His cousin Tom Rickard was mayor of Berkeley, California at the time of the 1906 San Francisco earthquake and fire. He was educated in Russia and England. In 1882 Rickard entered the Royal School of Mines, London from which he graduated in 1885.
Career
1885 Assayer, British mining firm, Idaho Springs, Colorado
1886 Assistant Manager, California Gold Mining Co., Colorado
1887 Manager, Union Gold Mine, San Andreas, Calaveras County, California
1889-1891 Consultant investigating mines in England and Australia
1891 In charge, Silver/Lead/Gold mines, French Alps/Isere district
1892-1893 Investigating mines in Western U.S.A.
1894 Manager, Enterprise Mine, Colorado
1895-1901 State Geologist, Colorado - appointed by Governor McIntyre and re-appointed by the next two governors
1897-1898 Consultant investigating mines in Australia and Canada, among other work
1903 Editor-in-chief, Engineering and Mining Journal, New York
In 1903 W.E. Ford published an article in the American Journal of Science naming a new mineral Rickardite after Rickard.
1905 Purchased Mining and Scientific Press, San Francisco
1906-1909 Editor, Mining and Scientific Press, San Francisco
1909-1915 Founding Editor, Mining Magazine, London
1915-1922 Editor, Mining and Scientific Press, San Francisco
1922-1925 contributing editor, Engineering and Mining Journal, following the amalgamation of Mining and Scientific Press with that Journal
1925- Devoted his time to writing
Death
Rickard died in Oak Bay, British Columbia on 15 August 1953.
Memberships and awards
Institution of Mining and Metallurgy
1896 elected Member
1903-1909 Member of Council
1932 awarded Gold Medal "in recognition of his services in the general advancement of mining engineering, with special reference to his contributions to technical and historical literature"
1948 made Honorary Member "in recognition of his long and valued services to the mining and metallurgical profession and to the Institution"
Canadian Institute of Mining and Metallurgy
Member
University of Colorado
Honorary D.Sc.
Royal School of Mines (Old Students’) Association
1913 Founder
First Honorary Secretary
American Institute of Mining and Metallurgical Engineers (AIME)
1935 made Honorary Member
Published works
‘Minerals which accompany gold and their bearing upon the richness of ore deposits’ Trans I.M.M., vol. 6, 1897-8
‘Cripple Creek goldfield’ Trans I.M.M., vol. 8, 1899-1900
A guide to technical writing (1908)
'Across the San Juan Mountains', 1907, Dewey Publishing Company, San Francisco
‘Standardization of English in technical literature’ Trans I.M.M., vol. 19, 1909–10
‘Domes of Nova Scotia’ Trans I.M.M., vol. 21, 1911–12
‘Persistence of ore in depth’ Trans I.M.M., vol. 24, 1914–15
‘The later Argonauts‘ Trans I.M.M., vol. 36, 1926-7
‘Copper mining in Cyprus’ Trans I.M.M., vol. 39, 1929–30
‘Gold and silver as money metal’ Trans I.M.M., vol. 41, 1931-32
Man and Metals (1932)
A History of American Mining. New York & London: McGraw-Hill (1932)
‘The primitive smelting of copper and bronze’ Trans I.M.M., vol. 44, 1934–35
‘The primitive use of gold’ Trans I.M.M., vol. 44, 1934–35
Retrospect (1937) - his autobiography
"Indian Participation in the Gold Discoveries." British Columbia Historical Quarterly 2:1 (1938): 3-18
The Romance of Mining. Toronto: Macmillan (1944)
Historic Backgrounds of British Columbia. Vancouver: Wrigley Printing (1948)
Autumn Leaves. Vancouver: Wrigley Printing (1948)
References
1864 births
1953 deaths
Mining engineers
People from Crotone
British expatriates in Italy
British expatriates in the United States | Thomas Arthur Rickard | [
"Engineering"
] | 920 | [
"Mining engineering",
"Mining engineers"
] |
63,355,403 | https://en.wikipedia.org/wiki/Prothioconazole | Prothioconazole is a synthetic chemical produced primarily for its fungicidal properties. It is a member of the class of compounds triazoles, and possesses a unique toxophore in this class of fungicides. Its effective fungicidal properties can be attributed to its ability to inhibit CYP51A1. This enzyme is required to biosynthesize ergosterol, a key component in the cell membrane of fungi.
Prothioconazole was first introduced into the market in 2004 by Bayer CropScience and quickly gained popularity due to its broad spectrum of activity against many fungal diseases of important cereal crops. It is used as a solo product under the trade name Proline, and in various mixtures in many other commercially produced fungicides.
Synthesis
The Grignard reagent derived from 2-chlorobenzyl chloride is added across the carbonyl double bond of 1-(1-chlorocyclopropyl)-2-chloroethan-1-one. The chlorine of the chloromethyl group is subsequently substituted by 1,2,4-triazole. Finally, to introduce the thioketone group at position 5 of the 1,2,4-triazole, the compound is first lithiated with n-butyllithium, followed by the addition of elemental sulfur (S8). This synthesis is not enantioselective, resulting in a racemic mixture.
Chemical properties
Prothioconazole does not dissolve well in water but can be dissolved in acetone, esters and polyethylene glycol.
Photodegradation proceeds to completion, with a half-life of 47.7 h.
It does not readily undergo hydrolysis: at a pH of 4 and a temperature of 50 °C, half of the molecules are hydrolyzed only after 120 days. The primary degradation product is prothioconazole-desthio. This product possesses average mobility in the soil, and its stability to hydrolysis consequently leads to its persistence in soil under aerobic conditions, with total degradation in soil taking around 14.7 days. It is also highly resistant to aqueous photolysis and to degradation by both aerobic and anaerobic aquatic organisms.
Toxicology
Classification
Extrapolation from animal studies led to prothioconazole and its metabolites being classified as "Not likely to be Carcinogenic to Humans" by the USEPA. Under the GHS, prothioconazole is classified as very toxic to aquatic life with long-lasting effects (H410).
The acceptable daily intake (ADI) for prothioconazole amounts to 0.01 mg/kg body weight per day, and the acute reference dose (ARfD) was likewise determined to be 0.01 mg/kg bw per day.
Toxicity
Experiments were conducted on animals in which the primary route of uptake was oral administration. Coupling the compound to a radioactive label revealed enterohepatic circulation of the compound. At the LOAEL, prothioconazole and its metabolites target the liver, kidneys and the bladder. The median lethal dose (LD50) is 6200 mg/kg bw in rats. The dermal LD50 amounted to more than 2000 mg/kg bw, whereas the 4-hour inhalation LC50 was determined to be over 4.9 mg/L. Short-term studies assessed adverse hepatic effects: an increase in liver weight, increased activity of liver enzymes, and microscopic lesions. Prothioconazole was reported to be irritating to rabbit eyes but not skin. Studies have shown that elimination via the feces is the main route of excretion, with over 70% excreted within 24 hours. The half-life of elimination was deduced to be 44.3 hours.
Metabolism in animals
The biotransformation of prothioconazole proceeds by either desulfuration or oxidative hydroxylation of the phenyl group and subsequent conjugation with glucuronic acid. The major metabolites maintain the triazolinthione moiety in all species investigated. The major metabolite was prothioconazole-S-glucuronide, which results from phase II reactions. A linear dose-response relationship was observed for prothioconazole-desthio residues in liver and kidney at different feeding levels.
Metabolism in plants
Prothioconazole-desthio is the major metabolite found in all plant species investigated. Prothioconazole-desthio and prothioconazole share similar toxicological properties. Studies suggest that the plant takes up 1,2,4-triazole from the soil and directly metabolizes it, as the presence of free 1,2,4-triazole was undetectable.
Biochemical properties
Interactions
The primary mechanism of fungicidal action involves the inhibition of CYP51, a crucial component in the demethylation of lanosterol or 24-methylenedihydrolanosterol at position 14. Disruption of this process results in impaired biosynthesis of ergosterol. Ergosterol, a precursor of vitamin D2, is essential for the structure of the cell membrane in many fungal species.
Studies also suggest that prothioconazole can interact with and temporarily suppress thyroid peroxidase. This enzyme is responsible for iodine (I2) formation from iodide (I−). Inhibition of this process results in decreased production of thyroid hormones in humans, such as thyroxine or triiodothyronine.
References
Fungicides
Triazoles
Cyclopropanes
2-Chlorophenyl compounds
Thioureas
Tertiary alcohols | Prothioconazole | [
"Biology"
] | 1,204 | [
"Fungicides",
"Biocides"
] |
63,355,855 | https://en.wikipedia.org/wiki/Cytokeratin%205/6%20antibodies | Cytokeratin 5/6 antibodies are antibodies that target both cytokeratin 5 and cytokeratin 6. These are used in immunohistochemistry, often called CK 5/6 staining, including the following applications:
Identifying basal cells or myoepithelial cells in the breast and prostate.
For breast pathology, also in distinguishing usual ductal hyperplasia (UDH) and papillary lesions (having a mosaic-like pattern) from ductal carcinoma in situ, which is usually negative. Cyclin D1 and CK5/6 staining could be used in concert to distinguish between a diagnosis of papilloma (Cyclin D1 < 4.20%, CK 5/6 positive) and papillary carcinoma (Cyclin D1 > 37.00%, CK 5/6 negative).
In the lung, distinguishing epithelioid mesothelioma (CK5/6 positive in 83%) from lung adenocarcinoma (CK5/6 negative in 85%).
Until recently, the diagnostic method predominantly depended on identifying antibody responses that are positive for adenocarcinoma and negative for mesothelioma.
Cytokeratin 5/6 (CK5/6) is a biomarker that has emerged as a valuable tool in distinguishing epithelioid pleural mesothelioma from metastatic adenocarcinoma. In a study comparing its effectiveness with other markers, CK5/6 showed high sensitivity, staining positively in 92% of epithelioid pleural mesothelioma cases. In contrast, only 14% of metastatic adenocarcinomas were positive for CK5/6. Cytokeratin 5/6 also stains reactive mesothelium, which limits its specificity. Overall, CK5/6, along with other markers like calretinin and thrombomodulin, demonstrates high sensitivity for epithelioid mesothelioma, making it a valuable tool in diagnostic pathology.
References
Antibodies
Biochemistry
Immunohistochemistry | Cytokeratin 5/6 antibodies | [
"Chemistry",
"Biology"
] | 441 | [
"Biochemistry",
"nan"
] |
63,357,147 | https://en.wikipedia.org/wiki/J%C3%BCrgen%20Meyer-ter-Vehn | Jürgen Meyer-ter-Vehn (born 16 February 1940 in Berlin, Germany) is a German theoretical physicist who specializes in laser-plasma interactions at the Max Planck Institute for Quantum Optics. He published under the name Meyer until 1973.
Meyer-ter-Vehn's work examined the physical principles of inertial fusion with lasers and heavy ion beams. In the 2000s, he dealt with relativistic laser-plasma interaction (where, for example, due to the relativistic increase in mass, new effects occur such as induced transparency and self-focusing with channel formation) and with the formation of plasma blocks by ultra-short terawatt laser pulses for laser fusion (fast ignition). He also further developed John M. Dawson's concept of the wakefield accelerator, in which extremely high electric fields are generated by laser-induced charge separation in plasma (a possible accelerator concept).
Life
From 1959, Meyer-ter-Vehn studied physics at the University of Münster and the Ludwig Maximilian University of Munich as a scholarship holder of the German National Academic Foundation, where he obtained his diploma in 1966. In 1969, he received his doctorate in theoretical nuclear physics from the Technical University of Munich. He researched at the Technical University of Munich, the Lawrence Berkeley National Laboratory, the Paul Scherrer Institute and the Jülich Research Center. In 1976, he habilitated at the Technical University of Munich, where he has been an associate professor since 1997. From 1979, he was in the laser research group of the Max Planck Institute for Plasma Physics in Munich, from which the Max Planck Institute for Quantum Optics emerged in 1981. Until 2005, he was group leader for laser plasma theory.
Until the end of the 1970s, he mainly dealt with theoretical nuclear physics.
He was married to Helga Meyer-ter-Vehn (died 2011) and has two sons, Tobias Meyer-ter-Vehn and Moritz Meyer-ter-Vehn, and four grand-daughters, Rebekka, Lili, Clara, and Sophie.
Honors and awards
In 1997, Meyer-ter-Vehn received the American Nuclear Society's Edward Teller Award. In 2009, he received the Hannes Alfvén Prize from the European Physical Society for "his seminal theoretical work in the fields of inertial confinement fusion (ICF), relativistic laser–plasma interaction and laser wakefield electron acceleration".
Books
References
1940 births
Living people
20th-century German physicists
Plasma physicists
University of Münster alumni
Ludwig Maximilian University of Munich alumni
Technical University of Munich alumni | Jürgen Meyer-ter-Vehn | [
"Physics"
] | 516 | [
"Plasma physicists",
"Plasma physics"
] |
63,359,699 | https://en.wikipedia.org/wiki/Edwin%20Vedejs | Edwin Vedejs () (; January 31, 1941 – December 2, 2017) was a Latvian-American professor of chemistry. In 1967, he joined the organic chemistry faculty at University of Wisconsin. He rose through the ranks during his 32 years at Wisconsin being named Helfaer Professor (1991–1996) and Robert M. Bock Professor (1997–1998). In 1999, he moved to the University of Michigan and served as the Moses Gomberg Collegiate Professor of Chemistry for the final 13 years of his tenure. He was elected a fellow of the American Chemical Society in 2011. After his retirement in 2011, the University of Michigan established the Edwin Vedejs Collegiate Professor of Chemistry Chair. Vedejs died on December 2, 2017, in Madison, Wisconsin.
Early life and education
Edwin "Ed" Vedejs was born in Riga, Latvia to Velta (nee Robežnieks) and Nikolajs Vedējs. Not long after his birth, the German occupation of Latvia during World War II occurred followed by the Soviet re-occupation of Latvia in 1944. These events forced his family to settle in the Fischbach Displaced Persons camp in Germany for six years. In 1950, they emigrated to the United States and first settled in Fort Atkinson, WI. They eventually moved to Grand Rapids, MI.
He attended Grand Rapids Junior College for a few years before transferring to the University of Michigan, where he received a BS degree in 1962. He moved to the University of Wisconsin and joined the group of Professor Hans Muxfeldt for his Ph.D. studies (Progress toward the total synthesis of terramycin), which he completed in 1966. From 1966–67, he did post-doctoral research on the total synthesis of prostaglandins at Harvard University in the laboratory of Nobel Laureate Professor E. J. Corey.
Research
Vedejs' main areas of research focus included organic synthesis methodologies and reaction mechanisms. His group targeted the synthesis of several natural products, such as retronecine, mitomycin, and cytochalasin, but the completion of a total synthesis was always secondary to the main goal of exploring new methodologies. His mechanistic research on the Wittig reaction revealed the importance of the oxaphosphetane intermediate. The application of heteroatoms such as nitrogen, sulfur, phosphorus, boron, silicon and tin was often prominently featured, which has been summarized in his self-penned account of his work. Vedejs also tackled a wide range of methodologies aimed at stereoselective synthesis, including protonation of carbanions, acylation and alkylation of achiral and prochiral nucleophiles, parallel kinetic resolution, and control of configuration by crystallization-induced asymmetric transformation.
Over the course of his career, Vedejs published over 230 peer-reviewed articles. He served as an associate editor of the Journal of the American Chemical Society from 1994 to 1999, as chair of the NIH Medicinal Chemistry Study Section from 1990 to 1991, as chair of the Organic Division of the American Chemical Society in 2003, and as a member of the Organic Syntheses Board of Editors from 1980 to 1988. He served as editor (along with Scott E. Denmark) of the three volume series Lewis Base Catalysis in Organic Synthesis. Over the course of his 45 years in academia, he mentored over 80 doctoral students, and numerous post-doctoral fellows and undergraduates.
Awards and honors
Alfred P. Sloan Research Fellowship, 1971–1973
Alexander von Humboldt Senior Scientist Award, 1984
Member of the Latvian Academy of Sciences, 1992
Paul Walden Medal, 1997
Herbert C. Brown Award for Creative Research in Synthetic Methods, 2004
Grand Medal of the Latvian Academy of Sciences, 2005
Order of the Three Stars, Republic of Latvia, 2006
Elected fellow of the American Chemical Society, 2011
Selected publications
References
External links
Edwin Vedejs biography from Scripps Research
Total Synthesis of Zygosporin E (Vedejs) from University of Wisconsin, Prof. Reich, Total Syntheses
Total Synthesis of Retronecine (Vedejs) from University of Wisconsin, Prof. Reich, Total Syntheses
Report of Faculty Retirement, Edwin Vedejs, Ph.D. from University of Michigan
Edwin Vedejs from the University of Michigan Faculty History Project
1941 births
2017 deaths
Latvian emigrants to the United States
Organic chemists
Fellows of the American Chemical Society
University of Michigan alumni
University of Wisconsin–Madison alumni
University of Wisconsin–Madison faculty
Scientists from Riga
Latvian World War II refugees
University of Michigan faculty | Edwin Vedejs | [
"Chemistry"
] | 920 | [
"Organic chemists"
] |
63,361,034 | https://en.wikipedia.org/wiki/Power%20in%20Numbers%3A%20The%20Rebel%20Women%20of%20Mathematics | Power in Numbers: The Rebel Women of Mathematics is a book on women in mathematics, by Talithia Williams. It was published in 2018 by Race Point Publishing.
Topics and related works
This book is a collection of biographies of 27 women mathematicians, and brief sketches of the lives of many others. It is similar to previous works including Osen's Women in Mathematics (1974), Perl's Math Equals (1978), Henrion's Women in Mathematics (1997), Murray's Women Becoming Mathematicians (2000), Complexities: Women in Mathematics (2005), Green and LaDuke's Pioneering Women in American Mathematics (2009), and Swaby's Headstrong (2015).
The book is divided into three sections. The first two cover mathematics before and after World War II, when women's mathematical contributions to codebreaking and other aspects of the war effort became crucial;
together they include the biographies of 11 mathematicians. The final section, on modern (post-1965) mathematics, has another 16. Mathematics is interpreted in a broad sense, including people who trained as mathematicians and worked in industry, or who made mathematical contributions in other fields. It includes people from more diverse backgrounds than previous such collections, including 18th-century Chinese astronomer Wang Zhenyi, Native American engineer Mary G. Ross, African-American rocket scientist Annie Easley, Iranian mathematician Maryam Mirzakhani, and Mexican-American mathematician Pamela E. Harris.
Mathematicians
The mathematicians discussed in this book include:
Part I: The Pioneers
Marie Crous
Émilie du Châtelet
Maria Gaetana Agnesi
Philippa Fawcett
Isabel Maddison
Grace Chisholm Young
Wang Zhenyi
Sophie Germain
Winifred Edgerton Merrill
Sofya Kovalevskaya
Emmy Noether
Euphemia Haynes
Part II: From Code Breaking to Rocket Science
Grace Hopper
Mary G. Ross
Dorothy Vaughan
Katherine Johnson
Mary Jackson
Shakuntala Devi
Annie Easley
Margaret Hamilton
Part III: Modern Math Mavens
Sylvia Bozeman
Eugenia Cheng
Carla Cotwright-Williams
Pamela E. Harris
Maryam Mirzakhani
Ami Radunskaya
Daina Taimiņa
Tatiana Toro
Chelsea Walton
Sara Zahedi
Audience and reception
The book is aimed at a young audience, with many images and few mathematical details. Nevertheless, each biography is accompanied by a general-audience introduction to the subject's mathematical work, and beyond images of the women profiled, the book includes many mathematical illustrations and historical images that bring to life these contributions. Reviewer Emille Davie Lawrence suggests that the book could also find its way to the coffee tables of professional mathematicians, and spark conversations with guests.
Reviewer Amy Ackerberg-Hastings criticizes the book for overlooking much scholarly work on the subject of women in mathematics, for its lack of detail for some notable women including Émilie du Châtelet and Maria Gaetana Agnesi, and for omitting others such as Mary Somerville. Nevertheless, she recommends it as a "gift book for middle schoolers", as a way of motivating them to work in STEM fields.
Reviewer Allan Stenger notes with approval the book's inclusion of information about how each subject became interested in mathematics, and despite catching some minor errors calls it "a good bet for inspiring bright young women to have an interest in math". Similarly, reviewer Angela Mihai writes that it "will educate and encourage many aspiring mathematicians".
References
Women in mathematics
Biographies and autobiographies of mathematicians
2018 non-fiction books | Power in Numbers: The Rebel Women of Mathematics | [
"Technology"
] | 711 | [
"Women in science and technology",
"Women in mathematics"
] |
63,362,595 | https://en.wikipedia.org/wiki/Anthony%20Milner%20Lane | Anthony Milner Lane (1928–2011) was a leading theoretical nuclear physicist who had a career in the Theoretical Physics Division at the Atomic Energy and Research Establishment (AERE) at Harwell.
He was elected Fellow of the Royal Society in 1975.
References
1928 births
2011 deaths
Fellows of the Royal Society
Nuclear physicists
English nuclear physicists | Anthony Milner Lane | [
"Physics"
] | 70 | [
"Nuclear physicists",
"Nuclear physics"
] |
63,363,213 | https://en.wikipedia.org/wiki/Hiptmair%E2%80%93Xu%20preconditioner | In mathematics, Hiptmair–Xu (HX) preconditioners are preconditioners for solving and problems based on the auxiliary space preconditioning framework. An important ingredient in the derivation of HX preconditioners in two and three dimensions is the so-called regular decomposition, which decomposes a Sobolev space function into a component of higher regularity and a scalar or vector potential. The key to the success of HX preconditioners is the discrete version of this decomposition, which is also known as HX decomposition. The discrete decomposition decomposes a discrete Sobolev space function into a discrete component of higher regularity, a discrete scale or vector potential, and a high-frequency component.
HX preconditioners have been used for accelerating a wide variety of solution techniques, thanks to their highly scalable parallel implementations, known as the AMS and ADS preconditioners. The HX preconditioner was identified by the U.S. Department of Energy as one of the top ten breakthroughs in computational science in recent years. Researchers from Sandia, Los Alamos, and Lawrence Livermore National Labs use this algorithm for modeling fusion with magnetohydrodynamic equations. Moreover, this approach will also be instrumental in developing optimal iterative methods in structural mechanics, electrodynamics, and modeling of complex flows.
HX preconditioner for $H(\mathrm{curl})$
Consider the following problem: Find $u \in H_h(\mathrm{curl})$ such that
$$(\alpha \nabla \times u, \nabla \times v) + (\beta u, v) = (f, v) \quad \forall\, v \in H_h(\mathrm{curl}),$$
with $f \in [L^2(\Omega)]^3$.
The corresponding matrix form is
$$A_{\mathrm{curl}}\, x = b.$$
The HX preconditioner for the $H(\mathrm{curl})$ problem is defined as
$$B_{\mathrm{curl}} = S_{\mathrm{curl}} + \Pi_h^{\mathrm{curl}} A_{\mathrm{vgrad}}^{-1} (\Pi_h^{\mathrm{curl}})^{T} + G\, A_{\mathrm{grad}}^{-1} G^{T},$$
where $S_{\mathrm{curl}}$ is a smoother (e.g., Jacobi smoother, Gauss–Seidel smoother), $\Pi_h^{\mathrm{curl}}$ is the canonical interpolation operator for the $H_h(\mathrm{curl})$ space, $A_{\mathrm{vgrad}}$ is the matrix representation of the discrete vector Laplacian defined on $[S_h]^3$ (where $S_h$ is the scalar nodal finite element space), $G$ is the discrete gradient operator, and $A_{\mathrm{grad}}$ is the matrix representation of the discrete scalar Laplacian defined on $S_h$. Based on the auxiliary space preconditioning framework, one can show that
$$\kappa(B_{\mathrm{curl}} A_{\mathrm{curl}}) \leq C,$$
where $\kappa(\cdot)$ denotes the condition number of a matrix and the constant $C$ is independent of the mesh size.
In practice, inverting $A_{\mathrm{vgrad}}$ and $A_{\mathrm{grad}}$ might be expensive, especially for large scale problems. Therefore, we can replace their inversion by spectrally equivalent approximations, $B_{\mathrm{vgrad}}$ and $B_{\mathrm{grad}}$ (e.g., one algebraic multigrid cycle), respectively. And the HX preconditioner for $H(\mathrm{curl})$ becomes
$$B_{\mathrm{curl}} = S_{\mathrm{curl}} + \Pi_h^{\mathrm{curl}} B_{\mathrm{vgrad}} (\Pi_h^{\mathrm{curl}})^{T} + G\, B_{\mathrm{grad}}\, G^{T}.$$
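The additive structure of the preconditioner translates directly into code. The following Python/SciPy sketch is illustrative only: the operator names are placeholders for assembled finite element matrices, and a Jacobi smoother and exact sparse solves stand in for the smoother and the spectrally equivalent approximations. It applies $B_{\mathrm{curl}}$ within a preconditioned conjugate gradient solve:

import numpy as np
from scipy.sparse.linalg import splu, LinearOperator, cg

def hx_curl_preconditioner(A_curl, P, A_vgrad, G, A_grad):
    # B r = S r + P A_vgrad^{-1} P^T r + G A_grad^{-1} G^T r
    diag = A_curl.diagonal()                   # Jacobi smoother: S r = r / diag(A_curl)
    solve_vgrad = splu(A_vgrad.tocsc()).solve  # in practice: one AMG V-cycle instead
    solve_grad = splu(A_grad.tocsc()).solve
    def matvec(r):
        return r / diag + P @ solve_vgrad(P.T @ r) + G @ solve_grad(G.T @ r)
    n = A_curl.shape[0]
    return LinearOperator((n, n), matvec=matvec)

# usage: x, info = cg(A_curl, b, M=hx_curl_preconditioner(A_curl, P, A_vgrad, G, A_grad))

The same pattern gives the $H(\mathrm{div})$ preconditioner of the next section, with the discrete curl $C_h$ and $B_{\mathrm{curl}}$ replacing the gradient term.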
HX Preconditioner for $H(\mathrm{div})$
Consider the following problem: Find $u \in H_h(\mathrm{div})$ such that
$$(\alpha \nabla \cdot u, \nabla \cdot v) + (\beta u, v) = (f, v) \quad \forall\, v \in H_h(\mathrm{div}),$$
with $f \in [L^2(\Omega)]^3$.
The corresponding matrix form is
$$A_{\mathrm{div}}\, x = b.$$
The HX preconditioner for the $H(\mathrm{div})$ problem is defined as
$$B_{\mathrm{div}} = S_{\mathrm{div}} + \Pi_h^{\mathrm{div}} A_{\mathrm{vgrad}}^{-1} (\Pi_h^{\mathrm{div}})^{T} + C_h\, A_{\mathrm{curl}}^{-1} C_h^{T},$$
where $S_{\mathrm{div}}$ is a smoother (e.g., Jacobi smoother, Gauss–Seidel smoother), $\Pi_h^{\mathrm{div}}$ is the canonical interpolation operator for the $H_h(\mathrm{div})$ space, $A_{\mathrm{vgrad}}$ is the matrix representation of the discrete vector Laplacian defined on $[S_h]^3$, and $C_h$ is the discrete curl operator.
Based on the auxiliary space preconditioning framework, one can show that
$$\kappa(B_{\mathrm{div}} A_{\mathrm{div}}) \leq C.$$
For $A_{\mathrm{curl}}^{-1}$ in the definition of $B_{\mathrm{div}}$, we can replace it by the HX preconditioner for the $H(\mathrm{curl})$ problem, e.g., $B_{\mathrm{curl}}$, since they are spectrally equivalent. Moreover, inverting $A_{\mathrm{vgrad}}$ might be expensive and we can replace it by a spectrally equivalent approximation $B_{\mathrm{vgrad}}$. This leads to the following practical HX preconditioner for the $H(\mathrm{div})$ problem:
$$B_{\mathrm{div}} = S_{\mathrm{div}} + \Pi_h^{\mathrm{div}} B_{\mathrm{vgrad}} (\Pi_h^{\mathrm{div}})^{T} + C_h\, B_{\mathrm{curl}}\, C_h^{T}.$$
Derivation
The derivation of HX preconditioners is based on the discrete regular decompositions for $H_h(\mathrm{curl})$ and $H_h(\mathrm{div})$; for completeness, let us briefly recall them.
Theorem [Discrete regular decomposition for $H(\mathrm{curl})$]:
Let $\Omega$ be a simply connected bounded domain. For any function $v_h \in H_h(\mathrm{curl})$, there exist $\tilde{v}_h \in H_h(\mathrm{curl})$, $p_h \in S_h$, and $\psi_h \in [S_h]^3$ such that
$$v_h = \tilde{v}_h + \nabla p_h + \Pi_h^{\mathrm{curl}} \psi_h$$
and
$$\|h^{-1} \tilde{v}_h\|_{L^2} + \|p_h\|_{H^1} + \|\psi_h\|_{H^1} \lesssim \|v_h\|_{H(\mathrm{curl})}.$$
Theorem [Discrete regular decomposition for $H(\mathrm{div})$]:
Let $\Omega$ be a simply connected bounded domain. For any function $v_h \in H_h(\mathrm{div})$, there exist $\tilde{v}_h \in H_h(\mathrm{div})$, $w_h \in H_h(\mathrm{curl})$, and $\psi_h \in [S_h]^3$ such that
$$v_h = \tilde{v}_h + \nabla \times w_h + \Pi_h^{\mathrm{div}} \psi_h$$
and
$$\|h^{-1} \tilde{v}_h\|_{L^2} + \|w_h\|_{H(\mathrm{curl})} + \|\psi_h\|_{H^1} \lesssim \|v_h\|_{H(\mathrm{div})}.$$
Based on the above discrete regular decompositions, together with the auxiliary space preconditioning framework, we can derive the HX preconditioners for $H(\mathrm{curl})$ and $H(\mathrm{div})$ problems as shown before.
References
Polynomials | Hiptmair–Xu preconditioner | [
"Mathematics"
] | 758 | [
"Polynomials",
"Algebra"
] |
63,363,283 | https://en.wikipedia.org/wiki/Durward%20William%20John%20Cruickshank | Durward William John Cruickshank (7 March 1924 – 13 July 2007), often known as D. W. J. Cruickshank, was a British crystallographer whose work transformed the precision of determining molecular structures from X-ray crystal structure analysis. He developed the theoretical framework for anisotropic displacement parameters, also known as the thermal ellipsoid, for crystal structure determination in a series of papers published in 1956 in Acta Crystallographica.
Early life and education
Cruickshank was born in London on 7 March 1924, the son of William Durward Cruickshank and his wife Margaret Ombler Meek, both of whom were doctors. He was educated at St Lawrence College in Ramsgate, Kent. He studied engineering at Loughborough College (which became Loughborough University in 1966), receiving an external degree with first class honours from the University of London in 1944.
From 1944 to 1946 he worked for the Admiralty in the Special Operations Executive (SOE) on naval operational research, including on underwater submersibles.
Cruickshank subsequently studied mathematics at St John's College, Cambridge, graduating with a first-class BA in 1949, an MA in 1954 and finally a ScD in 1961. He received a PhD from the University of Leeds in 1952.
Academic career
Cruickshank joined Gordon (later Sir Gordon) Cox's group at the University of Leeds as a temporary research assistant and where he was appointed Lecturer in Mathematical Chemistry in 1950 and promoted to Reader in 1957. From 1962 to 1967 he was the first Joseph Black Professor of Chemistry at the University of Glasgow.
In 1967 Cruickshank moved to Manchester, becoming Professor of Theoretical Chemistry at University of Manchester Institute of Science and Technology (UMIST) where he remained until his retirement as Emeritus Professor in 1983. He was Deputy Principal there from 1971 to 1972. UMIST became part of the University of Manchester in 2004.
He kept doing research after his retirement, publishing his last paper in 2007, the year he died.
Honours and awards
Cruickshank was elected a Fellow of the Royal Society (FRS) in 1979. In 1991, he received the Dorothy Hodgkin Prize of the British Crystallographic Association, where he served as Vice President from 1983 to 1985.
Cruickshank was awarded the honorary degree of DSc by the University of Glasgow in 2004.
Death
Cruickshank died from cancer in Alderley Edge, Cheshire on 13 July 2007 at the age of 83. His wife, Marjorie, predeceased him. He was survived by a son and a daughter.
Archives
Cruickshank's papers are held by the University of Manchester Library.
See also
Timeline of crystallography
References
1924 births
2007 deaths
British crystallographers
Fellows of the Royal Society
People educated at St Lawrence College, Ramsgate
Alumni of St John's College, Cambridge
Alumni of Loughborough University
Alumni of the University of London
Alumni of the University of Leeds
Mathematical chemistry
Academics of the University of Manchester
Academics of the University of Glasgow
Academics of the University of Leeds | Durward William John Cruickshank | [
"Chemistry",
"Mathematics"
] | 640 | [
"Drug discovery",
"Applied mathematics",
"Molecular modelling",
"Mathematical chemistry",
"Theoretical chemistry"
] |
56,391,106 | https://en.wikipedia.org/wiki/Dioxide%20Materials | Dioxide Materials was founded in 2009 in Champaign, Illinois, and is now headquartered in Boca Raton, Florida. Its main business is to develop technology to lower the world's carbon footprint. Dioxide Materials is developing technology to convert carbon dioxide, water and renewable energy into carbon-neutral gasoline (petrol) or jet fuel. Applications include CO2 recycling, sustainable fuels production and reducing curtailment of renewable energy(i.e. renewable energy that could not be used by the grid).
Carbon Dioxide Electrolyzer Technology
Carbon Dioxide electrolyzers are a major part of Dioxide Materials' business. The work started in response to a Department of Energy challenge to find better catalysts for electrochemical reduction of carbon dioxide. At the time the overpotential (i.e. wasted voltage) was too high, and the rate too low for practical applications. Workers at Dioxide Materials theorized that a bifunctional catalyst consisting of a metal and an ionic liquid might lower the overpotential for electrochemical reduction of carbon dioxide. Indeed, it was found that the combination of two catalysts, silver nanoparticles and an ionic liquid solution containing equal volumes of 1-ethyl-3-methylimidazolium tetrafluoroborate (EMIM-BF4) and water, reduced the overpotential for CO2 conversion to carbon monoxide (CO) from about 1 volt to only 0.17 volts. Workers from other laboratories have subsequently reproduced the findings on many metals, and with several ionic liquids. Dioxide Materials has shown that a similar enhancement occurs during alkaline water electrolysis and the hydrocarboxylation of acetylene ("Reppe chemistry").
At this point, there is still some question about how the imidazolium is able to lower the overpotential for the electrochemical reduction of carbon dioxide. The first step in the electrolysis of CO2 is the addition of an electron into the CO2 or a molecular complex containing CO2. Forming the resultant species, labeled "CO2¯", requires at least an electron-volt of energy per molecule in the absence of the ionic liquid, and that electron-volt of energy is largely wasted during the reaction. Rosen et al. postulated that a new complex forms in the presence of the ionic liquid so that this electron-volt of energy is not wasted, allowing the reaction to follow a lower-energy pathway. Recent work suggests that the new complex is a zwitterion. Other possible pathways (i.e. non-zwitterions) are discussed in Keith et al., Rosen et al., Verdaguer-Casadevall et al., and Shi et al.
Sustainion Membranes
Unfortunately, ionic liquids were found to be too corrosive to be used in practical carbon dioxide electrolyzers. Ionic liquids are strong solvents: they dissolve or corrode the seals, carbon electrodes, and other parts in commercial electrolyzers. As a result, they were difficult to use in practice.
In order to avoid the corrosion, Dioxide Materials switched from ionic liquid catalysts to catalytic anion exchange polymers. A number of polymers were tested, and an imidazolium-functionalized styrene polymer showed the best performance. The membranes were trade-named Sustainion. The use of Sustainion membranes raised the current and lifetime of the CO2 electrolyzer into the commercially useful range. Sustainion membranes have shown conductivities above 100 mS/cm under alkaline conditions at 60 °C and stability for thousands of hours in 1M KOH, and offer a physical mechanical stability that is useful for many different applications. The membranes showed a lifetime over 3000 hours in CO2 electrolyzers at high current densities. More recent research has noted that a cell with an optimized cathode is capable of running for up to 158 days at 200 mA/cm2.
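As a rough consistency check on these figures, Faraday's law converts the reported current density and lifetime into an amount of product per unit electrode area. The short Python sketch below assumes CO as the product, two electrons per CO molecule, and 100% Faradaic efficiency (all simplifying assumptions):

F = 96485.0        # Faraday constant, C/mol
j = 0.200          # current density, A/cm^2 (200 mA/cm^2)
t = 158 * 86400    # run time in seconds (158 days)
Q = j * t          # charge passed per cm^2 of electrode
n_CO = Q / (2 * F) # CO2 + 2 H+ + 2 e- -> CO + H2O, so 2 electrons per CO
print(round(n_CO, 1), "mol CO per cm^2")  # about 14 mol/cm^2 over the run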
References
Electrochemical engineering
Chemical companies of the United States
Companies based in Boca Raton, Florida
Carbon dioxide
Climate change | Dioxide Materials | [
"Chemistry",
"Engineering"
] | 835 | [
"Chemical engineering",
"Electrochemical engineering",
"Electrochemistry",
"Greenhouse gases",
"Electrical engineering",
"Carbon dioxide"
] |
56,398,129 | https://en.wikipedia.org/wiki/Zhong%20Zhong%20and%20Hua%20Hua | Zhong Zhong (, born 27 November 2017) and Hua Hua (, born 5 December 2017) are a pair of identical crab-eating macaques (also referred to as cynomolgus monkeys) that were created through somatic cell nuclear transfer (SCNT), the same cloning technique that produced Dolly the sheep in 1996. They are the first cloned primates produced by this technique. Unlike previous attempts to clone monkeys, the donated nuclei came from fetal cells, not embryonic cells. The primates were born from two independent surrogate pregnancies at the Institute of Neuroscience of the Chinese Academy of Sciences in Shanghai.
Background
Since scientists produced the first cloned mammal, Dolly the sheep, in 1996 using the somatic cell nuclear transfer (SCNT) technique, 23 mammalian species have been successfully cloned, including cattle, cats, dogs, horses and rats. Using this technique for primates had never been successful, and no pregnancy had lasted more than 80 days. The main difficulty was likely the proper programming of the transferred nuclei to support the growth of the embryo. Tetra (born October 1999), a female rhesus macaque, was created by a team led by Gerald Schatten of the Oregon National Primate Research Center using a different technique, called "embryo splitting". She was the first primate cloned by artificial twinning, which is a much less complex procedure than the DNA transfer used for the creation of Zhong Zhong and Hua Hua.
In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, using the same cloning technique that was used with Zhong Zhong and Hua Hua, and the same gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.
Process
Zhong Zhong and Hua Hua were produced by scientists from the Institute of Neuroscience of the Chinese Academy of Sciences in Shanghai, led by Qiang Sun and Mu-ming Poo. They extracted nuclei from the fibroblasts of an aborted fetal monkey (a crab-eating macaque or Macaca fascicularis) and inserted them into egg cells (ova) that had had their own nuclei removed. The team used two enzymes to erase the epigenetic memory of the transferred nuclei of being somatic cells. This crucial reprogramming step allowed the researchers to overcome the main obstacle that had precluded the successful cloning of primates until now. They then placed 21 of these ova into surrogate mother monkeys, resulting in six pregnancies, two of which produced living animals. The monkeys were named Zhong Zhong and Hua Hua, a reference to Zhonghua, a Chinese name for China. Although the success rate was still low, the methods could be improved to increase the survival rate in the future. By comparison, the Scotland-based team that created Dolly the sheep in 1996 required 277 attempts and produced only one lamb.
The scientists also attempted to clone macaques using nuclei from adult donors, which is much more difficult. They implanted 42 surrogates, resulting in 22 pregnancies, but there were still only two infant macaques, and they died soon after birth.
Implications
According to Mu-ming Poo, the principal significance of this event is that it could be used to create genetically identical monkeys for use in animal experiments. Crab-eating macaques are already an established model organism for studies of atherosclerosis, though Poo chose to emphasize neuroscience, naming Parkinson's disease and Alzheimer's disease when he appeared on the radio news program All Things Considered in January 2018.
The birth of the two cloned primates also raised concerns from bioethicists. Insoo Hyun of Case Western Reserve University questioned whether this meant that human cloning would be next. Poo told All Things Considered that "Technically speaking one can clone human[s] ... But we're not going to do it. There's absolutely no plan to do anything on humans."
See also
List of animals that have been cloned
References
External links
2017 animal births
2017 in biology
2017 in China
Cloned animals
Identical twins
Individual animals in China
Individual monkeys
Macaca
Science and technology in China
Animal duos | Zhong Zhong and Hua Hua | [
"Chemistry",
"Biology"
] | 889 | [
"Cell biology",
"Cloned animals",
"Cloning",
"Molecular biology",
"Biochemistry"
] |
56,399,324 | https://en.wikipedia.org/wiki/Pyruvate%20decarboxylation | Pyruvate decarboxylation or pyruvate oxidation, also known as the link reaction (or oxidative decarboxylation of pyruvate), is the conversion of pyruvate into acetyl-CoA by the enzyme complex pyruvate dehydrogenase complex.
The reaction may be simplified as:
Pyruvate + NAD+ + CoA → Acetyl-CoA + NADH + CO2
Pyruvate oxidation is the step that connects glycolysis and the Krebs cycle. In glycolysis, a single glucose molecule (6 carbons) is split into 2 pyruvates (3 carbons each). Because of this, the link reaction occurs twice for each glucose molecule to produce a total of 2 acetyl-CoA molecules, which can then enter the Krebs cycle.
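Doubling the simplified reaction accordingly gives the overall bookkeeping for this step per glucose molecule:

2 Pyruvate + 2 NAD+ + 2 CoA → 2 Acetyl-CoA + 2 NADH + 2 CO2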
Energy-generating ions and molecules, such as amino acids and carbohydrates, enter the Krebs cycle as acetyl coenzyme A and are oxidized in the cycle. The pyruvate dehydrogenase complex (PDC) catalyzes the decarboxylation of pyruvate, resulting in the synthesis of acetyl-CoA, CO2, and NADH. In eukaryotes, this enzyme complex regulates pyruvate metabolism and ensures homeostasis of glucose during absorptive and post-absorptive state metabolism. As the Krebs cycle occurs in the mitochondrial matrix, the pyruvate generated during glycolysis in the cytosol is transported across the inner mitochondrial membrane by a pyruvate carrier under aerobic conditions.
References
Biochemical reactions
Biomolecules
Cellular respiration | Pyruvate decarboxylation | [
"Chemistry",
"Biology"
] | 367 | [
"Cellular respiration",
"Natural products",
"Biochemical reactions",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Metabolism",
"Molecular biology"
] |
72,064,812 | https://en.wikipedia.org/wiki/KYE%20Systems | KYE Systems Group, or KYE, an abbreviation of Kung Ying Enterprises (), is a Taiwanese computer peripheral manufacturer that designs and manufactures and markets human interface devices such as mice under their own brand, Genius. The company also manufactures on an OEM basis for companies such as HP and Microsoft. The company was founded in 1983 and has opened offices internationally.
History
KYE Systems was founded in 1983 in Taipei, Taiwan, by James Jwo (born 1959) and Albert Chen. The company originally did not manufacture peripherals but was instead a systems integrator, assembling IBM PC clones for international export, as part of the company's start-up stage. KYE was founded with US$40,000 in capital; Jwo described himself at first having "little money and few connections". In 1984, the company began developing computer peripherals, namely computer mice, for export. In 1985, they introduced their Genius brand of mice, which had become a popular brand in the United States by the late 1980s, according to PC Week. In 1986, they established KYE International Corporation in the United States in Walnut, California, filing its formal articles of incorporation in 1988. In 1990, the KYE International acquired Mouse Systems, a pioneering peripheral manufacturer of Fremont, California. This acquisition expanded KYE's dealer network and allowed them to absorb Mouse Systems' patents on optical mice technology.
KYE continued to expand their international presence in the 1990s, establishing marketing subsidiaries in the United Kingdom in 1991, in Germany in 1993, and in Hong Kong in 1995. Also in 1995, the company opened a large factory in Shenzhen, China, to complement the company's manufacturing of mice in their home factory of Sanchong, Taiwan, which they purchased in 1987. In 1997, the assembly lines in Shenzhen were expanded to produce image scanners, and in 1998, KYE opened up another factory in nearby Dongguan, China.
KYE had their hands in the manufacturing of multimedia products in the early 1990s, producing graphics cards, and sound cards. A subsidiary devoted to publishing multimedia CD-ROMs was opened in 1993, but KYE folded it in 1996.
In 1997, the company introduced the Genius EasyScroll (also sold as the Mouse Systems ProAgio), the first commercially produced mouse with a scroll wheel. The company held the American patents on scroll-wheel technology through to at least 2009.
KYE by 2010 had an OEM roster of Hewlett-Packard (later HP Inc.), Samsung, Acer, Asus, Best Buy, Foxconn, Microsoft, and Logitech. Microsoft was KYE's largest client in 2010, accounting for 30 percent of KYE's output; KYE built some of Microsoft's Xbox controllers and webcams from their factory in Dongguan.
An April 2010 report by the National Labor Committee (NLC) wrote of sweatshop-like conditions at the Dongguan factory, which had been recruiting 16- to 18-year-old women for summer jobs. According to the NLC's report, the students worked 15-hour shifts, six or seven days a week, and during breaks rested in cramped dormitories. Pay for the students was set at 65¢ per hour, with a 13¢ deduction for cafeteria services. A single line of 20 to 30 employees had to complete 2,000 Microsoft mice in 12 hours, with management raising the production goal after each shift. The NLC report wrote that the factory was crowded, with nearly 1,000 workers sharing a roughly 11,025 square foot room, and that workers were prohibited from conversing, listening to music, or using the bathroom outside of breaks. Following the report, Microsoft stated that they had begun taking "appropriate remedial measures in regard to any findings of vendor misconduct", in accordance with their code of conduct for vendors. KYE Systems responded that they set their wages commensurate with Chinese labor regulations and called the report "a one-sided story without offering us a chance to explain". Chinese government officials on April 19, 2010, cited KYE for failing to register nearly 326 workers between the ages of 16 and 18 and for imposing excessive amounts of overtime—280 collective hours a week, over the allotted 196. Officials forced KYE to rectify the cited complaints within two weeks.
Between 2008 and 2012, KYE's revenue share in computer peripherals dropped from 69 percent in 2008 to 43 percent, while its revenue share in optical imaging and consumer electronics both grew, from 12 percent and 17 percent respectively in 2008 to roughly a quarter each in 2012. In the third quarter of 2013, the company reported an operating income of NT$64 million.
References
External links
1983 establishments in Taiwan
Computer companies of Taiwan
Companies based in Taipei
Computer companies established in 1983
Taiwanese brands
Computer hardware companies
Computer peripheral companies
Electronics companies of Taiwan | KYE Systems | [
"Technology"
] | 992 | [
"Computer hardware companies",
"Computers"
] |
72,074,000 | https://en.wikipedia.org/wiki/African%20Light%20Source | The African Light Source (AfLS) – – is the initiative to build the first Pan-African synchrotron light source. The initiative is currently led – separately – by the African Light Source Foundation and the Africa Synchrotron Initiative (ASI). The aim of this initiative is to establish an advanced synchrotron light source on the African continent, generating intense beams of X-rays, ultraviolet, and infrared light for scientific research and innovation.
Rationale
There are more than 70 synchrotron light sources, including about 30 high- and medium-energy synchrotrons, scattered globally, but Africa is the only continent without any synchrotron light source facility. At the same time, there is a growing need for innovation to address the challenges that impact the lives of many Africans today. Meeting these challenges calls for investment in science, technology and innovation, including large-scale research infrastructure. To help answer this need, the idea for an African light source has been discussed at least since 2000.
The establishment of a synchrotron light source in Africa has significant potential for scientific progress and socioeconomic development. Synchrotron facilities play a vital role in fundamental, applied, and industrial research, driving technological advancements and fostering collaborations across boundaries. By becoming a player in the field of light sources, Africa can contribute to the global scientific endeavor and promote a culture of enlightenment, diversity, and innovation.
African scientists participate in the European Synchrotron Radiation Facility (ESRF), and African nations participate in the SESAME light source. Such participation provides access to the facilities for researchers, and capacity building and training across many aspects of synchrotron operation and technologies. In December 2017, Diamond Light Source, UK, established the Synchrotron Techniques for African Research and Technology (START) programme with £3.7 million of funding from UK Research and Innovation over three years. START aimed to provide access to African researchers, with a focus on energy materials and structural biology.
Leaders
African Light Source Foundation
The African Light Source Foundation, along with its partner organisations, is actively working towards the realisation of this project. The foundation has a defined mandate and roadmap that envisions a 10-15 year timeline for the construction of the actual facility. Young scientists and researchers have opportunities to contribute to the project and join the efforts of the African Light Source Foundation.
In November 2015, the First AfLS Conference was held with 98 delegates from 13 African nations at the European Synchrotron Radiation Facility (ESRF), Grenoble, France. The conference led to the Grenoble Resolutions, which encapsulate the formation of the AfLS Steering Committee, the AfLS Roadmap, and the creation of the AfLS Foundation, registered in South Africa. The AfLS Foundation is chaired by Simon Connell and has received support from Ghanaian president Nana Akufo-Addo, who is championing the project. Since the first conference, and as of August 2023, there have been four further conferences. The AfLS Foundation is actively working on the Conceptual Design Report (CDR) for a light source in Africa.
The African Light Source Foundation is supported by the African Physical Society, the African Astronomical Society, the African Institute of Planetary and Space Sciences, the African Optics and Photonics Society, the African Society for Nanosciences and Nanotechnologies, the Federation of African Societies of Chemistry, the Federation for African Societies of Biochemistry and Molecular Biology, the African Geographical Society, African materials Research Society, the BioStruct-Africa, the Federation of African Immunological Societies, and the Federation of African Medical Physics Organization.
Africa Synchrotron Initiative
In 2018, during the 32nd African Union meeting, in Addis Ababa, the African Union's executive council called on its member states to support a pan-African synchrotron. Subsequently, the committee for Africa Synchrotron Initiative (ASI) was formed in 2019 by the African Academy of Sciences (AAS), chaired by Shaaban Khalil. The African Synchrotron Initiative (ASI) had their first meeting on 20 January 2022.
Challenges
Funding
One of the significant problems with the African Light Source initiative is the need for substantial financial investment. Scientists estimate that around $1 billion is required to establish the synchrotron light source. The ability of African nations to fund the project has been questioned since they struggle to fund national projects, especially considering the economic disparities and competing priorities in African countries.
Infrastructure and Expertise
Building and operating a synchrotron light source require specialised infrastructure and a highly skilled workforce. Africa currently lacks the necessary infrastructure and expertise in accelerator physics and related fields. To meet the infrastructure and expertise requirements for the AfLS, it was suggested by Marcus et al. that African scientists make greater use of existing overseas national light source facilities, dedicated African beamlines, or remote-access beamtime, similar to the UK START programme.
Collaboration and Governance
The African Light Source initiative involves multiple organisations, including the AfLS Foundation and the Africa Synchrotron Initiative (ASI). As of June 2023, the two organisations (the AfLS Foundation and the ASI) are not merging their efforts, which makes governance a challenge since some members are part of both organisations. Sarah Wild asserted that ensuring effective collaboration and coordination among these organisations, as well as establishing a robust governance structure, can be complex and may pose challenges. However, according to Marcus et al., to ensure effective governance of the African Light Source (AfLS), it is recommended to involve regional and pan-African stakeholders as full members in the governing bodies of national light source facilities. This approach will not only foster the development of governance expertise but also raise awareness of the AfLS within these bodies.
Sustainability and Operational Costs
Once established, operating a synchrotron light source involves substantial ongoing costs, estimated at $100 million annually, for maintenance, electricity, and personnel. Ensuring the long-term sustainability of the facility and securing funding for operational costs can be a recurring challenge.
Prioritisation of Resources
Sarah Wild argues that while the African Light Source initiative has the potential to advance scientific research, it may not be the most pressing priority for African countries. Limited resources could be better utilised to address more immediate and critical challenges, such as healthcare, education, poverty reduction, and infrastructure development.
References
External links
African Light Source foundation
Africa Synchrotron Initiative
Momentum grows for the African Light Source by Prof. Simon Connell, YouTube
Synchrotron radiation facilities
Research institutes in Africa
International research institutes | African Light Source | [
"Materials_science"
] | 1,334 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
72,075,093 | https://en.wikipedia.org/wiki/Time%20in%20El%20Salvador | El Salvador observes Central Standard Time (UTC−6) year-round.
IANA time zone database
In the IANA time zone database, El Salvador is given one zone in the file zone.tab—America/El_Salvador. "SV" refers to the country's ISO 3166-1 alpha-2 country code. The zone.tab entry for El Salvador gives the country code (c.c.*) SV, the coordinates* +1342-08912, and the zone name (TZ*) America/El_Salvador; columns marked with * are the columns from zone.tab itself.
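Software that ships the IANA database can use the zone directly; for example, in Python 3.9+ with the standard zoneinfo module (a minimal illustration):

from datetime import datetime
from zoneinfo import ZoneInfo

now = datetime.now(tz=ZoneInfo("America/El_Salvador"))
print(now.strftime("%Y-%m-%d %H:%M %z"))  # offset is always -0600, since no DST is observed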
References
External links
Current time in El Salvador at Time.is
Time in El Salvador at TimeAndDate
Time by country
Time in North America
Geography of El Salvador | Time in El Salvador | [
"Physics"
] | 130 | [
"Spacetime",
"Physical quantities",
"Time",
"Time by country"
] |
72,077,532 | https://en.wikipedia.org/wiki/List%20of%20systems%20biology%20modeling%20software | Systems biology relies heavily on building mathematical models to help understand and make predictions of biological processes. Specialized software to assist in building models has been developed since the arrival of the first digital computers. The following list gives the currently supported software applications available to researchers.
The vast majority of modern systems biology modeling software supports SBML, which is the de facto standard for exchanging models of biological cellular processes. Some tools also support CellML, a standard used for representing physiological processes. The advantage of using standard formats is that even though a particular software application may eventually become unsupported and even unusable, the models developed by that application can be easily transferred to more modern equivalents. This allows scientific research to be reproducible long after the original publication of the work.
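Because SBML serves as the interchange format, a model exported by one tool can be inspected or reused by another. For example, with the python-libsbml bindings (a minimal sketch; "model.xml" is a placeholder filename):

import libsbml

doc = libsbml.readSBML("model.xml")  # parse an SBML file
if doc.getNumErrors() > 0:
    doc.printErrors()                # report read/validation problems
else:
    model = doc.getModel()
    print(model.getNumSpecies(), "species,",
          model.getNumReactions(), "reactions")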
More information about a particular tool can be obtained through its linked reference, which directs either to a peer-reviewed publication or, in some rare cases, to a dedicated Wikipedia page.
Actively supported open-source software applications
General information
When an entry in the SBML column states "Yes, but only for reactions.", it means that the tool only supports the reaction component of SBML. For example, rules, events, etc. are not supported.
Specialist Tools
The following table lists specialist tools that cannot be grouped with the modeling tools.
Feature Tables
Supported modeling paradigms
Differential equation specific features
File format support and interface type
Advanced features (where applicable)
Other features
Particle-based simulators
Particle based simulators treat each molecule of interest as an individual particle in continuous space, simulating molecular diffusion, molecule-membrane interactions and chemical reactions.
Comparison of particle-based simulators
The following list compares the features for several particle-based simulators. This table is edited from a version that was originally published in the Encyclopedia of Computational Neuroscience. System boundaries codes: R = reflecting, A = absorbing, T = transmitting, P = periodic, and I = interacting. * Algorithm is exact but software produced incorrect results at the time of original table compilation. † These benchmark run times are not comparable with others due to differing levels of detail.
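The core update behind such simulators is straightforward. A minimal Brownian-dynamics sketch in Python (illustrative only; the diffusion coefficient, time step, and box size are arbitrary placeholder values, and the reflection step implements the "R" boundary code above):

import numpy as np

rng = np.random.default_rng(seed=1)
n, L, D, dt = 1000, 1.0, 1.0, 1e-4      # particles, box size, diffusion coeff., time step
pos = rng.uniform(0.0, L, size=(n, 3))  # initial molecule positions

for _ in range(100):
    # free diffusion: Gaussian step with standard deviation sqrt(2 D dt) per axis
    pos += rng.normal(0.0, np.sqrt(2.0 * D * dt), size=pos.shape)
    # reflecting boundaries: fold excursions back into the box
    pos = np.abs(pos)
    pos = np.where(pos > L, 2.0 * L - pos, pos)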
Model calibration software
Model calibration is a key activity when developing systems biology models. This table highlights some of the current model calibration tools available to systems biology modelers. The first table list tools that are SBML compatible.
PEtab is a community standard for specifying model calibration runs.
Legacy open-source software applications
The following lists some very early software for modeling biochemical systems, developed before the 1980s. They are listed for historical interest.
The following list shows some of the software modeling applications that were developed in the 1980s and 1990s. They are listed for historical interest.
References
Systems biology
Bioinformatics
Science software
Systems | List of systems biology modeling software | [
"Technology",
"Biology"
] | 557 | [
"Lists of software",
"Computing-related lists",
"Bioinformatics software",
"Bioinformatics",
"Systems biology"
] |
77,809,115 | https://en.wikipedia.org/wiki/Galactic%20Center%20filament | Galactic Center filaments are large radio-emitting filament-shaped structures found in the Galactic Center of the Milky Way. Their cause is unknown. Both vertical and horizontal filaments exist, running vertically (perpendicular to the galactic plane) and horizontally (parallel to the galactic plane) away from the Galactic Center, respectively. Vertical filaments possess strong magnetic fields and emit synchrotron radiation: radiation emitted by particles moved at near-lightspeed through a magnetic field. Although theories have been proposed, the source of these particles is unknown. Horizontal filaments appear to emit thermal radiation, accelerating thermal material in a molecular cloud. They have been proposed to be caused by the outflow from Sagitarius A*, the Milky Way's central black hole, impacting vertical filaments and H II regions of ionized gas around hot stars.
While the vertical filaments can reach 150 light years in length, the horizontal filaments are much shorter, usually around 5 to 10 light years long. A few hundred horizontal filaments are known, far fewer than the number of vertical filaments. Vertical filaments were discovered in 1984 by Farhad Yusef-Zadeh, Mark Morris, and Don Chance; horizontal filaments were discovered in 2023 by Yusef-Zadeh, Ian Heywood and collaborators.
Vertical filaments are often found in pairs and clusters, often stacked equally spaced side by side similar to the strings of a harp. Why they form in clusters or in such a regularly spaced manner remains unknown.
Analyses of galaxy rotation curves have also suggested the existence of vertical gravitating filaments of unclear origin at the center of numerous other galaxies, including (but not limited to) NGC 2841, NGC 2998, NGC 3726, NGC 5371, NGC 5585, NGC 5907, UGC 2885, and Messier 109.
History
Galactic Center filaments, specifically vertical filaments, were first discovered in a 1984 publication by Yusef-Zadeh et al. They were discovered unexpectedly, and initially considered to be possible artifacts, but were confirmed after being observed at multiple wavelengths by multiple groups.
Because the earliest filaments detected were all vertical filaments, oriented perpendicular to the galactic plane, early theories suggested that they may have been related to the Milky Way's magnetic field, oriented in the same manner. A number of theories had been proposed by 1996. One proposal at the time suggested the filaments were cosmic strings. This faced several difficulties, including the lack of observed oscillation of the strings and the apparent splitting of some of the filaments.
Subsequently, before 2004, weaker filaments were discovered that were not perpendicular to the galactic plane. These were initially believed to be oriented randomly with respect to it, and at the time presented difficulties for hypotheses relating Galactic Center filaments to the galactic magnetic field. The radiation emitted from vertical filaments is now known to be synchrotron radiation, caused by particles moving at nearly the speed of light through a magnetic field.
A detailed radio image of the Galactic Center by the MeerKAT telescope published in February 2022 led to the discovery of about ten times more filaments than had been previously known, allowing researchers to study the filaments statistically. Horizontal filaments were discovered in a June 2023 publication by Yusef-Zadeh et al. According to Yusef-Zadeh, they were identified by statistical tests after he happened to notice, looking at images of the filaments, that many seemed to be pointing radially away from the Galactic Center.
References
Further reading
Astronomical radio sources
Milky Way
Astronomical objects discovered in 1984
Astronomical objects discovered in 2023 | Galactic Center filament | [
"Astronomy"
] | 771 | [
"Astronomical events",
"Astronomical radio sources",
"Astronomical objects"
] |
77,809,629 | https://en.wikipedia.org/wiki/Nikolaos%20Mavromatos | Nikolaos Emmanuel Mavromatos (Νικόλαος Εμμανουήλ Μαυρομάτος; born 15 November 1961 in Athens) is a Greek theoretical physicist, specialising in string theory, particle physics, and cosmology. He has an international reputation for his research on quantum spacetimes, uncertainty relations in string theory, and ideas for tests of possible violations of Lorentz invariance and CPT invariance.
Education and career
In 1979 Mavromatos entered the School of Physical Sciences of the National and Kapodistrian University of Athens (abbr. NKUA or UoA), where he graduated in 1983 with a B.Sc. in physics. His bachelor's dissertation was supervised by Christos Nicholas Ktorides. For the academic year 1983–1984 Mavromatos collaborated with Ktorides on extension and completion of original research from the B.Sc. dissertation. During his undergraduate study, Mavromatos learned quantum theory from Fokion T. Hadjioannou and developed collaborative friendships with G. A. Diamandas and B. C. Georgalas. From 1984 to 1987 Mavromatos studied theoretical particle physics at the University of Oxford. His doctoral dissertation Aspects of the low energy limit of string theories was supervised by Christopher Llewellyn Smith and M. Daniel. From 1985 to 1987 Mavromatos was supported by a Domus Graduate Scholarship for study at Linacre College, Oxford. From 1987 to 1990 he was a Junior Research Fellow of Hertford College, Oxford. At CERN he was from October 1990 to December 1992 a Theory Division Research Associate and from January 1993 to 1995 a Scientific Associate. From 1995 to 1999 he held a junior faculty post as an Advanced Research Fellow of the Particle Physics and Astronomy Research Council in the physics department of the University of Oxford. Since 1999 Mavromatos has held tenure as a professor of theoretical physics at King's College London. He has been on academic leave several times for visiting professorships in Spain and in Greece. He has been an invited speaker and has chaired sessions in many international conferences. Since 2005 he has advised the Greek Government for cooperation of Greece with CERN.
Research
Mavromatos is the author or coauthor of more than 260 scientific articles. His research in cosmology and theoretical particle physics includes astroparticle physics, exotic quantum phases, and string theory. He is a pioneer of exploring in mathematical physics the properties of quantum spacetimes and proposing tests of Lorentz invariance by using intense extragalactic light sources to confirm, or disconfirm, hypotheses about quantum spacetimes. He and his collaborators used string models for mathematical developments of how quantum gravity might modify the optical properties of the quantum vacuum. Mavromatos is credited as the originator of the idea that time in non-critical string theory might result from violation of conformal symmetries found in string theory. He mathematically demonstrated that such hypothetical violations of conformal symmetries might be linked to spacetime defects involving theories of quantum gravity. He used string-theoretical uncertainty relations to show that hypothetical Lorentz-invariance violation (LIV) associated with violation of conformal symmetries might imply that LIV could be detected using photons emitted from gamma ray bursts or active galactic nuclei. The LIV would be indicated by delays involving variations of photon velocity depending upon the energy of the photons. Photon propagation in the quantum vacuum might have properties analogous to ocean wave propagation in rough seas. Mavromatos and his coworkers have placed limits on LIV and suggested tests of the quantum universe using astrophysical data. The 1998 paper Tests of quantum gravity from observations of γ-ray bursts by Giovanni Amelino-Camelia, John Ellis, Nikolaos Mavromatos, Dimitri Nanopoulos, and Subir Sarkar has more than 1700 citations. With John Ellis, Dimitri Nanopoulos, and other collaborators, Mavromatos suggested possible tests of the constancy of the velocity of light and possible optical properties of the vacuum related to D-branes. Mavromatos coauthored an experimental paper with the MAGIC Collaboration and a highly cited experimental paper with the CPLEAR Collaboration. In the 2020s, he and collaborators, using LIGO, embarked on studies of possible modified dispersion relations of photons and gravitons.
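The size of the effect targeted by these proposals can be estimated from the linear time-delay formula of the 1998 paper, Δt ≈ ξ (E/E_QG)(D/c). The Python sketch below uses illustrative placeholder numbers (a 10 GeV photon, a quantum-gravity scale of order the Planck energy, and a source roughly a gigaparsec away):

E = 10.0           # photon energy, GeV
E_QG = 1.2e19      # assumed quantum-gravity scale, GeV (of order the Planck energy)
D_over_c = 1.0e17  # light-travel time to the source, s (roughly 1 Gpc)
delay = (E / E_QG) * D_over_c  # linear-LIV delay, taking xi = 1
print(round(delay, 2), "s")    # ~0.08 s, comparable to gamma-ray burst variability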
Awards and honours
For essays written with John Ellis and Dimitri Nanopoulos and based on quantum gravity research, Nikolaos Mavromatos shared the first prize for the 1999 and 2005 essay competitions of the Gravity Research Foundation. The 1999 and 2005 prizes were for essays on phenomenology of quantum gravity and string cosmology, respectively. Mavromatos was elected in April 2004 a Fellow of the Institute of Physics. In 2023 he was awarded the John William Strutt, Lord Rayleigh Medal and Prize of the Institute of Physics.
Selected publications
Book chapters
Journal articles
References
1961 births
Living people
People associated with CERN
Fellows of the Institute of Physics
Greek emigrants to the United Kingdom
20th-century Greek physicists
21st-century Greek physicists
National and Kapodistrian University of Athens alumni
Alumni of the University of Oxford
Academics of the University of Oxford
Academics of King's College London
Scientists from Athens
String theorists
Theoretical physicists | Nikolaos Mavromatos | [
"Physics"
] | 1,093 | [
"Theoretical physics",
"Theoretical physicists"
] |
77,814,098 | https://en.wikipedia.org/wiki/Homi%20Bhabha%20Medal%20and%20Prize | The Homi Bhabha Medal and Prize is awarded every two years, jointly by the International Union of Pure and Applied Physics (IUPAP) and the Tata Institute of Fundamental Research (TIFR). The award, established in 2010 in honor of Homi J. Bhabha, consists of a certificate, a medal, an award of 250,000 Indian rupees, and an invitation to visit and to give public lectures at the TIFR in Mumbai and the Cosmic Ray Laboratory in Ooty. The award ceremony take place at the biennial International Cosmic Ray Conference (ICRC). The recipient is "an active scientist who has made distinguished contributions in the field of high-energy cosmic-ray physics and astroparticle physics over an extended academic career." The inaugural award was made in 2011 to Sir Arnold Wolfendale.
There are several different awards named in honor of the physicist Homi J. Bhabha — for example, the Homi Bhabha Medal (in five different categories) awarded by the Nuclear Fuel Complex of the Department of Atomic Energy of the Government of India.
Recipients
See also
List of physics awards
References
Physics awards
Indian science and technology awards
Awards established in 2010
Biennial events
IUPAP | Homi Bhabha Medal and Prize | [
"Technology"
] | 250 | [
"Science and technology awards",
"Physics awards"
] |
77,816,788 | https://en.wikipedia.org/wiki/Potassium%20tetrachloroiodate%28III%29 | Potassium tetrachloroiodate(III) is a coordination compound with the chemical formula KICl4. Its monohydrate crystal structure belongs to the monoclinic system and has the space group P21/n, yellow crystals.
Synthesis
Potassium tetrachloroiodate(III) can be obtained by reacting iodine, potassium chlorate, and 6 mol/L hydrochloric acid solution containing 1.5 mol/L potassium chloride, condensing, filtering, and vacuum drying. The balanced reaction (reconstructed from the stated reagents and product) is:
I2 + KClO3 + KCl + 6 HCl → 2 KICl4 + 3 H2O
References
Potassium compounds
Iodine compounds
Chlorine(I) compounds | Potassium tetrachloroiodate(III) | [
"Chemistry"
] | 128 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
77,818,183 | https://en.wikipedia.org/wiki/Bezuclastinib | Bezuclastinib is an investigational new drug that is being evaluated for the treatment of solid tumors and systemic mastocytosis. It acts as an inhibitor of KIT (a specific type of receptor tyrosine kinase).
References
Receptor tyrosine kinase inhibitors
Pyrazoles
Pyrrolopyridines | Bezuclastinib | [
"Chemistry"
] | 68 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
67,678,079 | https://en.wikipedia.org/wiki/Landau%20kinetic%20equation | The Landau kinetic equation is a transport equation of weakly coupled charged particles performing Coulomb collisions in a plasma.
The equation was derived by Lev Landau in 1936 as an alternative to the Boltzmann equation in the case of Coulomb interaction. When used with the Vlasov equation, the equation yields the time evolution for collisional plasma, hence it is considered a staple kinetic model in the theory of collisional plasma.
Overview
Definition
Let $f(v, t)$ be a one-particle distribution function. The equation reads:
$$\frac{\partial f}{\partial t} = B \, \frac{\partial}{\partial v_i} \int_{\mathbb{R}^3} \frac{u^2 \delta_{ij} - u_i u_j}{u^3} \left( f(w) \, \frac{\partial f(v)}{\partial v_j} - f(v) \, \frac{\partial f(w)}{\partial w_j} \right) \mathrm{d}w, \qquad u = v - w.$$
The right-hand side of the equation is known as the Landau collision integral (in parallel to the Boltzmann collision integral).
The constant $B$ is obtained by integrating over the intermolecular potential $U(r)$:
$$B = \frac{1}{8\pi} \int_0^{\infty} k^3 \, \hat{U}^2(k) \, \mathrm{d}k,$$
where $\hat{U}$ denotes the Fourier transform of $U$.
For many intermolecular potentials (most notably power laws, where $U(r) \propto r^{-n}$), the expression for $B$ diverges. Landau's solution to this problem is to introduce cutoffs at small and large angles.
Uses
The equation is used primarily in statistical mechanics and particle physics to model plasma. As such, it has been used to model and study plasma in thermonuclear reactors. It has also seen use in the modeling of active matter.
The equation and its properties have been studied in depth by Alexander Bobylev.
Derivations
The first derivation was given in Landau's original paper. The rough idea for the derivation:
Assuming a spatially homogeneous gas of point particles with unit mass described by $f(v, t)$, one may define a corrected potential for Coulomb interactions, $U_D(r) = U(r) \, e^{-r/r_D}$, where $U(r) = \frac{e^2}{r}$ is the Coulomb potential and $r_D$ is the Debye radius. The potential is then plugged into the Boltzmann collision integral (the collision term of the Boltzmann equation) and solved for the main asymptotic term in the limit $r_D \to \infty$.
In 1946, the first formal derivation of the equation from the BBGKY hierarchy was published by Nikolay Bogolyubov.
The Fokker-Planck-Landau equation
In 1957, the equation was derived independently by Marshall Rosenbluth. Solving the Fokker–Planck equation under an inverse-square force, one may obtain:
$$\frac{\partial f}{\partial t} = \Gamma \left[ - \frac{\partial}{\partial v_i} \left( f \, \frac{\partial h}{\partial v_i} \right) + \frac{1}{2} \, \frac{\partial^2}{\partial v_i \, \partial v_j} \left( f \, \frac{\partial^2 g}{\partial v_i \, \partial v_j} \right) \right],$$
where $\Gamma$ is a constant proportional to the Coulomb logarithm, repeated indices $i, j = 1, 2, 3$ are summed over, and $h$ and $g$ are the Rosenbluth potentials:
$$h(v) = \int_{\mathbb{R}^3} \frac{f(w)}{|v - w|} \, \mathrm{d}w, \qquad g(v) = \int_{\mathbb{R}^3} |v - w| \, f(w) \, \mathrm{d}w.$$
The Fokker-Planck representation of the equation is primarily used for its convenience in numerical calculations.
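As an illustration, the Rosenbluth potentials can be evaluated by direct summation on a velocity grid. The Python sketch below is a toy O(N²) evaluation (the grid, the Maxwellian test distribution, and the handling of the singular diagonal are placeholder choices):

import numpy as np

# velocity grid and a Maxwellian distribution sampled on it
ax = np.linspace(-4.0, 4.0, 10)
V = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1).reshape(-1, 3)
dV = (ax[1] - ax[0]) ** 3
f = np.exp(-0.5 * (V ** 2).sum(axis=1)) / (2.0 * np.pi) ** 1.5

# pairwise |v - w|; the singular diagonal is excluded from h
r = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
inv_r = np.where(r > 0.0, 1.0 / np.where(r > 0.0, r, 1.0), 0.0)

h = (inv_r * f[None, :]).sum(axis=1) * dV  # h(v) = integral of f(w)/|v - w| dw
g = (r * f[None, :]).sum(axis=1) * dV      # g(v) = integral of |v - w| f(w) dw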
The relativistic Landau kinetic equation
A relativistic version of the equation was published in 1956 by Gersh Budker and Spartak Belyaev.
Considering relativistic particles with momentum $p$ and energy $E_p = \sqrt{1 + |p|^2}$ (in units with $m = c = 1$), the equation reads:
$$\frac{\partial f}{\partial t} = \frac{\partial}{\partial p_i} \int_{\mathbb{R}^3} \Phi^{ij}(p, q) \left[ f(q) \, \frac{\partial f(p)}{\partial p_j} - f(p) \, \frac{\partial f(q)}{\partial q_j} \right] \mathrm{d}q,$$
where the relativistic kernel $\Phi^{ij}(p, q)$ generalizes the classical Landau kernel and reduces to $(u^2 \delta_{ij} - u_i u_j)/u^3$, with $u = p - q$, in the non-relativistic limit.
A relativistic correction to the equation is relevant since particles in hot plasma often reach relativistic speeds.
See also
Boltzmann equation
Vlasov equation
References
Eponymous equations of physics
Plasma physics equations
Lev Landau | Landau kinetic equation | [
"Physics"
] | 552 | [
"Eponymous equations of physics",
"Equations of physics",
"Plasma physics equations"
] |
67,679,430 | https://en.wikipedia.org/wiki/Angola%E2%80%93Benguela%20Front | The Angola - Benguela front (ABF) is a permanent frontal feature situated between 15° and 17°S off the coast of Angola and Namibia, west Africa. It separates the saline, warm and nutrient-poor sea water of the Angola Current from the cold and nutrient-rich sea water associated with the Benguela Current.
In comparison to other major oceanic fronts created by the western boundary currents, the ABF is confined to a relatively narrow band of latitudes and is characterized by strong horizontal gradients in sea surface temperature and salinity. The ABF has a variable morphology, geographic location, and thermal characteristics. It plays an important role for the southern African continent due to its close proximity to the coast, having a significant impact on the local marine ecosystem and regional climate. Variability in position and intensity of the ABF has been suggested to affect local biology and thus fish stocks, as well as rainfall variability.
History
The ABF was first named and described by Janke (1920) based on ship log data. However, consistent research on the front itself has only been conducted since the 1960s. It was Hart and Currie (1960) who first documented the existence of the Angola-Benguela front when the RRS William Scoresby sailed southwards surveying the Benguela Current off the west coast of Africa during the autumn and spring of 1950. They reported a sharp decrease in sea surface temperature from 27 to 20.5 °C within the span of one hour. In the early 1970s research increased steadily in this region with subsequent cruises revealing ocean circulation features of the area like the Angola Dome, and began to document the seasonal cycle of the ABF position.
During the past 20 years, the cooperation between Angola, Namibia and other countries in Europe and Africa has been greatly improved through different projects and collaborations, such as Enhancing Prediction of Tropical Atlantic Climate and its Impacts (PREFACE; November 2013 – April 2018) and the Benguela Current Commission (BCC). The objective of these joint research projects has been to investigate and monitor the productivity and oceanographic processes and interactions within the region surrounding the ABF, aiming at improving the management of the fisheries and water resources.
Physical properties
Horizontal and vertical structure
The physical properties of the ABF have been studied by historic hydrographic data, satellite-derived sea surface temperature observations, in situ measurements and by model-based studies. All of the findings are in general agreement that the front is oriented normal to the coast and stretches offshore in a west to north-westerly direction between 15 and 17°S. The front has an average width of about 200 km, but it can be much narrower at certain times, with steeper temperature gradients. The average distance the front penetrates seawards from the coast is 250 km, but traces can be found up to 1000 km offshore. The region of the frontal zone was previously defined by a characteristic temperature gradient of between 1 °C per 28 km and 1 °C per 90 km. A more recent study calculated a meridional sea surface temperature gradient of 1 °C per 34 km (or 3 °C per 100 km) across the ABF in austral summer, whereas Colberg and Reason (2006) estimated ~4 °C per 100 km in the middle of the ABF. The sharpest temperature gradients are found within 250 km of the coast. Multiple sharp fronts can also occur, especially when the Angola Current is strongest in austral summer.
Driving forces controlling the development of the front
Several hypotheses have been proposed for the most significant processes and driving forces controlling the development of the ABF. Many past studies suggest that the thermal characteristics of the front are influenced by a combination of factors, including coastal orientation, bathymetry, movements of the South Atlantic Anticyclone, the interaction between the south-flowing warm water of the Angola Current and the north-flowing cold water of the Benguela Current, and the associated surface wind stress. However, Meeuwis and Lutjeharms (1990) concluded that the position of the front seems to be almost exclusively due to the opposing flows of the Angola Current and the Benguela system. An alternative hypothesis, proposed by Shannon and Nelson (1996), holds that wind stress is the most important mechanism for the maintenance of the front. Kostianoy and Lutjeharms (1999) found that short-term changes in the ABF are correlated with variations of the pressure gradient driven by the South Atlantic Anticyclone.
In order to better understand the sensitivity of the position and intensity of the ABF to atmospheric forcing, Colberg and Reason (2006) were the first to attempt to model the front. They showed that the frontal position may be determined by the confluence of the northward and southward opposing flows, similar to what has been proposed by Meeuwis and Lutjeharms (1990). However, this confluence zone is primarily affected by the overlying atmospheric circulation. The strong anticyclonic wind stress curl of the region determines the motion of the South Equatorial Counter Current which causes the southward flow of the Angola Current. At the same time, the alongshore wind stress further to the south causes coastal upwelling resulting in the northward flow of the Benguela current. In the same study, Colberg and Reason found that the intensity of the ABF is tied to the strength of the meridional wind field which determines the coastal upwelling. However, even though the ABF is influenced by the intensity and location of the trade winds, the effect is not linear.
Seasonal cycle
The ABF is characterized by a typical seasonal cycle with meridional frontal movements and changes in the cross-frontal thermal gradient. Previous studies found that the front is most distinct and widest, with steeper meridional sea surface temperature (SST) gradients, in austral summer (summer in the Southern Hemisphere), when it reaches its southernmost position, whereas in austral winter it is less intense and reaches its northernmost position.
The core of the ABF, considered to be the region of steepest temperature gradients within the frontal zone, remains very steady throughout the year and always lies between 15 and 17°S (mean location 16.4°S). Mean temperatures at the core of the frontal zone are 20.7 °C in austral summer and 18.0 °C in austral winter. The front exists between 15.5 and 17°S in the austral summer with more intense temperature gradients (~1 °C per 34 km), while in the austral winter it lies between 15.5 and 17°S with weaker temperature gradients (1 °C per 40 km). The northern and southern boundaries of the frontal zone appear to fluctuate out of phase: as the northern boundary moves southwards in the austral winter, the southern boundary is displaced northwards. The converse is true for austral summer.
Benguela Niño
Apart from seasonal and mesoscale features, interannual fluctuations of the ABF are also significant and cause great temporal and spatial variability in the frontal zone. Minor warm and cold interannual anomalies have been observed throughout the record and appear to develop regularly in the ABF. However, a particular phenomenon and the most significant interannual signal that can be encountered in the frontal region is the Benguela Niño event. The inverse of a Benguela Niño is called Benguela Niña.
Like the well-known El Niño phenomenon in the Pacific, these events are characterised by an intense and unusual warming of the surface layer at the coast of Namibia, with positive SST anomalies reaching up to 4 °C. However, Benguela Niño events are less intense and less frequent than Pacific El Niños. They are observed at intervals of 7 to 11 years and are associated with a southward intrusion of warm and saline Angolan water into the northern Benguela. Benguela Niños tend to reach their maximum in late austral summer, mainly during March–April. There have been major, well-documented Benguela Niño events in 1934, 1950, 1964, 1974, 1984, 1995, 1999 and 2010.
During a Benguela Niño, the Angola-Benguela front is abnormally displaced to a southern position, causing a reduced upwelling intensity at the coast and the advection of warm, highly saline water as far as 25°S. Two main forcing mechanisms responsible for this interannual variability of the Angola-Benguela frontal zone are considered but are still under debate. These are the local atmospheric forcing and the connection with the equatorial variability. On the one hand, some studies have shown that temperature and upwelling anomalies are caused by local wind changes related to the magnitude and location of South Atlantic Anticyclone. On the other hand, past studies indicated that, rather than being triggered by variation in local wind-stress, the Benguela Niño is associated with large-scale remote changes in the wind patterns. More specifically, remote forcing is caused by a sudden relaxation of the trade winds in the western or central equatorial Atlantic. This generates equatorial Kelvin waves which propagate eastward along the Atlantic equator until the African coast where one part of their energy is reflected back to the west as equatorial Rossby waves. Another part of their energy is transmitted poleward along the west coast of Africa as coastal trapped waves influencing the temperature variability.
See also
Front (oceanography)
Benguela Current
Angola Current
References
Geography of the Southern Ocean
Africa | Angola–Benguela Front | [
"Physics"
] | 1,920 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
67,684,016 | https://en.wikipedia.org/wiki/Concerted%20metalation%20deprotonation | Concerted metalation-deprotonation (CMD) is a mechanistic pathway through which transition-metal catalyzed C–H activation reactions can take place. In a CMD pathway, the C–H bond of the substrate is cleaved and the new C–metal bond forms through a single transition state. This process does not go through a metal hydride species bound to the cleaved hydrogen atom; instead, a carboxylate or carbonate base deprotonates the substrate. The first proposal of a concerted metalation deprotonation pathway was by S. Winstein and T. G. Traylor in 1955 for the acetolysis of diphenylmercury. CMD has been found to be the lowest-energy transition state in a number of computational studies, has been supported experimentally by NMR, and has been invoked in many mechanistic studies.
While there are a number of different possible mechanisms for C–H activation, a CMD pathway is common for high-valent, late transition metals like Pd(II), Rh(III), Ir(III), and Ru(II). The C–H bonds that have been found to undergo C–H activation through CMD include those that are aryl, alkyl, and alkenyl. Investigations into CMD paved the way for the development of many new C–H functionalization reactions, especially in the areas of direct arylation and alkylation by palladium and ruthenium.
Mechanism
CMD begins with a high-valent, late transition metal like Pd(II) that may or may not be bound to a carboxylate anion. In the initial stages, there is usually a coordination of the C–H bond with the metal to form a metal–hydrocarbon sigma complex. The computed transition state involves concerted partial formation of a carbon–metal bond and partial protonation of the carboxylate. At the same time, any anionic metal–carboxylate bond begins to break, as does the carbon–hydrogen bond that is being activated. Compared to other possible processes such as oxidative addition of the C–H bond to the metal, CMD is lower in energy in many cases. A transition state in which the carboxylate is bound to the metal can be referred to as either CMD or AMLA, which stands for "ambiphilic metal–ligand assistance," but the latter emphasizes that the carboxylate acts as a ligand during the transition state.
History
In 1955, S. Winstein and T. G. Traylor published a study of the mechanism of acetolysis of organomercury compounds. They proposed a series of possible mechanisms for the process, which they ruled out based on their kinetic data. A concerted metalation deprotonation was considered, and they were unable to rule it out with the data they collected.
The metalation of organic C–H bonds was extended from mercury to palladium in 1968 by J. M. Davidson and C. Triggs who identified that palladium acetate reacts with benzene in perchloric acid and acetic acid to give biphenyl, palladium(0), and 2 equivalents of acetic acid through an organopalladium intermediate. Early mechanistic studies found that palladium acetate was the best palladium precatalyst due to the presence of the acetate ligand. Mechanistic investigation has been ongoing since these initial discoveries, and infrared spectroscopy on the picosecond–millisecond time scale was used in 2021 to observe the states involved in proton transfer from acetic acid to a metalated ligand, which is the microscopic reverse of a concerted metalation deprotonation process.
Examples
Reaction systems that are less efficient or entirely inactive in the absence of carboxylic acids and carboxylate bases are likely to proceed through a concerted metalation–deprotonation pathway. An example of such a reaction involving an sp³ C–H bond, reported in 2007 by Keith Fagnou and coworkers, is an intramolecular cyclization that uses a palladium catalyst.
A notable example of a ruthenium-catalyzed reaction in which directed metalation occurs through CMD was reported by Igor Larrosa and coworkers in 2018. The ruthenium catalyst is functional-group tolerant and enables the late-stage synthesis of pharmaceutically relevant biaryls.
Importance of carboxylate
Many C–H activation reactions, particularly those involving late transition metals, require carboxylate or carbonate bases. The need for this reaction component often suggests the occurrence of a CMD pathway. However, in order to be classified as CMD, the transition state does not need to involve the carboxylate as a ligand on the metal. Common sources of carboxylate include pivalate, acetate, and benzoate.
References
Organometallic chemistry
Organic chemistry | Concerted metalation deprotonation | [
"Chemistry"
] | 1,030 | [
"Organometallic chemistry",
"nan"
] |
67,684,224 | https://en.wikipedia.org/wiki/Kathy%20Halvorsen | Kathleen E. Halvorsen (born 1961) is an American environmental scientist whose research interests include biofuels, indigenous stewardship, public participation in land use decision-making, and climate change mitigation. She is Associate Vice President for Research Development, University Professor, and Chair of Natural Resource Policy at Michigan Technological University, where she holds a joint appointment in the Department of Social Sciences and the College of Forest Resources and Environmental Science.
Education and career
Halvorsen studied the political economy of natural resources at the University of California, Berkeley, graduating in 1989. After earning a master's degree in environmental science at the State University of New York College of Environmental Science and Forestry in 1992, she completed a Ph.D. in forest resource management in 1996 at the University of Washington.
She joined Michigan Tech in 1995 as an instructor, became a regular-rank faculty member in 1996, and was named University Professor in 2019. She served as the executive director of the International Association for Society and Natural Resources for 2018–2020, and became associate vice president at Michigan Tech in 2019.
Recognition
Michigan Tech gave Halvorsen their annual Research Award in 2014.
References
External links
Home page
1961 births
Living people
American environmental scientists
University of California, Berkeley alumni
State University of New York College of Environmental Science and Forestry alumni
University of Washington alumni
Michigan Technological University faculty | Kathy Halvorsen | [
"Environmental_science"
] | 268 | [
"American environmental scientists",
"Environmental scientists"
] |
61,219,457 | https://en.wikipedia.org/wiki/C18H27N | The molecular formula C18H27N (molar mass: 257.42 g/mol, exact mass: 257.2143 u) may refer to:
3-Methyl-PCP
Butyltolylquinuclidine (BTQ) | C18H27N | [
"Chemistry"
] | 68 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
61,219,476 | https://en.wikipedia.org/wiki/C25H42N7O17P3S | The molecular formula C25H42N7O17P3S (molar mass: 837.62 g/mol) may refer to:
Butyryl-CoA
Isobutyryl-CoA
Molecular formulas | C25H42N7O17P3S | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,224,003 | https://en.wikipedia.org/wiki/C13H22O | The molecular formula C13H22O (molar mass: 194.31 g/mol) may refer to:
Solanone
Geranylacetone | C13H22O | [
"Chemistry"
] | 48 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
61,225,569 | https://en.wikipedia.org/wiki/June%20Sutor | Dorothy June Sutor (6 June 1929 – 27 May 1990) was a New Zealand-born crystallographer who spent most of her research career in England. She was one of the first scientists to establish that hydrogen atoms bonded to carbon can participate in hydrogen bonds. She later worked in the laboratory of Kathleen Lonsdale on the characterisation and prevention of urinary calculi.
Early life and education
Sutor was born in New Zealand, in the Auckland suburb of Parnell, on 6 June 1929, the daughter of Victor Edward Sutor, a coach builder, and Cecilia Maud Sutor (née Craner). She was educated at St Cuthbert's College, and went on to study chemistry at Auckland University College. She graduated Master of Science with first-class honours in 1952 and, supervised by Frederick Llewellyn, she graduated with her first PhD in 1954. She published her first single-author Acta Crystallographica paper, The unit cell and space group of ethyl nitrolic acid, whilst a student.
In 1954, Sutor went to the United Kingdom, and took up a travelling scholarship and Bathurst Studentship at Newnham College, Cambridge. There, she earned a PhD on the structures of purines and nucleosides in 1958. During her second doctorate, Sutor identified the structure of caffeine, and showed that it can readily recrystallise in its monohydrate form.
Research and career
Sutor moved to Australia in 1958, working as a research officer in Melbourne. In 1959, she returned to Britain to take up an Imperial Chemical Industries Fellowship at Birkbeck College, where she worked with J. D. Bernal, Rosalind Franklin, and Aaron Klug on the application of X-ray crystallography in molecular biology. She worked on hydrogen bonding and computational chemistry, writing programs for the EDSAC. Sutor used the concept of electronegativity, introduced by Linus Pauling in 1932, to explain hydrogen bonds. She investigated the van der Waals distances that are shortened during hydrogen bonding, and based on her findings proposed that a C–H group that is activated by partial ionization can take part in hydrogen bonding (so-called C–H···O bonds). She investigated the structure of theacrine, DNA and other purine compounds. In 1962, Sutor published the first crystallographic evidence for C–H···O bonding. Her work expanded from small-molecule crystal structures to alkaloids.
Her work was criticised by Jerry Donohue, who disputed her Van der Waals distances and claimed that she had data problems. At the time, Donohue's textbooks were in most laboratories, and he was a common reviewer for academic papers including crystal structures. Carl Schwalbe has speculated that this could have been due to academic jealousy, saying in 2019 that "acceptance of women in science, particularly the physical sciences, was by no means complete".
Sutor moved back to New Zealand, working briefly for the Department of Scientific and Industrial Research before taking leave to look after her father, who died in 1964. In 1966, Sutor was offered a job by Kathleen Lonsdale at University College London. She studied urinary calculi and searched for ways to prevent them. Sutor had good contacts with hospital staff, and even managed to secure Napoléon III's bladder stone. She was supported by a grant from the Nuffield Foundation. In 1979, Sutor became partially sighted, and more "interested in the theoretical aspects of stone growth".
Death and legacy
Sutor died of cancer in London on 27 May 1990. She bequeathed her estate of over £500,000 for the establishment of June Sutor Fellowships for research at Moorfields Eye Hospital into the prevention of blindness.
Sutor's predictions on the hydrogen bond were confirmed by Robin Taylor and Olga Kennard in the 1980s. Their work included 113 neutron diffraction patterns in the Cambridge Crystallographic Database, and found that Sutor's C–H⋯O bond distances were correct to within . Gautam Radhakrishna Desiraju dedicated a chapter of his book on hydrogen bonds to the work of Sutor, and Carl Schwalbe compared the structures cited by Sutor to modern redeterminations.
References
1929 births
1990 deaths
People from Auckland
People educated at St Cuthbert's College, Auckland
University of Auckland alumni
Alumni of Newnham College, Cambridge
New Zealand women chemists
Crystallographers
People associated with Department of Scientific and Industrial Research (New Zealand)
People associated with Birkbeck, University of London
People associated with University College London
New Zealand expatriates in England
Deaths from cancer in England | June Sutor | [
"Chemistry",
"Materials_science"
] | 958 | [
"Crystallographers",
"Crystallography"
] |
76,422,924 | https://en.wikipedia.org/wiki/Reformulated%20Blendstock%20for%20Oxygenate%20Blending | Reformulated Blendstock for Oxygenate Blending (RBOB) is a gasoline futures contract traded on the New York Mercantile Exchange (NYMEX). It is the benchmark futures contract for wholesale gasoline in the United States.
History
In the earliest days of the petroleum industry, gasoline was discarded as a byproduct: Edwin Drake, who drilled the first commercial oil well in the United States, set it aside in his quest to refine crude oil into kerosene.
Composition
RBOB gasoline is a blend of hydrocarbons suitable for use in spark-ignition engines. It is formulated to be blended with oxygenates such as ethanol (or, historically, methyl tertiary butyl ether (MTBE)) to improve octane rating and reduce air pollution.
Refining
RBOB is refined from crude oil, and roughly half of each barrel of crude oil is typically refined into gasoline; as a result, RBOB closely tracks the price of WTI crude.
See also
Commodity market
Gasoline
Fuel taxes in the United States
New York Mercantile Exchange
References
Chemical substances
Benchmark crude oils
Petroleum in the United States | Reformulated Blendstock for Oxygenate Blending | [
"Physics",
"Chemistry"
] | 194 | [
"Materials",
"Chemical substances",
"nan",
"Matter"
] |
76,423,933 | https://en.wikipedia.org/wiki/Janzen%E2%80%93Rayleigh%20expansion | In fluid dynamics, the Janzen–Rayleigh expansion is a regular perturbation expansion, using the relevant Mach number as the small parameter, for velocity fields that exhibit slight compressibility effects. The expansion was first studied by O. Janzen in 1913 and Lord Rayleigh in 1916.
Steady potential flow
Consider a steady potential flow that is characterized by the velocity potential $\varphi(\mathbf{x})$. Then $\varphi$ satisfies

$$c^2\nabla^2\varphi = \frac{1}{2}\,\nabla\varphi\cdot\nabla(\nabla\varphi\cdot\nabla\varphi),$$

where $c$, the sound speed, is expressed as a function of the velocity magnitude $v = |\nabla\varphi|$. For a polytropic gas, we can write

$$c^2 = c_0^2 - \frac{\gamma-1}{2}v^2 = (\gamma-1)\left(h_0 - \frac{v^2}{2}\right),$$

where $\gamma$ is the specific heat ratio, $c_0$ is the stagnation sound speed (i.e., the sound speed in a gas at rest) and $h_0 = c_0^2/(\gamma-1)$ is the stagnation enthalpy. Let $U$ be the characteristic velocity scale and $c_0$ the characteristic value of the sound speed; then, with velocities measured in units of $U$, the function $c^2/c_0^2$ is of the form

$$\frac{c^2}{c_0^2} = 1 - \frac{\gamma-1}{2}M^2 v^2,$$

where $M = U/c_0$ is the relevant Mach number.

For small Mach numbers, we can introduce the series

$$\varphi = \varphi_0 + M^2\varphi_1 + M^4\varphi_2 + \cdots.$$

Substituting this into the governing equation (written in nondimensional variables) and collecting terms of different orders of $M^2$ leads to a set of equations. These are

$$\nabla^2\varphi_0 = 0,$$
$$\nabla^2\varphi_1 = \frac{1}{2}\,\nabla\varphi_0\cdot\nabla(\nabla\varphi_0\cdot\nabla\varphi_0),$$

and so on. Note that $\varphi_1$ is independent of $\gamma$, which first appears in the problem for $\varphi_2$.
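To see explicitly why $\gamma$ drops out at first order (a short derivation sketch, using the nondimensional form of the governing equation assumed above), write the equation as

$$\left(1 - \frac{\gamma-1}{2}M^2\,|\nabla\varphi|^2\right)\nabla^2\varphi = \frac{M^2}{2}\,\nabla\varphi\cdot\nabla(\nabla\varphi\cdot\nabla\varphi).$$

Collecting the $O(M^2)$ terms after substituting the series gives

$$\nabla^2\varphi_1 = \frac{1}{2}\,\nabla\varphi_0\cdot\nabla(\nabla\varphi_0\cdot\nabla\varphi_0) + \frac{\gamma-1}{2}\,|\nabla\varphi_0|^2\,\nabla^2\varphi_0,$$

and the $\gamma$-dependent term vanishes because $\nabla^2\varphi_0 = 0$. The specific heat ratio therefore enters first at $O(M^4)$, in the problem for $\varphi_2$.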
Imai–Lamla method
A simple method for finding the particular integral for $\varphi_1$ in two dimensions was devised by Isao Imai and Ernst Lamla. In two dimensions, the problem can be handled using complex analysis by introducing the complex potential $w = \varphi + i\psi$, formally regarded as a function of $z = x + iy$ and its conjugate $\bar{z}$; here $\psi$ is the stream function, defined such that

$$\rho u = \rho_0\,\frac{\partial\psi}{\partial y}, \qquad \rho v = -\rho_0\,\frac{\partial\psi}{\partial x},$$

where $\rho_0$ is some reference value for the density. The perturbation series of $w$ is given by

$$w = w_0(z) + M^2 w_1(z,\bar{z}) + M^4 w_2(z,\bar{z}) + \cdots,$$

where $w_0(z)$ is an analytic function since $\varphi_0$ and $\psi_0$, being solutions of the Laplace equation, are harmonic functions. The particular integral for the first-order problem leads to the Imai–Lamla formula, which expresses $w_1$ as the sum of a particular integral built from $w_0$ and its derivatives plus a homogeneous solution $w_h(z)$ (an analytic function) that can be used to satisfy the necessary boundary conditions. A corresponding series for the complex velocity follows by differentiating the series for $w$ term by term.
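As an illustration of the leading-order problem (a sketch under stated assumptions, not drawn from the article: it uses the classical incompressible cylinder solution and the first-order forcing from the hierarchy above), the following verifies with sympy that $\varphi_0$ for flow past a circular cylinder is harmonic and evaluates the $O(M^2)$ forcing that drives $\varphi_1$:

```python
# Illustrative check: phi0 = U (r + a^2/r) cos(theta) for flow past a
# cylinder of radius a, written in Cartesian form (r^2 = x^2 + y^2).
import sympy as sp

x, y, U, a = sp.symbols('x y U a', positive=True)
phi0 = U * x * (1 + a**2 / (x**2 + y**2))

# Leading order: Laplace's equation.
lap = sp.simplify(sp.diff(phi0, x, 2) + sp.diff(phi0, y, 2))
print(lap)  # 0 -> phi0 is harmonic

# First-order forcing  f = (1/2) grad(phi0) . grad(|grad(phi0)|^2).
grad = sp.Matrix([sp.diff(phi0, x), sp.diff(phi0, y)])
speed2 = sp.simplify(grad.dot(grad))
f = sp.Rational(1, 2) * grad.dot(
        sp.Matrix([sp.diff(speed2, x), sp.diff(speed2, y)]))
print(sp.simplify(f.subs({x: 2*a, y: 0})))  # sample value of the forcing
```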
References
Fluid dynamics | Janzen–Rayleigh expansion | [
"Chemistry",
"Engineering"
] | 383 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
76,427,228 | https://en.wikipedia.org/wiki/Pemivibart | Pemivibart, sold under the brand name Pemgarda, is a monoclonal antibody medication authorized for the pre-exposure prophylaxis (prevention) of COVID-19. Pemivibart was developed by Invivyd.
The US Food and Drug Administration (FDA) issued an emergency use authorization for pemivibart in March 2024.
Medical uses
In the US, pemivibart is authorized for the pre-exposure prophylaxis (prevention) of COVID-19 in people twelve years of age and older weighing at least . It is authorized for individuals who are not currently infected with SARS-CoV-2 and who have not had a known recent exposure to an individual infected with SARS-CoV-2; and who have moderate-to-severe immune compromise due to a medical condition or due to taking immunosuppressive medications or treatments and are unlikely to mount an adequate immune response to COVID-19 vaccination.
In August 2024, the US Food and Drug Administration (FDA) revised the emergency use authorization for pemivibart to limit its use to when the combined national frequency of variants with substantially reduced susceptibility to pemivibart is less than or equal to 90%.
Society and culture
Legal status
The US Food and Drug Administration (FDA) issued an emergency use authorization for pemivibart in March 2024.
Names
Pemivibart is the international nonproprietary name.
References
Antiviral drugs
Experimental monoclonal antibodies
COVID-19 drug development | Pemivibart | [
"Chemistry",
"Biology"
] | 322 | [
"Antiviral drugs",
"COVID-19 drug development",
"Biocides",
"Drug discovery"
] |
59,757,549 | https://en.wikipedia.org/wiki/Luisa%20Cifarelli | Luisa Cifarelli FInstP (born 11 June 1952) is a Professor of Experimental Particle Physics at the University of Bologna. She is the Director of the La Rivista del Nuovo Cimento.
Early life and education
Cifarelli was born in Rome in 1952, the daughter of Michele Cifarelli, an Italian politician and magistrate. She studied physics at the University of Bologna and graduated in 1975. She worked as a researcher at the Istituto Nazionale di Fisica Nucleare and CERN. She edited the collection of scientific studies for the publication QCD at 200 TeV. In 1988 she was made an associate professor at the Università degli Studi di Napoli Federico II.
Career
Cifarelli was appointed full Professor at the University of Pisa in 1991. She moved to the University of Salerno in 1993. She works at CERN, the Laboratori Nazionali del Gran Sasso, DESY and the Istituto Nazionale di Fisica Nucleare. She has been involved in the design and construction of the ALICE experiment, which studies proton-proton and nucleus-nucleus collisions at extreme energies. She was made Head of the ALICE Data Analysis and Simulation Group in 2000. She served as Deputy Chairperson of the time-of-flight project of ALICE. She used the ALICE experiment to study quark-gluon interactions. She coordinates the Extreme Energy Events experiment, which uses muon detectors in high schools around Italy to study cosmic-ray showers. She serves on the DarkSide project, a 20-tonne two-phase LAr TPC for direct dark matter detection at the Laboratori Nazionali del Gran Sasso.
In 2008, Cifarelli became the first woman to be made President of the Italian Physical Society. That year she was also elected a Fellow of the Institute of Physics. In 2011 she was appointed President of the European Physical Society, the first woman to hold that office. She was made President of the Enrico Fermi Center for Study and Research, and has spoken extensively about the life of Enrico Fermi. Cifarelli has acted as editor of the European Physical Journal. She serves on the editorial board of Elsevier's Nuclear Instruments and Methods in Physics Research.
References
1952 births
Academic staff of the University of Bologna
Living people
21st-century Italian physicists
Particle physicists
People associated with CERN
Italian women physicists
20th-century Italian physicists
Presidents of the European Physical Society
Fellows of the American Physical Society | Luisa Cifarelli | [
"Physics"
] | 512 | [
"Particle physicists",
"Particle physics"
] |
59,760,124 | https://en.wikipedia.org/wiki/MERCON | Mercon represents a series of technical standards for automatic transmission fluid, developed and trademarked by Ford Motor Company. This designation serves as a mark of quality that Ford has established for fluids used in automatic transmissions. The Mercon name, which has evolved into a brand, is licensed by Ford to various manufacturers. These companies are authorized to produce the fluid according to Ford's specifications and market it under their own brand names.
The specifications outlined under the Mercon label cover various aspects such as viscosity, friction characteristics, and thermal stability, which are essential for the transmission fluid to perform under a wide range of operating conditions. This careful regulation ensures that all licensed Mercon fluids provide consistent quality and performance, giving consumers confidence in their use of aftermarket products.
Overview
The original Mercon (M2C185-A) Transmission Fluid was introduced in January 1987. Over the years, the original Mercon was supplanted by Mercon "V", Mercon "SP", Mercon LV, and Mercon ULV, which is the latest automatic transmission fluid. Ford has upgraded the Mercon specifications over the years; the newer fluids are not always backward compatible with previous fluids. Newer 6 and 10-speed transmissions as well as Plug-In Hybrid (PHEV), and Electric Vehicle (EV) transmission technologies require specialized fluids to operate properly. There remains a market for older fluids that claim to meet the earlier fluid specifications. See the details below for the backward compatibility of each fluid.
Originally the name MERCON was associated exclusively with automatic transmission fluids; later, Ford released MERCON gear oils and other lubricants under the MERCON brand. Not all Mercon fluids are licensed for reselling under another brand name. All licensed Mercon fluids must have a license number on the container; if no license number is found, the fluid may not be Ford-approved and cannot be guaranteed to meet Ford specifications. Ford, like many automobile manufacturers, uses transmissions sourced from other suppliers or transmission manufacturers around the world; these transmissions are not manufactured by Ford. Many of these automatic transmissions use unique fluids that might not be shown on this page.
History
Before Mercon: 1942–1987
1942 Motor Oil
In 1942, the Mercury 8 and Lincoln offered cars with an optional "Liquamatic Drive" using a fluid coupling, conventional clutch, and semi-automatic three-speed transmission. The transmission had an overrunning clutch on the countershaft. The flywheel's fluid coupling used SAE 10 motor oil for lubrication, while the gearbox used traditional gear oil. This transmission was produced for only a few months before the U.S. entered World War II, and production was not resumed after the war.
1949 GM Hydra-Matic Fluid
In April 1949, Lincoln began offering the General Motors Hydra-Matic 4-speed automatic transmission in their 1950 model year vehicles. This offering continued through the 1954 model year. Lincoln service information calls for "Lincoln Automatic Transmission Fluid". This fluid met the GM Hydra-Matic Drive fluid specifications.
This Fluid was First Used in the Following Transmissions:
1949 Hydra-Matic with an L-9 serial number prefix
1950 Hydra-Matic with an L-50 serial number prefix
1951 Hydra-Matic with an L-51 serial number prefix
1952 Hydra-Matic with an L-52 serial number prefix
1953 Hydra-Matic with an L-53 serial number prefix
1954 Hydra-Matic with an L-54 serial number prefix
1950 GM Type "A" Fluid
From 1949 to 1958, the automatic transmissions of every vehicle manufacturer (Oldsmobile, Cadillac, Buick, Chevrolet, Pontiac, GMC, Ford, Mercury, Lincoln, Chrysler, Dodge, DeSoto, Packard, and Studebaker) used GM Type "A" transmission fluid.
In 1950, 11 years after GM released the Hydra-Matic 4-speed automatic transmission and its special Hydra-Matic Automatic Transmission Fluid, Ford released its first fully automatic transmission: the 1951 Fordomatic 3-speed. This new fully automatic transmission used the GM Type "A" automatic transmission fluid specification. Ford, like hundreds of other resellers, became a licensed reseller of the GM Type "A" fluid with an Armour Qualification number, and the Type "A" fluid was marketed under the Ford brand name.
This Fluid was First Used in the Following Transmissions:
1951 Fordomatic (Borg-Warner FX) 3-Speed automatic transmission
1954 Cruise O'Matic (Borg-Warner MX) 3-Speed automatic transmission
1955 Lincoln TurboDrive 3-Speed automatic transmission
1957 Ford Transmatic Drive 6-Speed Automatic Transmission for Medium-Duty and Heavy-Duty Trucks
1958 Cruise-O-Matic 3-Speed automatic transmission
1958 Edsel Mile-O-Matic 2-Speed automatic transmission
1958 Mercury Multi-Drive
1958 Lincoln TurboDrive 3-Speed automatic transmission
1958 Ford Type "A" Fluid
In 1959, Ford released their own Type-A automatic transmission fluid specification (M2C33-A) and stopped using GM fluid specifications for their in-house transmissions. The Ford M2C33-A fluid had GM Type "A" Suffix "A" characteristics. Transmission fluid service life was fairly short, and frequent transmission oil changes were required.
1959 Type "B" Fluid
In 1959, Ford released an updated automatic transmission fluid specification Type-B (M2C33-B). The Ford M2C33-B fluid had GM Type "A" Suffix "A" characteristics. As with the previous specification, transmission fluid service life was fairly short, and frequent transmission oil changes were required.
1960 Type "D" Fluid
In 1960, Ford introduced the Type-D (M2C33-D) specification for service fluid use in 1960 model-year vehicles. This specification change provided better oxidation control and anti-wear performance; higher static capacity capabilities were also included. Oxidation control of the fluid was measured by a new Merc-O-Matic oxidation test.
This fluid was first used in the following transmissions:
1964 C-4 3-Speed automatic transmission
1966 C-6 3-Speed automatic transmission
1968 FMX 3-Speed automatic transmission
1967 Type "F" Fluid
In 1967, Ford introduced a new fluid specification, the Type-F fluid (M2C33-F). This fluid provided a high static coefficient of friction which resulted in harsh shifting.
The Type-F fluid specification was intended to produce a "lifetime" fluid that would never need to be changed; it was the first of many Ford "lifetime" fluids. The 1974 Ford Car Shop Manual reads: "The automatic transmission is filled at the factory with 'lifetime' fluid. If it is necessary to add or replace fluid, use only fluids that meet Ford Specification M2C33F."
1972 Type "G" Fluid
In 1972, Ford of Europe introduced a new fluid specification, the Type-G fluid (M2C33-G). This fluid was used through 1981.
This fluid was first used in the following transmissions:
Borg-Warner M35 transmissions and variants
1974 Type "CJ" Fluid
In September 1974, Ford introduced a new fluid specification, the Type-CJ fluid (M2C138-CJ). This fluid provided smoother shifting and less gear noise through higher dynamic friction characteristics. The Ford Type-CJ fluid specification also met the GM Dexron-II(D) and earlier fluid specifications, and Ford was a licensed GM Dexron-II(D) vendor.
The Ford Type-CJ fluid was compatible with GM Dexron II(D) specifications. This compatibility may suggest to some that all Ford, Mercon, and Dexron fluids are compatible; this is not correct. Always use the factory-recommended fluid for your transmission. (See the Aftermarket Automatic Transmission Fluids section below)
This fluid was first used in the following transmissions:
1974 C-3 3-Speed automatic transmission in the Pinto
1978 ATX 3-Speed automatic transmission
1980 ATX 3-Speed automatic transmission with a Centrifugally Linked Clutch (CLC) in the torque converter
1980 Jatco 3-Speed automatic transmission
1980 ATX 3-Speed automatic transmission with a Fluid Linked Clutch (FLC) in the torque converter
1980 AOD 4-Speed overdrive automatic transmission with torque converter bypass (Ford's first overdrive 4-speed)
1983 ZF-4HP33 4-Speed overdrive automatic transmission (Dexron-II(D))
1981 Type "H" Fluid
As a result of the 1973 OPEC oil embargo and the ensuing fuel shortages, the U.S. government created the Corporate Average Fuel Economy (CAFE) regulations in 1975, to be fully implemented by the 1978 model year. The automotive industry responded by adopting three previously little-used transmission technologies:
A 4th gear (overdrive)
A Torque Converter Clutch (TCC)
Front Wheel Drive (FWD).
The introduction of the TCC led to customer complaints of a shudder while driving. All vehicle manufacturers changed their ATF specifications and TCC controls to try to alleviate the problem: GM released the Dexron-II(D) fluid specification in 1978, Chrysler released the ATF+2 fluid specification in 1980, and Ford released the Type-H fluid (M2C166-H) specification in June 1981.
The Type-H fluid specification provided improved friction characteristics in lock-up torque converters (reducing shudder during application and release). With this new specification, Ford introduced the aluminum beaker oxidation test (ABOT) to replace the older Merc-O-Matic oxidation test.
The Ford Type-H fluid was compatible with GM Dexron II(D) specifications. This compatibility may suggest to some that all Ford, Mercon, and Dexron fluids are compatible; this is not correct. Always use the factory-recommended fluid for your transmission. (See the Aftermarket Automatic Transmission Fluids section below)
This fluid was first used in the following transmissions:
1982 C-5 (C4 with Torque converter Clutch (TCC)) 3-Speed automatic transmission
1985 A4LD (C3 with overdrive) 4-Speed automatic transmission
1986 AXOD 4-Speed automatic transaxle
1986 Electronic A4LD 4-Speed automatic transmission
MERCON Fluids: 1987–Today
1987 MERCON
In January 1987, Ford released the original Mercon fluid specification (M2C185-A). Mercon became a trademarked fluid with the qualification and licensing of fluids to ensure quality in the marketplace. This original Mercon Specification was backward compatible with the 1981 Ford Type-H fluid and the 1958 GM Type "A" Suffix "A" fluid.
NOTICE: This version of Mercon was compatible with GM's Dexron-II(D), and later formulations were compatible with Dexron-III(H); however, later versions of Mercon (Mercon V, Mercon SP, Mercon LV, Mercon ULV) are not compatible with GM's Dexron-III(H) or any newer version of Dexron (Dexron-VI, Dexron HP, Dexron ULV).
This fluid was first used in the following transmissions:
1989 E4OD (C-6 with overdrive) Ford's first electronic control 4-speed automatic transmission
1990 4EAT-G Mazda 4-Speed automatic transmission
1990 F-4EAT 4-speed automatic transmission
1990 AXOD-E 4-speed automatic transaxle
1992 AOD-E (Electronic AOD) 4-speed automatic transmission
1993 AOD-EW/4R70W 4-speed automatic transmission
1994 AX4S 4-speed automatic transaxle
1994 CD4E Batavia 4-Speed automatic transmission
1995 AX4N/4F50N 4-Speed automatic transmission
1995 4R44E 4-speed automatic transmission
1995 4R55E 4-speed automatic transmission
1997 5R44 5-speed automatic transmission (Ford's first 5-speed automatic transmission)
1997 5R55 5-speed automatic transmission
1996 MERCON V
In 1996, Ford released the Mercon "V" fluid specification (M2C202-B). Ford Technical Service Bulletin (TSB) 06-14-04 indicates that Mercon "V" is to replace the original Mercon fluid.
This fluid was first used in the following transmissions:
1997 4R70W 4-speed automatic transmission
1998 4R100 4-speed automatic transmission
2000 4F27E 4-speed automatic transaxle
The Mercon "V" specification was revised in 2002 (M2C919-E). This revised fluid was first used in the following transmissions:
2003 4R75E 4-speed automatic transmission
2003 4R75W 4-speed automatic transmission
2003 5R110W 5-speed automatic transmission
2001 MERCON SP
In August 2001, Ford released the Mercon "SP" fluid specification (M2C919-D).
Ford SSM 21114 (November 26, 2009) indicates that Mercon "SP" is to be replaced with Mercon LV in TorqShift transmissions from the 2003 through 2008 model years. This SSM does not apply to the ZF 6HP26 transmission.
This fluid was first used in the following transmissions:
2001 5R110W TorqShift 5-Speed automatic transmission
2005-2008 ZF 6HP26 6-Speed automatic transmission in Lincoln Navigator
2005 MERCON LV
In December 2005, Ford released the Mercon "LV" fluid specification (M2C938-A).
This fluid was first used in the following transmissions:
2006 6R60 ZF 6-Speed automatic transmission
2006 FNR5 Mazda 5-Speed automatic transmission
This specification was revised in 2007 for use in the following transmissions:
2007 6F50 6-speed automatic transaxle
2007 6R80 6-speed automatic transmission
2009 6F35 6-speed automatic transaxle
This specification was revised again in 2010 (M2C938-A2) and was optimized for anti-squawk performance of clutches. This revised fluid was first used in the following transmissions:
2011 6R140 6-speed automatic transmission
2013 HF-35 eCVT hybrid transaxle
2014 MERCON ULV
The fluid specification for Mercon ULV (Ultra-Low Viscosity) was introduced on January 2, 2014. Mercon ULV is composed of a Group III+ base oil and the additives needed for the proper operation of the Ford 10R80 (2017 and later) and the GM 10L90 10-speed rear-wheel-drive automatic transmissions.
This transmission and the transmission fluid specification were co-developed by Ford and GM. The current specification that defines the fluid is FORD WSS-M2C949-A. This fluid is also marketed as Dexron ULV.
NOTICE: The quart containers of Mercon ULV must be shaken to stir up the additives before pouring. This fluid is not backward compatible with any previous fluids.
This fluid was first used in the following transmissions:
2017 10R80 10-speed automatic transmission
2017 6F15 6-speed automatic transaxle
2017 6R100 6-speed automatic transmission
Ford "Lifetime" ATF
As noted above, the 1967 Ford Type-F specification was the first of many Ford "lifetime" fluids, intended never to need changing under normal service. Many other transmission manufacturers have since followed with their own "lifetime" automatic transmission fluids.
Example Maintenance Schedule
Lifetime automatic transmission fluids made from higher-quality base oils and additive packages are more chemically stable, less reactive, and do not oxidize as easily as fluids made from lower-quality base oils and additive packages. Higher-quality transmission fluids can therefore last a long time in normal driving conditions (typically 100,000 miles (160,000 km) or more).
The definition of 'Lifetime Fluid" differs from transmission manufacturer to transmission manufacturer. Always consult the vehicle maintenance guide for the proper service interval for the fluid in your transmission and your driving conditions.
2018 Ford F-150 Example: According to the Scheduled Maintenance Guide, a 2018 Ford F-150 with "lifetime" fluid could have three different fluid service intervals depending on how the vehicle is driven (a minimal code sketch of this decision table follows the three driving profiles below):
1. Normal Driving
Normal commuting with highway driving
No or moderate load or towing
Flat to moderately hilly roads
No extended idling
Under these driving conditions, the automatic transmission fluid needs to be serviced after every 150,000 miles (240,000 km).
2. Severe Driving
Moderate to heavy load or towing
Mountainous or off-road conditions
Extended idling
Extended hot or cold operation
Under these driving conditions, the automatic transmission fluid needs to be serviced after every 30,000 miles (48,000 km).
3. Extreme Driving
Maximum load or towing
Extreme hot or cold operation
Under these driving conditions, the automatic transmission fluid needs to be serviced also after every 30,000 miles.
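The schedule above is effectively a decision table keyed on driving severity. A minimal sketch (a hypothetical helper with the 2018 F-150 example values hard-coded; real intervals must always come from the vehicle's maintenance guide):

```python
# Hypothetical helper encoding the 2018 Ford F-150 example above.
# Intervals are illustrative only; defer to the vehicle maintenance guide.
SERVICE_INTERVAL_MILES = {
    "normal": 150_000,   # highway commuting, light loads, no extended idling
    "severe": 30_000,    # heavy loads/towing, mountainous or off-road, idling
    "extreme": 30_000,   # maximum load/towing, extreme hot or cold operation
}

def atf_service_due(profile: str, miles_since_service: int) -> bool:
    """Return True once the ATF service interval has been reached."""
    return miles_since_service >= SERVICE_INTERVAL_MILES[profile]

print(atf_service_due("severe", 31_000))  # True
print(atf_service_due("normal", 31_000))  # False
```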
See also
Dexron, ATM brand by GM
Whale oil, an important constituent of ATF until 1974
References
External links
Transmission fluids (including "Mercon" products) at Motorcraft
A Look at Changes in Automatic Transmission Fluid
The History of Automatic Transmission Fluid - ATF History Part 1
69 Years of Ford Automatic Transmission Fluid - ATF History Part 3
Changing Gears: The Development of the Automotive Transmission
Ford Service Information Subscription Access
Ford transmissions
Hydraulic fluids
Automotive chemicals
Automobile transmissions
Petroleum based lubricants
Oils | MERCON | [
"Physics",
"Chemistry"
] | 3,598 | [
"Oils",
"Carbohydrates",
"Physical systems",
"Hydraulics",
"Hydraulic fluids"
] |
59,763,739 | https://en.wikipedia.org/wiki/List%20of%20novae%20in%202019 | The following is a list of all novae that are known to have occurred in 2019. A nova is an energetic astronomical event caused by a white dwarf accreting matter from a star it is orbiting (typically a red giant, whose outer layers are more weakly bound than those of smaller, denser stars). Alternatively, novae can be caused by a pair of stars merging with each other, although such events are vastly less common than novae caused by white dwarfs.
In 2019, at least sixteen Milky Way novae were discovered, eight of which were dwarf nova eruptions: one from the variable system V386 Serpentis, one from the known nova-like system 2E 1516.6-6827, and four from previously unidentified white dwarf binaries. One of these binaries, TCP J18200437-1033071, may possibly have been involved in another outburst in 1951. The recurrent nova V3890 Sgr, which had been seen to erupt in 1962 and 1990, erupted again in 2019.
List of novae in 2019
In the Milky Way
In the Andromeda Galaxy
Novae are also frequently spotted in the Andromeda Galaxy, where they are even slightly more commonly found than in the Milky Way, as there is less intervening dust to prevent their detection. Furthermore, Andromeda is circumpolar for observers north of latitude +48–50°, roughly the latitude of the Canadian–American border, allowing observers north of that line to search for transients all year.
In 2019, 11 novae have been seen in the Andromeda galaxy.
In other galaxies
Any galaxy within 20 million light-years of the Sun could theoretically have nova events bright enough to be detected from Earth, although in practice most are only detected in galaxies within 10-15 million light-years of the Milky Way, such as the Triangulum Galaxy, Messier 81, Messier 82, Messier 83, and Messier 94.
In 2019, two novae were observed in Messier 81, and another in the Triangulum Galaxy. A luminous red nova was observed in the Whirlpool Galaxy (Messier 51a), probably caused by a merger of two stars.
See also
List of novae in 2018
Nova
Dwarf nova
Luminous red nova
Guest star (astronomy)
Supernova
Notes
References
External links
List of all galactic novae
2019 in outer space
Novae
Novae in 2019
Novae | List of novae in 2019 | [
"Astronomy"
] | 492 | [
"Astronomy-related lists",
"Novae",
"Astronomical events",
"Lists of astronomical events",
"Lists of astronomical objects",
"Astronomical objects"
] |
59,766,171 | https://en.wikipedia.org/wiki/AlphaFold | AlphaFold is an artificial intelligence (AI) program developed by DeepMind, a subsidiary of Alphabet, which performs predictions of protein structure. The program is designed as a deep learning system.
AlphaFold software has had three major versions. A team of researchers that used AlphaFold 1 (2018) placed first in the overall rankings of the 13th Critical Assessment of Structure Prediction (CASP) in December 2018. The program was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition organisers, where no existing template structures were available from proteins with a partially similar sequence. A team that used AlphaFold 2 (2020) repeated the placement in the CASP14 competition in November 2020. The team achieved a level of accuracy much higher than any other group. It scored above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures the degree to which a computationally predicted structure is similar to the experimentally determined structure, with 100 being a complete match, within the distance cutoff used for calculating GDT.
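The GDT family of scores can be sketched numerically. A common variant, GDT_TS, averages the fraction of residues whose C-alpha atoms fall within 1, 2, 4 and 8 Å of their experimental positions; the simplified sketch below assumes pre-superposed coordinates, whereas CASP's official GDT additionally searches over many superpositions:

```python
import numpy as np

def gdt_ts(pred: np.ndarray, ref: np.ndarray) -> float:
    """Simplified GDT_TS for two (n_residues, 3) C-alpha coordinate
    arrays that are already optimally superposed."""
    dist = np.linalg.norm(pred - ref, axis=1)
    fractions = [(dist <= cutoff).mean() for cutoff in (1.0, 2.0, 4.0, 8.0)]
    return 100.0 * float(np.mean(fractions))

# A perfect prediction scores 100.
xyz = np.random.rand(100, 3) * 10
print(gdt_ts(xyz, xyz))  # 100.0
```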
AlphaFold 2's results at CASP14 were described as "astounding" and "transformational". Some researchers noted that the accuracy is not high enough for a third of its predictions, and that it does not reveal the mechanism or rules of protein folding for the protein folding problem to be considered solved. Nevertheless, there has been widespread respect for the technical achievement. On 15 July 2021 the AlphaFold 2 paper was published in Nature as an advance access publication alongside open source software and a searchable database of species proteomes. The paper has since been cited more than 27 thousand times.
AlphaFold 3 was announced on 8 May 2024. It can predict the structure of complexes created by proteins with DNA, RNA, various ligands, and ions. The new prediction method shows a minimum 50% improvement in accuracy for protein interactions with other molecules compared to existing methods. Moreover, for certain key categories of interactions, the prediction accuracy has effectively doubled.
Demis Hassabis and John Jumper from the team that developed AlphaFold won the Nobel Prize in Chemistry in 2024 for their work on “protein structure prediction”. The two had won the Breakthrough Prize in Life Sciences and the Albert Lasker Award for Basic Medical Research earlier in 2023.
Background
Proteins consist of chains of amino acids which spontaneously fold to form the three dimensional (3-D) structures of the proteins. The 3-D structure is crucial to understanding the biological function of the protein.
Protein structures can be determined experimentally through techniques such as X-ray crystallography, cryo-electron microscopy and nuclear magnetic resonance, which are all expensive and time-consuming. Such efforts, using the experimental methods, have identified the structures of about 170,000 proteins over the last 60 years, while there are over 200 million known proteins across all life forms.
Over the years, researchers have applied numerous computational methods to predict the 3D structures of proteins from their amino acid sequences, but the accuracy of such methods has not come close to that of experimental techniques. CASP, which was launched in 1994 to challenge the scientific community to produce their best protein structure predictions, found that GDT scores of only about 40 out of 100 could be achieved for the most difficult proteins by 2016. AlphaFold started competing in the 2018 CASP using an artificial intelligence (AI) deep learning technique.
Algorithm
DeepMind is known to have trained the program on over 170,000 proteins from the Protein Data Bank, a public repository of protein sequences and structures. The program uses a form of attention network, a deep learning technique that has the AI identify parts of a larger problem and then piece them together to obtain the overall solution. The overall training was conducted on between 100 and 200 GPUs.
AlphaFold 1 (2018)
AlphaFold 1 (2018) was built on work developed by various teams in the 2010s, work that looked at the large databanks of related DNA sequences now available from many different organisms (most without known 3D structures) to try to find changes at different residues that appeared to be correlated, even though the residues were not consecutive in the main chain. Such correlations suggest that the residues may be close to each other physically, even though not close in the sequence, allowing a contact map to be estimated. Building on work published shortly before 2018, AlphaFold 1 extended this to estimate a probability distribution for how close the residues were likely to be, turning the contact map into a likely distance map. It also used more advanced learning methods than previously available to develop the inference.
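The coevolution signal described here can be illustrated with a toy calculation (a sketch of the general idea only, not of DeepMind's pipeline): the mutual information between two alignment columns rises when residue identities at the two positions vary together, which is taken as a hint of spatial contact:

```python
from collections import Counter
import math

def column_mi(col_a: str, col_b: str) -> float:
    """Mutual information (in nats) between two MSA columns, each given
    as a string with one residue per aligned sequence."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * math.log((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

# Toy alignment: positions 0 and 1 co-vary (A<->R, V<->K), hinting at contact.
msa = ["AR", "VK", "AR", "VK", "AR"]
col0 = "".join(s[0] for s in msa)
col1 = "".join(s[1] for s in msa)
print(column_mi(col0, col1))  # high MI relative to independent columns
```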
AlphaFold 2 (2020)
The 2020 version of the program (AlphaFold 2) is significantly different from the original version that won CASP13 in 2018, according to the team at DeepMind.
The software design used in AlphaFold 1 contained a number of modules, each trained separately, that were used to produce the guide potential that was then combined with the physics-based energy potential. AlphaFold 2 replaced this with a system of sub-networks coupled together into a single differentiable end-to-end model, based entirely on pattern recognition, which was trained in an integrated way as a single integrated structure. Local physics, in the form of energy refinement based on the AMBER model, is applied only as a final refinement step once the neural network prediction has converged, and only slightly adjusts the predicted structure.
A key part of the 2020 system are two modules, believed to be based on a transformer design, which are used to progressively refine a vector of information for each relationship (or "edge" in graph-theory terminology) between an amino acid residue of the protein and another amino acid residue (these relationships are represented by the array shown in green), and between each amino acid position and the different sequences in the input sequence alignment (these relationships are represented by the array shown in red). Internally these refinement transformations contain layers that have the effect of bringing relevant data together and filtering out irrelevant data (the "attention mechanism") for these relationships, in a context-dependent way, learnt from training data. These transformations are iterated, the updated information output by one step becoming the input of the next, with the sharpened residue/residue information feeding into the update of the residue/sequence information, and the improved residue/sequence information then feeding into the update of the residue/residue information. As the iteration progresses, according to one report, the "attention algorithm ... mimics the way a person might assemble a jigsaw puzzle: first connecting pieces in small clumps—in this case clusters of amino acids—and then searching for ways to join the clumps in a larger whole."
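The coupled refinement described above can be caricatured in a few lines of code (an illustrative sketch only; the array names, shapes and update rules here are assumptions for exposition, not DeepMind's implementation). A pair representation and a residue/sequence representation repeatedly update one another:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seq, n_res, d = 32, 64, 16

msa_repr = rng.normal(size=(n_seq, n_res, d))    # residue/sequence array
pair_repr = rng.normal(size=(n_res, n_res, d))   # residue/residue array

def attention_update(x, bias):
    # Stand-in for a learned attention layer: softmax-weighted mixing,
    # with pair features biasing which positions attend to which.
    scores = x @ x.swapaxes(-1, -2) / np.sqrt(d) + bias
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ x

for _ in range(8):  # iterated blocks: each pass sharpens both arrays
    msa_repr = attention_update(msa_repr, pair_repr.mean(-1))
    # Refined sequence statistics feed back into the pair array.
    outer = np.einsum('sid,sjd->ijd', msa_repr, msa_repr) / n_seq
    pair_repr = 0.5 * pair_repr + 0.5 * outer
```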
The output of these iterations then informs the final structure prediction module, which also uses transformers, and is itself then iterated. In an example presented by DeepMind, the structure prediction module achieved a correct topology for the target protein on its first iteration, scored as having a GDT_TS of 78, but with a large number (90%) of stereochemical violations – i.e. unphysical bond angles or lengths. With subsequent iterations the number of stereochemical violations fell. By the third iteration the GDT_TS of the prediction was approaching 90, and by the eighth iteration the number of stereochemical violations was approaching zero.
The training data was originally restricted to single peptide chains. However, the October 2021 update, named AlphaFold-Multimer, included protein complexes in its training data. DeepMind stated this update succeeded about 70% of the time at accurately predicting protein-protein interactions.
AlphaFold 3 (2024)
Announced on 8 May 2024, AlphaFold 3 was co-developed by Google DeepMind and Isomorphic Labs, both subsidiaries of Alphabet. AlphaFold 3 is not limited to single-chain proteins, as it can also predict the structures of protein complexes with DNA, RNA, post-translational modifications and selected ligands and ions.
AlphaFold 3 introduces the "Pairformer", a deep learning architecture inspired by the transformer, which is considered similar to, but simpler than, the Evoformer introduced with AlphaFold 2. The raw predictions from the Pairformer module are passed to a diffusion model, which starts with a cloud of atoms and uses these predictions to iteratively progress towards a 3D depiction of the molecular structure.
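The diffusion stage can be sketched as a denoising loop (an illustrative toy only; here the trained, trunk-conditioned network is replaced by a stand-in that nudges coordinates toward a fixed target):

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms = 200
target = rng.normal(size=(n_atoms, 3))  # stand-in for the predicted structure

def denoiser(x, t):
    # Stand-in for the learned denoising network.
    return x + 0.5 * t * (target - x)

x = rng.normal(scale=10.0, size=(n_atoms, 3))      # start from a cloud of atoms
for t in np.linspace(1.0, 0.0, num=50):
    x = denoiser(x, t)                             # iterative refinement
    x += rng.normal(scale=0.05 * t, size=x.shape)  # shrinking noise injection

print(np.abs(x - target).mean())  # small: the cloud has collapsed to a structure
```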
The AlphaFold server was created to provide free access to AlphaFold 3 for non-commercial research.
Competitions
CASP13
In December 2018, DeepMind's AlphaFold placed first in the overall rankings of the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP).
The program was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition organisers, where no existing template structures were available from proteins with a partially similar sequence. AlphaFold gave the best prediction for 25 out of 43 protein targets in this class, achieving a median score of 58.9 on CASP's global distance test (GDT), ahead of 52.5 and 52.4 by the two next best-placed teams, who were also using deep learning to estimate contact distances. Overall, across all targets, the program achieved a GDT score of 68.5.
In January 2020, implementations and illustrative code of AlphaFold 1 were released open-source on GitHub, but, as stated in the "Read Me" file on that website: "This code can't be used to predict structure of an arbitrary protein sequence. It can be used to predict structure only on the CASP13 dataset (links below). The feature generation code is tightly coupled to our internal infrastructure as well as external tools, hence we are unable to open-source it." In essence, therefore, the deposited code is not suitable for general use but only for the CASP13 proteins. The company had not announced plans to make their code publicly available as of 5 March 2021.
CASP14
In November 2020, DeepMind's new version, AlphaFold 2, won CASP14. Overall, AlphaFold 2 made the best prediction for 88 out of the 97 targets.
On the competition's preferred global distance test (GDT) measure of accuracy, the program achieved a median score of 92.4 (out of 100), meaning that half of its predictions scored better than 92.4 for having their atoms in more-or-less the right place, a level of accuracy reported to be comparable to experimental techniques like X-ray crystallography. In 2018 AlphaFold 1 had reached this level of accuracy in only two of all of its predictions. 88% of predictions in the 2020 competition had a GDT_TS score of more than 80. On the group of targets classed as the most difficult, AlphaFold 2 achieved a median score of 87.
Measured by the root-mean-square deviation (RMSD) of the placement of the alpha-carbon atoms of the protein backbone chain, which tends to be dominated by the performance of the worst-fitted outliers, 88% of AlphaFold 2's predictions had an RMS deviation of less than 4 Å for the set of overlapped C-alpha atoms. 76% of predictions achieved better than 3 Å, and 46% had a C-alpha atom RMS accuracy better than 2 Å, with a median RMS deviation of 2.1 Å across its predictions for a set of overlapped CA atoms. AlphaFold 2 also achieved an accuracy in modelling surface side chains described as "really really extraordinary".
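For reference, C-alpha RMS deviations like those quoted here are computed after optimal superposition of the two structures; a compact sketch using the standard Kabsch algorithm:

```python
import numpy as np

def ca_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD between two (n, 3) C-alpha coordinate sets after optimal
    superposition via the Kabsch algorithm."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))  # guard against improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(((P @ R - Q) ** 2).sum() / len(P)))

xyz = np.random.rand(50, 3) * 20
print(round(ca_rmsd(xyz, xyz), 6))  # 0.0 for identical structures
```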
To additionally verify AlphaFold 2, the conference organisers approached four leading experimental groups for structures they were finding particularly challenging and had been unable to determine. In all four cases, the three-dimensional models produced by AlphaFold 2 were sufficiently accurate to determine the structures of these proteins by molecular replacement. These included target T1100 (Af1503), a small membrane protein studied by experimentalists for ten years.
Of the three structures that AlphaFold 2 had the least success in predicting, two had been obtained by protein NMR methods, which define protein structure directly in aqueous solution, whereas AlphaFold was mostly trained on protein structures in crystals. The third exists in nature as a multidomain complex consisting of 52 identical copies of the same domain, a situation AlphaFold was not programmed to consider. For all targets with a single domain, excluding only one very large protein and the two structures determined by NMR, AlphaFold 2 achieved a GDT_TS score of over 80.
CASP15
In 2022, DeepMind did not enter CASP15, but most of the entrants used AlphaFold or tools incorporating AlphaFold.
Reception
AlphaFold 2 scoring more than 90 in CASP's global distance test (GDT) is considered a significant achievement in computational biology and great progress towards a decades-old grand challenge of biology. Nobel Prize winner and structural biologist Venki Ramakrishnan called the result "a stunning advance on the protein folding problem", adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research."
Propelled by press releases from CASP and DeepMind, AlphaFold 2's success received wide media attention. As well as news pieces in the specialist science press, such as Nature, Science, MIT Technology Review, and New Scientist, the story was widely covered by major national newspapers. A frequent theme was that the ability to predict protein structures accurately based on the constituent amino acid sequence is expected to have a wide variety of benefits in the life sciences, including accelerating advanced drug discovery and enabling a better understanding of diseases. Some have noted that even a perfect answer to the protein structure prediction problem would still leave questions about the protein folding problem: understanding in detail how the folding process actually occurs in nature (and how proteins can sometimes misfold).
In 2023, Demis Hassabis and John Jumper won the Breakthrough Prize in Life Sciences as well as the Albert Lasker Award for Basic Medical Research for their management of the AlphaFold project. Hassabis and Jumper proceeded to win the Nobel Prize in Chemistry in 2024 for their work on “protein structure prediction” with David Baker of the University of Washington.
Source code
Open access to the source code of several AlphaFold versions (excluding AlphaFold 3) has been provided by DeepMind following requests from the scientific community. The full source code of AlphaFold 3 is expected to be made openly accessible by the end of 2024.
Database of protein models generated by AlphaFold
The AlphaFold Protein Structure Database was launched on July 22, 2021, as a joint effort between AlphaFold and EMBL-EBI. At launch the database contained AlphaFold-predicted models of protein structures for nearly the full UniProt proteome of humans and 20 model organisms, amounting to over 365,000 proteins. The database does not include proteins with fewer than 16 or more than 2,700 amino acid residues, although for humans these are available in a whole-proteome batch file. AlphaFold planned to add more sequences to the collection, the initial goal (as of the beginning of 2022) being to cover most of the UniRef90 set of more than 100 million proteins. As of May 15, 2022, 992,316 predictions were available.
In July 2021, UniProtKB and InterPro were updated to show AlphaFold predictions when available.
On July 28, 2022, the team uploaded to the database the structures of around 200 million proteins from 1 million species, covering nearly every known protein on the planet.
Limitations
AlphaFold has various limitations:
AlphaFold DB provides monomeric models of proteins, rather than their biologically relevant complexes.
Many protein regions are predicted with a low confidence score; these include intrinsically disordered protein regions.
AlphaFold 2 has been validated for predicting the structural effects of mutations with only limited success.
The model relies to some degree upon co-evolutionary information across similar proteins, and thus may not perform well on synthetic proteins or proteins with very low homology to anything in the database.
The ability of the model to produce multiple native conformations of proteins is limited.
AlphaFold 3 can predict structures of protein complexes with only a very limited set of selected cofactors and co- and post-translational modifications. Between 50% and 70% of the structures of the human proteome are incomplete without covalently-attached glycans. AlphaFill, a derived database, adds cofactors to AlphaFold models where appropriate.
In the algorithm, the residues are moved freely, without any restraints, so the integrity of the chain is not maintained during modelling. As a result, AlphaFold may produce topologically wrong results, such as structures with an arbitrary number of knots.
Applications
AlphaFold has been used to predict structures of proteins of SARS-CoV-2, the causative agent of COVID-19. The structures of these proteins were pending experimental determination in early 2020. Results were examined by scientists at the Francis Crick Institute in the United Kingdom before release into the larger research community. The team also confirmed accurate prediction against the experimentally determined structure of the SARS-CoV-2 spike protein shared in the Protein Data Bank, an international open-access database, before releasing the computationally determined structures of the under-studied protein molecules. The team acknowledged that although these protein structures might not be the subject of ongoing therapeutic research efforts, they would add to the community's understanding of the SARS-CoV-2 virus. Specifically, AlphaFold 2's prediction of the structure of the ORF3a protein was very similar to the structure determined by researchers at the University of California, Berkeley using cryo-electron microscopy. This protein is believed to assist the virus in breaking out of the host cell once it replicates, and is also believed to play a role in triggering the inflammatory response to the infection.
Published works
Andrew W. Senior et al. (December 2019), "Protein structure prediction using multiple deep neural networks in the 13th Critical Assessment of Protein Structure Prediction (CASP13)", Proteins: Structure, Function, Bioinformatics 87(12) 1141–1148
Andrew W. Senior et al. (15 January 2020), "Improved protein structure prediction using potentials from deep learning", Nature 577 706–710
John Jumper et al. (December 2020), "High Accuracy Protein Structure Prediction Using Deep Learning", in Fourteenth Critical Assessment of Techniques for Protein Structure Prediction (Abstract Book), pp. 22–24
John Jumper et al. (December 2020), "AlphaFold 2". Presentation given at CASP 14.
See also
Folding@home
IBM Blue Gene
Foldit
Rosetta@home
Human Proteome Folding Project
AlphaZero
AlphaGo
AlphaGeometry
Predicted Aligned Error
References
Further reading
Carlos Outeiral, CASP14: what Google DeepMind's AlphaFold 2 really achieved, and what it means for protein folding, biology and bioinformatics, Oxford Protein Informatics Group, 3 December 2020
Mohammed AlQuraishi, AlphaFold2 @ CASP14: "It feels like one's child has left home." (blog), 8 December 2020
Mohammed AlQuraishi, The AlphaFold2 Method Paper: A Fount of Good Ideas (blog), 25 July 2021
External links
AlphaFold-3 web server
Open access to protein structure predictions for the human proteome and 20 other key organisms at European Bioinformatics Institute (AlphaFold Protein Structure Database)
CASP 14 website
AlphaFold: The making of a scientific breakthrough, DeepMind, via YouTube.
ColabFold, a version for homooligomeric prediction and complexes
Bioinformatics software
Applied machine learning
Protein folding
Deep learning software applications
Molecular modelling software
Google DeepMind | AlphaFold | [
"Chemistry",
"Biology"
] | 4,058 | [
"Molecular modelling software",
"Computational chemistry software",
"Bioinformatics software",
"Bioinformatics",
"Molecular modelling"
] |
59,767,090 | https://en.wikipedia.org/wiki/Boom%20Overture | The Boom Overture is a supersonic airliner under development by Boom Technology, designed to cruise at Mach 1.7 or . It will accommodate 64 to 80 passengers, depending on the configuration, and have a range of . Boom Technology aims to introduce the Overture in 2029. The company projects a market for up to 1,000 supersonic airliners, serving 500 viable routes, with fares comparable to business class. Featuring a delta wing design reminiscent of the Concorde, the Overture will utilize composite materials in its construction. A 2022 redesign specified four dry (non-afterburning) turbofan engines, each producing of thrust.
Market
The company says that five hundred daily routes would be viable: at Mach 1.7 over water, Newark and London would be 3 hours and 30 minutes apart; Newark and Frankfurt would be 4 hours apart. Given the aircraft's range, transpacific flights would require a refueling stop: San Francisco and Tokyo would be 6 hours apart. There could be a market for 1,000 supersonic airliners by 2035. Boom targets a $200 million price, not discounted and excluding options and interior, in 2016 dollars. The company claims that operational costs per premium available seat mile will be lower than for subsonic wide-body aircraft. The Boom factory will be sized to assemble up to 100 aircraft per year for a 1,000- to 2,000-aircraft potential market over 10 years.
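These route times follow from simple speed-distance arithmetic; the sketch below checks the order of magnitude, with the stratospheric temperature and the great-circle distance taken as assumed round numbers (real block times add climb, descent and subsonic overland segments):

```python
import math

gamma, R, T = 1.4, 287.05, 216.65        # dry air; ISA stratosphere temp (K)
a = math.sqrt(gamma * R * T)             # speed of sound aloft, ~295 m/s
v = 1.7 * a                              # Mach 1.7 cruise, ~502 m/s
newark_london_km = 5_570                 # approximate great-circle distance
print(f"{newark_london_km * 1000 / v / 3600:.1f} h airborne at Mach 1.7")
```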
Boom plans to target $5,000 fares for a New York-to-London round trip, while the same trip on Concorde cost $20,000 adjusted for inflation; New York–London was Concorde's only profitable route. A fuel burn per seat comparable to subsonic business class, among other factors, enables similar fares. For long-range routes like San Francisco–Tokyo and Los Angeles–Sydney, 30 lie-flat first-class seats could be offered alongside 15 business-class seats.
In March 2016, Richard Branson confirmed that Virgin Group held options for 10 aircraft, and Virgin Galactic's subsidiary The Spaceship Company will aid in manufacturing and testing the jet. However, in 2023, Virgin Group announced that its purchase options had expired. An unnamed European carrier also holds options for 15 aircraft; the two deals total 5 billion dollars. At the 2017 Paris Air Show, 51 commitments were added for a backlog of 76 with significant deposits. In December 2017, Japan Airlines was confirmed to have pre-ordered up to 20 jets among the commitments to 76 from five airlines. Boom CEO Blake Scholl thinks 2,000 supersonic jets will connect 500 cities and one-way tickets between London and New York will be priced around £2,000, comparable with existing subsonic business class.
On June 3, 2021, United Airlines announced it had signed an agreement to purchase 15 Overture aircraft with an additional 35 options, expecting to start passenger flights by 2029. On August 16, 2022, American Airlines announced an agreement to purchase 20 Overture aircraft with an additional 40 options.
Order summary
Development
By March 2016, the company had created concept drawings and wooden mockups of parts of the aircraft.
In October 2016, the design was stretched to seat up to 50 passengers (ten extra seats), its wingspan was marginally increased, and a third engine was added to enable ETOPS with up to a 180-minute diversion time. The plane could seat 55 passengers in a higher-density configuration. In June 2017, its introduction was scheduled for 2023. By July 2018, it had been delayed to 2025. At the time, it had undergone over 1,000 simulated wind tunnel tests.
Boom initially targeted a Mach 2.2 cruise speed to fit with transoceanic airline timetables and allow higher utilization, while keeping airport noise to Stage 4, similar to subsonic long-range aircraft. The aircraft configuration was intended to be locked in late 2019 to early 2020 for a launch, with engine selection, supply chain and production site decided. Development and certification of the airliner and its engine were estimated at $6 billion, requiring Series C investors. Enough money was raised in the B round of fundraising to hit key milestones, including flying the demonstrator (XB-1) to prove the technology, building up an order backlog, finding key suppliers for engines, aerostructures, and avionics, and laying out the certification process, with many special conditions but with precedents.
At the June 2019 Paris Air Show, Boom CEO Blake Scholl announced the introduction of the Overture was delayed from 2023 to the 2025–2027 timeframe, following a two-year test campaign with six aircraft. In September 2020, the company announced it has been contracted by the United States Air Force to develop the Overture for possible use as Air Force One.
On October 7, 2020, Boom publicly unveiled its XB-1 demonstrator, which it planned to fly for the first time in 2021 from Mojave Air and Space Port, California. It expected to begin wind tunnel tests for the Overture in 2021, and start construction of a manufacturing facility in 2022, with the capacity to produce 5 to 10 aircraft monthly. The first Overture would be unveiled in 2025, with the aim of achieving type certification by 2029. Flights should be available in 2030, as estimated by Blake Scholl.
Boom currently targets a slower Mach 1.7 cruise. In January 2022, Boom announced a grant of US$60 million from the US Air Force's AFWERX program to further develop the Overture supersonic airliner. In July 2022, Boom announced a partnership with Northrop Grumman to develop a 'special mission' variant for the U.S. Government and its allies. As of January 2022, the Overture's first flight was planned for 2026, with introduction into service expected in 2029.
On July 19, 2022, Boom unveiled a revised proposal for the production version of the Overture at the Farnborough Airshow. This version has four engines and a tailed delta wing.
On December 13, 2022, Boom announced that it would develop its own turbofan engine after "Big Three" engine manufacturers Rolls-Royce, Pratt & Whitney and General Electric, as well as CFM and Safran previously declined to develop a new engine due to high capital costs. Named Symphony, the engine will be developed under partnership with three entities: Kratos subsidiary Florida Turbine Technologies for engine design; StandardAero for maintenance; and General Electric subsidiary GE Additive for consulting on printing components.
Design
Boom's original design for Overture was a trijet, which resembled a 75% scale model of Concorde; the XB-1 "Baby Boom" test vehicle, which took its first flight in March 2024, was designed and built on this basis. However, in mid-2022, the company announced a radical redesign of Overture into a quadjet, closely resembling the unsuccessful Boeing B-2707-300 design from the 1970s.
A major change is that the new design features four large external engine pods rather than the two more compact engine 'box' nacelles, used on Concorde. This design has not been seen in high speed aircraft since the Convair B-58 Hustler bomber of the 1960s, due to high supersonic wave drag implications. It also now features a small horizontal stabilizer. Due to the low 1.5 wing aspect ratio, low-speed drag is high, and the aircraft requires high thrust at take-off. Boom also needs to address the nose-up attitude on landing. Airframe maintenance costs are expected to be similar to those of other carbon fiber airliners. The Overture should have lower fuel burn than Concorde by relying on dry (no afterburner) engines, composite structures, and improved technology since Concorde's development, although until Overture flies, Concorde remains the only Mach 2.0 supercruising aircraft in history and carried 30% more passengers than Boom is currently projecting.
In 2017, the FAA and the International Civil Aviation Organization (ICAO) were working on a sonic boom standard to allow supersonic flights overland. NASA planned to fly its Low-Boom Flight Demonstrator for the first time in 2022 to assess public acceptability of a 75 PNLdB boom, lower than Concorde's 105 PNLdB. The Overture is expected to be no louder at take-off than current airliners like the Boeing 777-300ER. Supersonic jets could be exempted from the FAA takeoff noise regulations, reducing their fuel consumption by 20–30% by using narrower engines optimized for acceleration rather than noise limits. In 2017, Honeywell and NASA tested predictive software and cockpit displays showing sonic booms en route, to minimize disruption overland.
Design changes announced in July 2022 included an increase in the number of engines to four, allowing smaller, less technically challenging engines and derated takeoffs to lower noise, as well as a redesigned gulled wing and fuselage to reduce drag.
Engines
The Boom Symphony is planned as a two-spool medium-bypass turbofan engine for use on Overture. The engine is intended to produce 35,000 pounds (160 kN) of thrust at takeoff, sustain Overture supercruise at Mach 1.7, and burn sustainable aviation fuel as an option.
Boom announced in December 2022 that development of the engine will be conducted in partnership with Kratos subsidiary Florida Turbine Technologies for engine design, GE Aerospace subsidiary GE Additive for additive manufacturing consulting, and StandardAero for maintenance. FTT/KTT is currently a maker of microturbines for drones and cruise missiles.
Boom aims for initial production of the engine to begin in 2024 at the Overture Superfactory at Greensboro, North Carolina.
Environment
Drag increases (and therefore fuel efficiency decreases) with cruising speed, and there is a particularly severe increase in drag around the sound barrier. Boom accepts that the fuel burn of the aircraft will be higher than that of its subsonic competition, but states that operators of the aircraft "must use sustainable aviation fuel (SAF) and/or purchase high-quality carbon removal credits" to reduce the environmental impact. However, sustainable aviation fuel is not yet widely available, with large-scale production relying on technology that does not yet exist, and carbon-offsetting schemes have been widely criticized as being unable to deliver net-zero.
Specifications
See also
References
External links
Supersonic transports
Proposed aircraft of the United States
Quadjets
Overture
Low-wing aircraft
Aircraft with retractable tricycle landing gear
Inverted gull-wing aircraft | Boom Overture | [
"Physics"
] | 2,128 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
64,705,468 | https://en.wikipedia.org/wiki/Applications%20of%20sensitivity%20analysis%20to%20environmental%20sciences | Sensitivity analysis studies the relationship between the output of a model and its input variables or assumptions. Historically, the need for a role of sensitivity analysis in modelling, and many applications of sensitivity analysis have originated from environmental science and ecology.
Early works
Hydrology and water quality are two modelling fields where sensitivity analysis was applied quite early. Relevant examples are the work of Bruce Beck,
George M. Hornberger, Keith Beven and Robert C. Spear.
Other applications
More recent applications encompass snow avalanche models, land depletion, marine biogeochemical modelling, irrigation and again hydrological modelling.
Methods
Several methods for sensitivity analysis have been developed in the context of environmental applications, such as the Data-Based Mechanistic modelling approach due to Peter Young and VARS (Variogram Analysis of Response Surfaces) due to S. Razavi and H. V. Gupta.
Prevalence across disciplines
In a 2019 work on the take-up of sensitivity analysis in different disciplines, among 19 different subject areas, environmental sciences were found to have the highest number of papers, a count that becomes even higher if papers in the earth sciences are included.
Journals
Reference journals for applications of sensitivity analysis in environmental science include Environmental Modelling & Software, Water Resources Research, Water Research, Ecological Indicators, and others.
Checklists
Sensitivity analysis is part of recent checklists or guidelines for environmental modelling.
Forthcoming special issues
A special issue on sensitivity analysis for environmental modelling is in preparation.
References
Mathematical modeling | Applications of sensitivity analysis to environmental sciences | [
"Mathematics"
] | 278 | [
"Applied mathematics",
"Mathematical modeling"
] |
70,582,100 | https://en.wikipedia.org/wiki/ANAIS-112 | ANAIS (Annual modulation with NaI Scintillators) is a dark matter direct detection experiment located at the Canfranc Underground Laboratory (LSC), in Spain, operated by a team of researchers of the CAPA at the University of Zaragoza.
ANAIS' goal is to confirm or refute in a model-independent way the DAMA/LIBRA experiment's positive result: an annual modulation in the low-energy detection rate having all the features expected for a signal induced by weakly interacting massive particles (WIMPs) in a standard galactic halo. This modulation is produced as a result of the Earth's revolution around the Sun. A modulation with all the characteristics of a dark matter (DM) signal has been observed for about 20 years by DAMA/LIBRA, but it is in strong tension with the negative results of other DM direct detection experiments. Compatibility among the different experimental results is disfavored in most conventional WIMP-DM scenarios, but it depends strongly on the DM particle and halo models considered. A comparison using the same target material, NaI(Tl), is more direct and almost model-independent.
Experimental set up and performance
ANAIS-112 experimental setup consists of 112.5 kg of NaI(Tl), distributed in 9 cylindrical modules, 12.5 kg each and built by Alpha Spectra Inc., arranged in a 3 × 3 configuration.
Among the most relevant features of the ANAIS-112 modules is their remarkable optical quality, which, combined with high quantum efficiency Hamamatsu photomultipliers (PMTs), results in a very high light collection, at the level of 15 photoelectrons (phe) per keV in all nine modules. The signals from the two PMTs coupled to each module are digitized at 2 GS/s in a 1.2 μs window with high resolution (14 bits). The trigger requires the coincidence of the two PMT trigger signals in a 200 ns window, while the individual PMT trigger is set at the single-phe level.
Another interesting feature is a Mylar window in the middle of one of the lateral faces of the detectors, which allows the nine modules to be calibrated simultaneously with external x-ray/gamma sources down to 10 keV in a radon-free environment. A careful low-energy calibration of the region of interest (ROI), from 1 to 6 keV, is carried out by combining information from external calibrations and background. External calibrations with a 109Cd source are performed every two weeks, and every 1.5 months energy depositions at 3.2 and 0.87 keV from 40K and 22Na internal contaminations in one ANAIS module are selected by exploiting the coincidence with a high-energy gamma in a second module.
The ANAIS-112 experiment is installed inside a shielding consisting of an inner layer of 10 cm of archaeological lead and an outer layer of 20 cm of low-activity lead. This lead shielding is encased in an anti-radon box, tightly closed and kept under overpressure with radon-free nitrogen gas. The external layer of the shielding (the neutron shielding) consists of 40 cm of a combination of water tanks and polyethylene bricks. An active veto made up of 16 plastic scintillators is placed between the anti-radon box and the neutron shielding, covering the top and sides of the set-up, allowing the residual muon flux on site to be effectively tagged throughout the ANAIS-112 data taking.
ANAIS-112 was commissioned during the spring of 2017 and started taking data in hall B of the LSC on 3 August 2017, under a rock overburden of 2,450 m.w.e. The live time of the experiment, useful for analysis, exceeds 95%, a high duty cycle; down time is mostly due to the periodic calibration of the modules.
A thorough understanding of the background has been achieved, except in the [1–2] keV energy region, where the background model underestimates the measured event rate. Crystal bulk contamination is the dominant background source, with 210Pb, 40K, 22Na and 3H contributions being the most relevant in the region of interest. Considering all nine ANAIS-112 modules together, the average background in the ROI is 3.6 cpd/kg/keV after three years of data taking, while the DAMA/LIBRA-phase2 background is below 0.80 cpd/kg/keV in the [1–2] keV energy interval, below 0.24 cpd/kg/keV in the [2–3] keV energy interval, and below 0.12 cpd/kg/keV in the [3–4] keV energy interval.
Annual modulation analysis and results
The development of filtering protocols based on pulse shape and the light sharing between the two PMTs has been crucial to fulfilling the ANAIS-112 goal, since the trigger rate in the ROI is dominated by non-bulk scintillation events. The determination of the corresponding efficiency is very important; it is calculated using 109Cd, 40K and 22Na events. It is very close to 100% down to 2 keV, and then decreases steeply to about 15% at 1 keV, where the analysis threshold is set.
A blind protocol for the annual modulation analysis of ANAIS-112 data has been applied: single-hit events in the ROI are kept blinded during the event selection. Up to now, three unblindings of the data have been carried out: at 1.5 years, at 2 years, and at 3 years, corresponding to exposures of 157.55, 220.69 and 313.95 kg×y, respectively. The ANAIS-112 annual modulation search is performed in the same regions explored by the DAMA/LIBRA collaboration, [1–6] keV and [2–6] keV, fixing the period to 1 year and the maximum of the modulation to 2 June.
To evaluate the statistical significance of a possible modulation in ANAIS-112 data, the event rate of the nine detectors is computed in 10-day bins, and the quantity $\chi^2 = \sum_i (n_i - \mu_i)^2 / \sigma_i^2$ is minimized, where $n_i$ is the number of events in the time bin $t_i$ (corrected by live time and detector efficiency), $\sigma_i$ is the corresponding Poisson uncertainty, accordingly corrected, and $\mu_i$ is the expected number of events in that time bin, which depends on the background model and can be written as:
$$\mu_i = \left[ R_0\,\phi_{\mathrm{bkg}}(t_i) + S_m \cos\left(\omega (t_i - t_0)\right) \right] M \,\Delta E \,\Delta t$$
Here, $R_0$ represents the non-modulated rate in the experiment, $\phi_{\mathrm{bkg}}$ is the probability distribution function (PDF) in time of any non-modulated component, $S_m$ is the modulation amplitude, $\omega$ is fixed to $2\pi/365\,\mathrm{d} = 0.01721\,\mathrm{rad\,d^{-1}}$, $t_0$ to $-62.2$ d (the time origin is taken on 3 August, so the cosine maximum falls on 2 June), $M$ is the total detector mass, $\Delta E$ is the energy interval width, and $\Delta t$ the time bin width. $R_0$ is a free parameter, while $S_m$ is either fixed to 0 (for the null hypothesis) or left unconstrained, positive or negative (for the modulation hypothesis).
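For illustration, a minimal version of this fit can be run on synthetic data. The sketch below assumes a flat background PDF (so the $\phi_{\mathrm{bkg}}$ factor is absorbed into $R_0$) and invented rate and exposure values, standing in for the exponential and Monte-Carlo-based background PDFs the collaboration actually uses:

```python
import numpy as np
from scipy.optimize import curve_fit

omega = 2 * np.pi / 365.0   # rad/day, period fixed to one year
t0 = -62.2                  # days; places the cosine maximum on 2 June

def rate(t, R0, Sm):
    return R0 + Sm * np.cos(omega * (t - t0))   # flat background + modulation

t = np.arange(5.0, 3 * 365, 10.0)               # 10-day bins over three years
exposure = 100.0                                # invented counts per rate unit
counts = np.random.default_rng(0).poisson(rate(t, 3.6, 0.0) * exposure)
y, sigma = counts / exposure, np.sqrt(counts) / exposure   # Poisson errors

popt, pcov = curve_fit(rate, t, y, p0=[3.5, 0.01], sigma=sigma,
                       absolute_sigma=True)
print(f"S_m = {popt[1]:+.4f} +/- {np.sqrt(pcov[1, 1]):.4f}")  # ~0: no modulation
```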
The null hypothesis is well supported by the three-year data in both energy regions, with the results for the two background models (a single exponential, or a PDF based on the Monte Carlo background model) being compatible. The standard deviation $\sigma(S_m)$ is slightly lower when the detectors are considered independently, as expected from the a priori sensitivity analysis. Therefore, this fit is chosen to quote the ANAIS-112 annual modulation final result and sensitivity for the three-year exposure. The best fits are incompatible with the DAMA/LIBRA result at 3.3σ and 2.6σ in the [1–6] and [2–6] keV energy regions, respectively, for a sensitivity of 2.5σ (2.7σ) at [1–6] keV ([2–6] keV). The ANAIS-112 results for 1.5, 2 and 3 years of data taking fully confirm the sensitivity projection.
ANAIS-112 results support the prospects of reaching a sensitivity above 3σ in 2022, within the scheduled 5 years of data taking.
Several consistency checks have been carried out (changing the number of detectors entering the fit, considering only the first two years or the last two years, or changing the time bin size), concluding that there is no hint of relevant systematic uncertainties in the result. A large set of Monte Carlo pseudo-experiments sampled from the background model guarantees that the fit is not biased. A frequency analysis has also been conducted, and the conclusion is that there is no statistically significant modulation in the frequency range searched in the ANAIS-112 data.
Future prospects
The ANAIS-112 sensitivity is limited mostly by the high background in the ROI, particularly in the region from 1 to 2 keV. In this context, the application of machine learning techniques based on Boosted Decision Trees (BDTs), currently under development, could improve the rejection of these non-bulk scintillation events. Preliminary results point to a relevant sensitivity improvement. Extending the data taking for a few more years could allow testing DAMA/LIBRA at the 5σ level. Operation at the Canfranc Underground Laboratory has been granted until the end of 2025.
One possible systematic effect affecting the comparison between the DAMA/LIBRA and ANAIS results is a different detector response to nuclear recoils, because both experiments are calibrated using x-rays/gammas. It is well known that scintillation is strongly quenched for energy deposited by nuclear recoils relative to the same energy deposited by electrons. Measurements of quenching factors (QF) in NaI scintillators show strong discrepancies. The QFs of the ANAIS-112 detectors are being determined from measurements at TUNL. In addition, a complete calibration program for the experiment using neutron sources on site is being developed.
ANAIS-112 published results are available in open access at the webpage of the Dark Matter Data Center: https://www.origins-cluster.de/odsl/dark-matter-data-center/available-datasets/anais
Data are available upon request.
Funding Agencies
ANAIS experiment operation is presently financially supported by MICIU/AEI/10.13039/501100011033 (Grants No. PID2022-138357NB-C21 and PID2019-104374GB-I00), and Unión Europea NextGenerationEU/PRTR (AstroHEP) and the Gobierno de Aragón. Funding from Grant FPA2017-83133-P, Consolider-Ingenio 2010 Programme under grants MULTIDARK CSD2009-00064 and CPAN CSD2007-00042, the Gobierno de Aragón and the LSC Consortium made possible the setting-up of the detectors. The technical support from LSC and GIFNA staff as well as from Servicios de Apoyo a la Investigación de la Universidad de Zaragoza (SAIs) is warmly acknowledged.
External links
ANAIS Experiment Website
Canfranc Underground Laboratory Website
The DAMA project Website
The Dark Matter Data Center
References
Experiments for dark matter search | ANAIS-112 | [
"Physics"
] | 2,412 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
70,589,295 | https://en.wikipedia.org/wiki/Uranium%20mining%20in%20the%20Elliot%20Lake%20area | Uranium mining in the Elliot Lake area (prior to 1955, more commonly known as the Blind River area) represents one of two major uranium-producing areas in Ontario, and one of seven in Canada.
In the mid-1950s, the influx of people to Elliot Lake seeking uranium was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world".
Mining activities peaked around 1959 and 1960 to respond to US military demand for uranium during the Cold War.
By 1958, Canada had become one of the world's leading producers of uranium and the $274 million of uranium exports that year represented Canada's most significant mineral export. By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers for export. The opening of the mines and the workers they attracted led to the creation of the planned town of Elliot Lake.
US demand slumped in the early 1960s, but the increasing use of nuclear power for electricity-generation, in Canada and abroad, prompted some mines back into action.
Production slowed until the 1990s when it ceased. The Elliot Lake area now has ten decommissioned mines and 102 million tons of uranium tailings. Former miners have been left with a twofold increase in lung cancer development and mortality rates.
Area and nomenclature
The 200 square mile area north of Lake Huron that was Canada's largest uranium producing area has been referred to by various names as time passed, specifically Algoma, Blind River and Elliot Lake.
Algoma is the name of a wider district that includes this area. Blind River was initially the nearest human settlement, located 12 miles west of the nearest mine, until Elliot Lake was created, which is close to most of the mines.
The only road access to the town of Elliot Lake is via Ontario Highway 108.
Geology
Towards the end of the Wisconsin glaciation period, ice flowed approximately south (predominantly at 190°) across the area now known as Elliot Lake. Geologists believe that as the ice sheet retreated back north, it left a large proglacial lake just north of Elliot Lake, probably as part of the main Lake Algonquin. Today's features were created from sediments that sank while the area was below the 335 m deep lake. As the ice retreated, about 10,800 years ago, the ice holding back the lake melted, causing the sand and gravel sediments to spill into the valleys.
Microscopic grains of uranium occur in ores of uraninite, brannerite and monazite amongst pyritic sheets of quartz-pebble rock.
History
Traditional territory
The area is the traditional territory of the Serpent River First Nation and also part of the Huron Robinson Treaty land.
In 2021, the Serpent River nation representatives described community consultation about mining activities as "minor."
19th century
Known at the time as the Blind River area, the Elliot Lake area is situated between the Sudbury Nickel mining area and the abandoned Bruce Mines and was subsequently prospected for gold and copper during the 19th century.
Uranium was first discovered in Canada by John Lawrence LeConte in 1847, who named the new mineral coracite. The exact location of his discovery was unclear, but was understood to be approximately 70 miles north of Sault Ste. Marie on the shore of Lake Superior. The lack of an exact location and the absence of radioactivity detectors meant that surveyors and prospectors failed to repeat his find.
Mid 20th century – uranium discovery
In 1948, Karl Gunterman, financed by Aime Breton, with a Geiger counter discovered radioactive conglomerate near Lauzon Lake in Long Township, Ontario. Their discovery was investigated by geologist Franc R. Joubin, who in 1952 found a uranium deposit in Spragge.
In 1953, Joubin persuaded Joseph H. Hirshhorn to finance exploratory drilling, and Hirshhorn signed a contract with Eldorado Mining and Refining Ltd, the Canadian Crown corporation that bought all uranium in Canada; together they quickly started the Pronto Mine. News of the mine and the 1,400 stakes claimed by Joubin and Hirshhorn resulted in a rush of prospectors to the area, who filed 8,000 claims that summer. The uptick in uranium staking was known as the Backdoor Staking Bee. Mapping by W. H. Collins of the Geological Survey of Canada led to the discovery of more uranium around Quirke Lake and Elliot Lake (the lake proper, not the town of the same name). By 1958, Eldorado Mining and Refining Ltd estimated that the area had 320 million tons of uranium ore, with on average 2.38 pounds of uranium oxide per ton.
Throughout the 1950s, the majority of the world's uranium came from Elliot Lake, which became known as the "Uranium Capital of the World".
1957 saw bustling activity as contractors blasted paths through rock to make roads, sinking shafts and building uranium processing mills. According to the University of Waterloo's Earth Sciences Museum, "Never before in the history of Canada has so much money been spent so quickly in one place."
Throughout the 1950s, the people of the Anishinaabek First Nation of the Serpent River were systematically excluded from all decisions about resource extraction in their area.
Late 1950s boom
1958 was the first full year of mining production and saw $200 million of uranium sales, making uranium Canada's number one metal export and Elliot Lake Canada's largest producer.
From 1959 to 1960, the organized town of Elliot Lake was created and other mines were constructed to meet the growing US demand for uranium.
In November 1959, the US announced its plans to stop stockpiling uranium and to cease procurement after 1962, resulting in the closure of five mines in 1960. However, by 1966 the global demand for uranium for energy purposes prompted increased production in the area; by 1970 the area had produced $1.3 billion of uranium oxide. Mining companies funded the creation of a Nuclear Museum.
The mines all started producing between 1955 and 1958, supplying US military needs.
1960s drop in demand
When the United States Atomic Energy Commission declared in 1959 that it would no longer stockpile uranium, and not renew procurement contracts beyond 1963, seven of the remaining nine mines closed. The other two mines, Denison and Nordic, remained open to supply Canadian federal uranium stockpiling needs while Pronto switched activities to supporting the nearby Pater copper mine. At the same time, Rio Algom Limited was created and became the owner of the seven closed mines, plus the Nordic and Pronto mines.
The mine closure resulted in the population of Elliot Lake town dropping from about 24,877 to 6,000 residents, having an immediate negative impact on the local economy.
Rio Algom later became a subsidiary of BHP.
1970s onwards
In early 1972, Australia, France, South Africa, and Rio Tinto Zinc formed a cartel to control the supply and pricing of uranium, using price fixing and bid rigging. This continued until the cartel was exposed by Friends of the Earth Australia in 1976.
The growing demand for uranium for the nuclear power stations being built in the 1970s prompted Rio Algom to increase production at Quirke Mine and reopen Panel Mine in 1979 and later Stanleigh Mine (1983).
Decommissioning started in 1992 and concluded in 2001, when vegetation was added at Pronto Mine. Today, all mines are fully decommissioned, meaning that mine openings are closed up, all buildings have been removed and the sites have been revegetated.
Ontario Hydro cancelled its contract to buy uranium from Rio Algom in 1990 and from Denison Mines in 1992, although Stanleigh Mine continued production until June 1996.
Currently, Rio Algom owns nine of the mines (Stanleigh, Quirke, Panel, Spanish, American, Milliken, Lacnor, Buckles and Pronto) and Denison Mines owns the others.
As of 1980, Elliot Lake supplied 90% of the uranium used in Ontario.
Mining process
Mined ore consisted of pyritized quartz conglomerate with 0.1% to 0.2% uranium. The ore was acid leached to extract the uranium using sulphuric acid.
Tailings were neutralized before being deposited, however exposed tailings released acid and radium-226 before barium chloride and lime treatment was started in the 1970s.
Individual mines
Buckles Mine
Buckles mine is located on the south of the Quirke Lake syncline, close to the Nordic Mine. In 1955, Spanish American Mines Limited bought the mine from the original owner of the claim, Buckles Algoma Uranium Mines Limited.
The uranium ore was reported to be 486,500 tons, at 0.124% U3O8, located in a ten-feet-thick zone, 75 feet below the surface.
From 1958 onwards, ore from the mine was processed at the Spanish American mine, to which it was transported and treated at a rate of approximately 500 tons per day. The mine closed in 1958 after all the ore had been extracted.
Twelve Mt of tailings remain in the tailings management area shared with Nordic Mine, under vegetative cover.
Can-Met Mine
Can-Met's location was first staked by Carl Mattaini, who sold it to Can-Met Explorations Limited. A 1958 report indicated 8,362,069 tons of ore, including 6,642,380 tons of uranium ore, with a partly proven average uranium grade of 1.832 pounds of uranium oxide per ton after dilution. The mine is located on the south shore of Quirke Lake, 15 miles from Elliot Lake.
The mine had two shafts, sunk to 2,127 and 2,395 feet. A processing plant that could treat 3,000 tons of ore per day was built in October 1957. Tailings were deposited in the natural basin south of the mill.
Denison Mine
Denison Mine (also known as Consolidated Denison Mine) is located 10 miles north of Elliot Lake. It is just south of Quirke Mine, just west of the Panel and Can-Met Mines, and just north of the Spanish American and Stanrock mines. Following the successful staking of the Pronto Mine property, mining claims were staked in the summer of 1953 by F. H. Jowsey, A. W. Stollerty and associates. These stakes were purchased by Consolidated Denison Mines Limited in 1954. Denison undertook geological surveys and diamond drilling.
The mine started in September 1957, and a mill on site could process 6,000 tons per day. The average production was 2,676 tons per day, and the ore milled had an average of 2.63 pounds of uranium oxide per ton. A 1957 estimate of ore reserves was 136,787,400 tons, above another zone 100 feet lower.
63 million tons of tailings were deposited in Williams Lake, Bear Cub Lake, and Long Lakes. The mine was decommissioned by Denison Mines in 1997.
Lacnor Mine
Lacnor Mine (also known as Lake Nordic Mine) is located on the south limb of the Quirke Lake syncline, four miles from Elliot Lake. It is located just north of Nordic Mine, just east of Milliken Mine and just south of Stanleigh Mine. It was purchased by Northspan Uranium Mines, a subsidiary of Rio Tinto.
Diamond drilling started in 1954 and found ore. Two shafts were sunk and a processing plant with a 3,800 tons per day capacity was constructed. A 1957 report indicated an ore reserve of 8,289,207 tons with an average grade of 0.101% uranium oxide. Tailings were deposited in the valley east of the mill.
The mine closed in 1960 and was decommissioned from 1997 to 2000. 2.7 Mt of tailings remain on site.
Milliken Lake Mine
Milliken Lake Mine is located approximately one mile from Elliot Lake. The site is bounded on the west and south by Nordic Mine, on the north by Stanleigh Mine and on the east by Lake Nordic Mine. The property was first staked in 1953 and purchased by Milliken Lake Uranium Mines Ltd in 1954, before being sold to Rio Tinto in 1956.
Production started in 1958; a 3,000-ton-per-day ore processing mill was constructed on site. A 1957 report indicated 7,269,846 tons of ore on site with an average grade of 0.098% uranium oxide, with possibly an extra 14 to 18 million tons more.
Tailings were deposited in Crotch Lake and Sherriff Creek. The mine closed in 1964 and was decommissioned from 1997 to 2000. 0.08 Mt of tailings remains on site underwater.
Nordic Mine
Nordic Mine is located 3 miles east of Elliot Lake; it is bounded by the Quirke Mine to the north. It was first staked in 1953 by prospectors working for two companies: Technical Mine Consultants and Preston East Dome. Once uranium was discovered, the Algom Uranium Mines Company was formed, which had control over the Nordic Mine and Quirke Mine properties.
A mining shaft was sunk in 1955 and production started in January 1957. A processing plant with a 3,000 tons per day capacity was built on site. The mine was bought by Rio Tinto. 1958 estimates of ore reserves on site were 11,258,000 tons with an average grade of 2.65 pounds of uranium oxide per ton.
Tailings were deposited in the swamp and in the valley north of the mill, where they remain with the tailings from Buckles Mine, covering 115.6 hectares. The mine closed in 1968.
Panel Mine
The Panel Mine is located 13 miles north of Elliot Lake, on the north limb of the Quirke Lake syncline. The site is bordered to the west by the Quirke and Denison Mines and on the south by Can-Met Mine. The site was staked in 1953 by Emerald Glacier Mines Ltd and purchased by Panel Consolidated Uranium Mines Ltd in 1955, before being sold to Northspan Uranium Mines Limited, a Rio Tinto subsidiary.
Two shafts were sunk on site, to depths of 1,102 and 1,250 feet, and a processing plant with a 3,000 tons per day capacity was built on site. Production started in 1958. A 1956 estimate of ore reserves on site was 6,033,000 tons with an average grade of 2.12 pounds of uranium oxide per ton.
Tailings were deposited in the nearby swamp and in the south west corner of Strike Lake.
The mine closed in 1961, but reopened in 1979 and operated until 1990. It was decommissioned from 1992 until 1996. 16 Mt of tailings remain on site underwater. The spillways of the dams that hold back the tailings were modified since closure.
Pronto Mine
Pronto Mine, the original mine in the Elliot Lake/Blind River area, is located in Long Township, 11 miles east of Blind River, close to Ontario Highway 17 and the Canadian Pacific Railway.
It has a main shaft that was deepened in 1958 and an ore processing plant with a 1,250 tons per day capacity, upgraded in 1958 to 1,500 tons per day. Tailings were deposited in the nearby valley and swamp north of the mill.
When the demand for uranium subsided, the mine switched to copper production, closing in 1970. 4.4 Mt of tailings remain on site, covering 44.7 hectares; the tailings have vegetated cover.
Quirke Mine
Quirke Mine was owned by Algom Uranium Mines Limited and is located 9 miles north of Elliot Lake, about 2.5 miles west of the northwest edge of Quirke Lake. The property was first staked in 1953; trenching and sampling were also done the same year. An 864-foot-deep shaft was started in 1954 and finished in 1955, and a processing mill with a 3,000 tons per day capacity was built on site. Production started in 1956.
The company's 1957 annual report indicates 17,942,000 tons of ore reserves, of which 1,409,000 tons had an average grade of 2.31 pounds of uranium oxide per ton.
Tailings were deposited in Manred Lake, west of the mill.
The mine closed in 1961, but reopened in 1968 and operated until 1990. It was decommissioned from 1992 until 1996. The spillways of the dams that hold back the tailings were modified since closure. 46 Mt of tailings remain on site, in tiered underwater cells, covering an area of 183.5 hectares.
Spanish American Mine
The Spanish American Mine is located 9 miles northeast of Elliot Lake, on the north limb of the Quirke Lake trough. It is bounded on the east by Stanrock Mine and on the north by Denison Mine. The location was first staked by P. Westerfield, who sold the stake to Spanish American Mines Limited, which subsequently sold it to Northspan Uranium Mines Ltd, a Rio Tinto subsidiary.
The site had two shafts, 3,200 and 3,400 feet deep, and an ore processing plant with a 2,000 tons per day capacity. Production started in May 1958. A 1957 report estimated 6,251,726 tons of ore with an average grade of 0.097% uranium oxide. Tailings were deposited in Northspan Lake.
The mine closed in 1959 due to water ingress, after only 79,000 tons of ore had been extracted. It was decommissioned from 1992 to 1996. 0.5 Mt of tailings remain on site, underwater, covering 13.2 hectares.
Stanleigh Mine
Stanleigh Mine is located 2 miles northeast of Elliot Lake and was first staked by H. S. Strouth, the chief of mining of Standard Ore and Alloys Corporation, later the Stanleigh Uranium Mining Corporation. Ownership was subsequently transferred to Milliken Lake Uranium Mines and Northspan Uranium Mines Limited (which owned Lacnor Mine).
Two shafts were started in April 1956, reaching depths of 3,415 and 3,690 feet, the deepest of all shafts in the Elliot Lake group of mines. Tailings were deposited in Crotch Lake.
The mine closed in 1960, but reopened from 1983 until 1996. In August 1993, a power failure resulted in a 2-million-litre spill of contaminated water from the mine into McCabe Lake. The Atomic Energy Control Board laid two charges against Rio Algom. It was decommissioned from 1997 until 2000. 20.5 Mt of tailings remain on site, underwater, covering an area of 376.5 hectares.
In 2017, the Canadian Nuclear Safety Commission found owners Rio Algom to be operating the mine "below expectations" due to radium releases from the decommissioned mine's effluent treatment plant that exceeded allowable limits specified in the operators license.
Stanrock Mine
Stanrock Mine is located 14 miles from Elliot Lake, on the south side of Quirke Lake. The site is adjacent to the Can-Met Mine to the east, the Spanish American Mine to the west, and Denison Mine to the north. The site was initially known as the Z-7 group and owned by Zenmac Metal Mines Ltd, which sold it to the US Stancan Uranium Mines Limited in 1954. In 1955 and 1956 the new owners found uranium via diamond drilling and built a processing plant with a 3,300 tons per day capacity. 1956 estimates of ore reserves were 5,077,800 tons with a grade of 0.109% uranium oxide, with probably 4 million additional tons unconfirmed.
Tailings were deposited in the naturally occurring basin south of the mill, along with the tailings of Can-Met mine. Six million tons of tailings remain on site. The mine was decommissioned by Denison Mines in 1999.
Health
Pollution, environmental, and ecological health
The health of the watershed in the area deteriorated as mining started. Trout from nearby lakes released an odour when cooked and female fish stopped releasing eggs. Fishing remained permitted at both Quirke Lake and Whiskey Lake, despite the radioactivity in them exceeding levels deemed tolerable by the Ontario Waterways Commission. Terry Jacobs, an elder of the Serpent River First Nation, told Anishinabek News in 2022 that pollution from the mines reduced the number of animals in the area. Other community members reported sulphur fires, dangerous sulphuric dust burning roofs, breathing difficulties, and skin rashes on children who swam in the rivers. By 1976, 20 years after the start of mining, Health and Welfare Canada advised local residents to stop drinking water from local rivers. In 1987, band member Gertrude Lewis requested action from the Government of Canada to clean up the pollution, but the request was rejected.
Just before Canada Day 1988, the Serpent River nation transported waste from the mines to the TransCanada Highway. On July 20, 1988, the Government of Canada agreed to construct a treatment plant.
The 2022 book Serpent River Resurgence by Lianne C. Leddy documents the impacts of uranium mining on Serpent River First Nation.
102 million tonnes of tailings remain at eight decommissioned mines, covering an area of 920 hectares. Rio Algom (a BHP subsidiary) and Denison Mines are both licensed by the Canadian Nuclear Safety Commission to operate the decommissioned mines.
Results from independent environmental monitoring in 2015 and 2018, commissioned by the Canadian Nuclear Safety Commission, report no expected environmental impacts. However, 2021 reports from the Serpent River First Nation describe the environmental damage as ongoing, with members unable to use their land or eat local fish.
There are twelve decommissioned uranium mines around Elliot Lake, ten of which have tailings on site.
Cancer risks
According to a 2012 study published in Nature, there is a "positive exposure-response between silica and lung cancer".
Uranium mining around Elliot Lake produced silica-laden dust at a free silica rate of 60–70%.
By the early 1970s, miners were unionized via the United Steelworkers and were growing increasingly concerned about the prevalence of cancers and poor support for sick workers by mine owners.
In 1974, union representatives learned about a paper presented by the Ontario Ministry of Health that contained details about cancer risks to uranium miners which had not been shared with the miners.
Approximately 1,000 miners who worked at Denison Mine went on a wildcat strike on 18 April 1974. Ten days later, Denison Mines agreed to improve conditions and the Ontario Premier commissioned James Milton Ham to lead a Royal Commission on the Health and Safety of Workers in Mines.
The same year, the Ontario Workmen's Compensation Board studied 15,094 people who worked in the uranium mines around Elliot Lake and Bancroft for at least one month, between 1955 and 1974. Of those 15,094 people, 94 silicosis cases were found in 1974, of which 93 were attributed to working in an Elliot Lake mine.
According to the Committee on Uranium Mining in Virginia, mines produce radon gas, which can increase lung cancer risks. Miners' exposure to radiation was not measured before 1958 and exposure limits were not enacted until 1968. Risks to miners were investigated, and the official report of that investigation quotes an Elliot Lake miner: "We have been led to believe through the years that the working environment in these mines was safe for us to work in. We have been deceived." The aforementioned 1974 study of 15,094 Ontario uranium miners found 81 former miners who had died of lung cancer. Factoring in the predicted lung cancer rate for men in Ontario led to the conclusion that by 1974 there were 36 more deaths than expected attributable to the Elliot Lake and Bancroft mines, with the additional risk appearing to be twice as high for Bancroft miners compared to Elliot Lake miners.
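The excess-death figure is a simple observed-minus-expected calculation. In the sketch below, the person-years and provincial rate are invented values chosen only so the arithmetic reproduces the reported totals; the study's actual tables are not given here:

```python
observed_deaths = 81              # reported lung-cancer deaths among miners
person_years = 150_000            # hypothetical person-years of follow-up
provincial_rate = 0.0003          # hypothetical deaths per person-year
expected = person_years * provincial_rate
print(f"expected {expected:.0f}, excess {observed_deaths - expected:.0f}")
```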
A study for the CNSC undertaken by the Occupational Cancer Research Centre at Cancer Care Ontario tracked the health of 28,959 former uranium miners over 21 years and found a twofold increase in lung cancer mortality and incidence. The BMJ (journal of the British Medical Association) reported an increased lung cancer risk: miners who worked at least 100 months in uranium mines have a twofold increased risk of developing lung cancer. The study is to be updated in 2023.
Between the mines' opening and 1980, there were 77 fatal workplace safety incidents in the Elliot Lake mines.
See also
Uranium mining in the Bancroft area (Ontario's other main uranium mining area)
Agnew Lake Mine (nearby uranium mine)
Uranium ore deposits
List of uranium mines
List of uranium mines in Ontario
List of mines in Ontario
References
Further reading
Lianne C. Leddy. Serpent River Resurgence: Confronting Uranium Mining at Elliot Lake. Toronto: University of Toronto Press, 2022.
External links
Report of the Royal Commission on the Health and Safety of Workers in Mines
Elliot Lake Nuclear Mining Museum
Denison Mines official website
BHP (owner of Rio Algom) official website
Uranium mining in Canada
Mining in Ontario
Former mines in Canada
History of mining in Canada
Mining and the environment
History of Canada (1945–1960)
History of Canada (1960–1981)
History of Canada (1982–1992)
Energy in Ontario
1950s in Ontario
1960s in Ontario
1970s in Ontario
1980s in Ontario
20th century in Ontario
Geology of Ontario
Economy of Canada
Environmental impact of nuclear power
Lung cancer
Nuclear power
Nuclear energy
Nuclear energy in Ontario
Elliot Lake | Uranium mining in the Elliot Lake area | [
"Physics",
"Chemistry",
"Technology"
] | 5,097 | [
"Nuclear power",
"Physical quantities",
"Power (physics)",
"Environmental impact of nuclear power",
"Nuclear energy",
"Nuclear physics",
"Radioactivity"
] |
70,589,301 | https://en.wikipedia.org/wiki/Carbide%20chloride | Carbide chlorides are mixed anion compounds containing chloride anions and anions consisting entirely of carbon. In these compounds there is no bond between chlorine and carbon. But there is a bond between a metal and carbon. Many of these compounds are cluster compounds, in which metal atoms encase a carbon core, with chlorine atoms surrounding the cluster. The chlorine may be shared between clusters to form polymers or layers. Most carbide chloride compounds contain rare earth elements. Some are known from group 4 elements. The hexatungsten carbon cluster can be oxidised and reduced, and so have different numbers of chlorine atoms included.
The carbide chlorides are a subset of the halide carbides; related compounds include the carbide bromides and the carbide iodides. Cluster compounds similar to these carbides may instead contain boron, hydrogen, nitrogen or phosphorus in place of carbon.
List
References
Carbides
Chlorides
Mixed anion compounds | Carbide chloride | [
"Physics",
"Chemistry"
] | 208 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Salts",
"Ions"
] |
54,904,665 | https://en.wikipedia.org/wiki/Natural%20bundle | In differential geometry, a field in mathematics, a natural bundle is any fiber bundle associated to the $s$-frame bundle $F^s(M)$ for some $s \geq 1$. It turns out that its transition functions depend functionally on local changes of coordinates in the base manifold together with their partial derivatives up to order at most $s$.
The concept of a natural bundle was introduced by Albert Nijenhuis as a modern reformulation of the classical concept of an arbitrary bundle of geometric objects.
Definition
Let $\mathcal{Mf}$ denote the category of smooth manifolds and smooth maps and $\mathcal{Mf}_m$ the category of smooth $m$-dimensional manifolds and local diffeomorphisms. Consider also the category $\mathcal{FM}$ of fibred manifolds and bundle morphisms, and the base functor $B \colon \mathcal{FM} \to \mathcal{Mf}$ associating to any fibred manifold its base manifold.
A natural bundle (or bundle functor) is a functor $F \colon \mathcal{Mf}_m \to \mathcal{FM}$ satisfying the following three properties:
$B \circ F = \mathrm{id}$, i.e. $F(M)$ is a fibred manifold over $M$, with projection denoted by $\pi_M \colon F(M) \to M$;
if $U \subseteq M$ is an open submanifold, with inclusion map $i_U \colon U \hookrightarrow M$, then $F(U)$ coincides with $\pi_M^{-1}(U)$, and $F(i_U) \colon F(U) \to F(M)$ is the inclusion;
for any smooth map $f \colon P \times M \to N$ such that $f(p, \cdot) \colon M \to N$ is a local diffeomorphism for every $p \in P$, then the function $P \times F(M) \to F(N)$, $(p, x) \mapsto F(f(p, \cdot))(x)$, is smooth.
As a consequence of the first condition, one has a natural transformation $\pi \colon F \Rightarrow \mathrm{id}$.
Finite order natural bundles
A natural bundle $F$ is called of finite order $r$ if, for every local diffeomorphism $f \colon M \to N$ and every point $x \in M$, the map $F(f)\vert_{\pi_M^{-1}(x)} \colon \pi_M^{-1}(x) \to \pi_N^{-1}(f(x))$ depends only on the jet $j^r_x f$. Equivalently, for every local diffeomorphisms $f, g \colon M \to N$ and every point $x \in M$, one has
$$j^r_x f = j^r_x g \implies F(f)\vert_{\pi_M^{-1}(x)} = F(g)\vert_{\pi_M^{-1}(x)}.$$
Natural bundles of order $r$ coincide with the fibre bundles associated to the $r$-th order frame bundles $F^r(M)$.
A classical result by Epstein and Thurston shows that all natural bundles have finite order.
Examples
An example of a natural bundle (of first order) is the tangent bundle $TM$ of a manifold $M$.
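For instance, a standard computation (stated here for illustration, not taken from the source) shows why the order is 1: a local diffeomorphism $\varphi \colon M \to N$ induces

$$T\varphi \colon TM \to TN, \qquad (x, v) \longmapsto \big( \varphi(x),\, D\varphi(x)\, v \big),$$

which involves $\varphi$ only through its first derivatives, so the restriction of $T\varphi$ to a fibre depends only on the first-order jet of $\varphi$.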
Other examples include the cotangent bundles, the bundles of metrics of signature $(r, s)$ and the bundle of linear connections.
Notes
References
Differential geometry
Manifolds
Fiber bundles | Natural bundle | [
"Mathematics"
] | 384 | [
"Topological spaces",
"Topology",
"Manifolds",
"Space (mathematics)"
] |
66,192,010 | https://en.wikipedia.org/wiki/Topological%20Hochschild%20homology | In mathematics, topological Hochschild homology is a topological refinement of Hochschild homology which rectifies some technical issues with computations in characteristic $p$. For instance, if we consider the $\mathbb{Z}$-algebra $\mathbb{F}_p$, its Hochschild homology carries the ring structure of a divided power algebra, $HH_*(\mathbb{F}_p/\mathbb{Z}) \cong \mathbb{F}_p\langle x \rangle$, and this creates a significant technical issue: in a divided power algebra the $p$-th power of the degree-2 generator $x$ vanishes, since $x^p = p!\,\gamma_p(x) = 0$ in characteristic $p$. This can be computed from the resolution of $\mathbb{F}_p$ as an algebra over $\mathbb{Z}$, namely the Koszul resolution $\mathbb{Z} \xrightarrow{p} \mathbb{Z}$. The calculation is further elaborated on the Hochschild homology page, but the key point is the pathological behavior of the ring structure on the Hochschild homology of $\mathbb{F}_p$. In contrast, the topological Hochschild homology ring has the isomorphism $\pi_* THH(\mathbb{F}_p) \cong \mathbb{F}_p[x]$ with $|x| = 2$, a polynomial algebra, giving a less pathological theory. Moreover, this calculation forms the basis of many other THH calculations, such as for smooth algebras.
Construction
Recall that the Eilenberg–MacLane spectrum construction embeds ring objects in the derived category of the integers into ring spectra over the sphere spectrum, the ring spectrum of the stable homotopy groups of spheres. This makes it possible to take a commutative ring $A$ and construct a complex analogous to the Hochschild complex using the monoidal product in ring spectra; the smash product over the sphere spectrum acts formally like the derived tensor product over the integers. We define the topological Hochschild complex of $A$ (which could be a commutative differential graded algebra, or just a commutative algebra) as the simplicial spectrum (pp. 33–34) called the bar complex

$$\cdots \longrightarrow HA \wedge HA \wedge HA \longrightarrow HA \wedge HA \longrightarrow HA$$

(only the face maps are drawn). Because simplicial objects in spectra have a realization as a spectrum, we form the spectrum

$$THH(A) := \left| [n] \mapsto HA^{\wedge (n+1)} \right|,$$

whose homotopy groups define the topological Hochschild homology of the ring object $A$.
See also
Revisiting THH(F_p)
Topological cyclic homology of the integers
Homological algebra
Algebraic topology | Topological Hochschild homology | [
"Mathematics"
] | 386 | [
"Mathematical structures",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Category theory",
"Homological algebra"
] |
66,192,375 | https://en.wikipedia.org/wiki/STAC-9 | STAC-9 is an experimental drug that was developed by GlaxoSmithKline as a small-molecule activator of the sirtuin subtype SIRT1, with potential applications in the treatment of diabetes.
See also
SRT-1460
SRT-1720
SRT-2104
SRT-2183
SRT-3025
References
Trifluoromethyl compounds
4-Pyridyl compounds
Amides
Pyrrolopyridines
Carboxamides | STAC-9 | [
"Chemistry"
] | 103 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs",
"Amides"
] |
66,199,027 | https://en.wikipedia.org/wiki/GSK-4112 | GSK-4112 is an experimental drug that was developed by GlaxoSmithKline as an agonist of Rev-ErbAα. It is used for studying regulation of the circadian rhythm and its influence on diverse processes such as adipogenesis, regulation of bone density, and inflammation.
See also
SR8278
SR9009
SR9011
References
Thiophenes
Tert-butyl compounds
Nitroarenes
4-Chlorophenyl compounds
Amines
Esters | GSK-4112 | [
"Chemistry"
] | 104 | [
"Pharmacology",
"Esters",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Organic compounds",
"Pharmacology stubs",
"Bases (chemistry)"
] |
66,199,649 | https://en.wikipedia.org/wiki/Thiothionyl%20fluoride | Thiothionyl fluoride is a chemical compound of fluorine and sulfur, with the chemical formula S=SF2. It is an isomer of disulfur difluoride (difluorodisulfane), FSSF.
Preparation
Thiothionyl fluoride can be obtained from the reaction between disulfur dichloride with potassium fluoride at about 150 °C or with mercury(II) fluoride at 20 °C.
Another possible preparation is by the reaction of nitrogen trifluoride with sulfur.
It also forms from disulfur difluoride when in contact with alkali metal fluorides.
S=SF2 can also be synthesized by the reaction of potassium fluorosulfite with disulfur dichloride:
2 KSO2F + S2Cl2 → S=SF2 + 2 KCl + 2 SO2
Properties
Thiothionyl fluoride is a colorless gas. At high temperatures and pressures, it decomposes into sulfur tetrafluoride and sulfur.
With hydrogen fluoride, it forms sulfur tetrafluoride and hydrogen sulfide.
It condenses with sulfur difluoride at low temperatures to yield 1,3-difluoro-trisulfane-1,1-difluoride.
References
Sulfur compounds
Fluorides | Thiothionyl fluoride | [
"Chemistry"
] | 253 | [
"Fluorides",
"Salts"
] |
66,200,335 | https://en.wikipedia.org/wiki/Torin-1 | Torin-1 is a drug which was one of the first non-rapalog-derived inhibitors of the mechanistic target of rapamycin (mTOR) subtypes mTORC1 and mTORC2. In animal studies it has anti-inflammatory, anti-cancer, and anti-aging properties, and shows activity against neuropathic pain.
References
Enzyme inhibitors | Torin-1 | [
"Chemistry"
] | 80 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
66,200,541 | https://en.wikipedia.org/wiki/Omzotirome | Omzotirome (INN), formerly codenamed TRC-150094, is a thyromimetic drug which acts as a metabolic modulator which restores metabolic flexibility. It has been shown to improve insulin resistance and hyperglycemia, and is in Phase III human clinical trials for the treatment of Cardiometabolic-Based Chronic Disease (CMBCD) by improving dysglycemia, dyslipidemia and hypertension.
References
Small-molecule drugs
Thyroid
Indenes
Experimental diabetes drugs
Pyrazoles
Carboxylic acids | Omzotirome | [
"Chemistry"
] | 118 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
58,106,874 | https://en.wikipedia.org/wiki/Catalina%20Curceanu | Cătălina Oana Curceanu is a Romanian physicist and lead researcher at the Istituto Nazionale di Fisica Nucleare. She researches low energy quantum chromodynamics.
Early life and education
Curceanu was born in Transylvania. She became interested in science as a child, and applied to the Mathematics and Physics Lyceum at Magurele in Bucharest. She attributes her passion for physics to her very skilled teachers. She studied physics at the University of Bucharest and graduated as valedictorian. She carried out her doctoral research using the Low Energy Antiproton Ring at CERN on the OBELIX experiment. She earned her PhD from the Horia Hulubei National Institute of Physics and Nuclear Engineering.
Research and career
In 1992 Curceanu joined the Istituto Nazionale di Fisica Nucleare. She uses the DAFNE (DAΦNE) collider at Frascati. She is part of the VIP2 experiment (Violation of the Pauli Principle) in the Laboratori Nazionali del Gran Sasso. In 2010 she was awarded Personality of the Year by the Romanian Academy in Rome. She works at CERN on the OBELIX experiment, looking for Exotic mesons, and DIRAC, looking for exotic pionium.
She published the popular science book Dai Buchi Neri all'adroterapia. Un Viaggio nella Fisica Moderna (From Black Holes to Hadron Therapy: A Journey into Modern Physics) in 2013 with Springer. The book considers concepts of modern physics, including the standard model, black holes and neutrinos. In 2015 she was awarded a $85,000 grant from FQXi and the John Templeton Foundation for her quantum physics research. Her proposal considered collapse models and the measurement problem. She used an ultrapure germanium detector to test the radiation it emits. Her recent work involves the SIDDHARTA experiment, looking at the strong interaction and strangeness.
Curceanu was the Australian Institute of Physics Women in Physics lecturer in 2016. In her lectures she asked "Quo Vadis the Universe". She has spoken about quantum computers at TEDx Brașov and TEDx Cluj-Napoca. She won the 2017 European Physical Society Emmy Noether Distinction for Women in Physics for her contributions to low-energy QCD. She won a Visiting International Scholar Award from the University of Wollongong in 2017, researching detector systems for high-precision spectroscopy in fundamental physics. She is involved with several outreach and education activities.
References
Romanian physicists
Romanian women physicists
University of Bucharest alumni
Particle physicists
Living people
Year of birth missing (living people)
People associated with CERN | Catalina Curceanu | [
"Physics"
] | 538 | [
"Particle physicists",
"Particle physics"
] |
73,489,092 | https://en.wikipedia.org/wiki/False%20bottom%20%28sea%20ice%29 | False bottom is a form of sea ice that forms at the interface between meltwater and seawater via the process of double-diffusive convection of heat and salt.
Characteristics
False bottoms have been observed under drifting Arctic sea ice, under land-fast ice in Greenland, and at Ward Hunt Ice Shelf. Being located under ice, false bottoms are not easy to investigate, and the current observations are quite variable. For example, the areal coverage of false bottoms was 50% at the drifting station Charlie in 1959, 15% during SHEBA expedition in 1998 and 20% during MOSAiC expedition in 2020. Both physical modelling and in situ observations suggest that false bottoms may decrease sea ice melt up to 8%. Meanwhile, measurements from manual ice thickness gauges in Fram Strait in the summer of 2020 showed a nearly 50% reduction in bottom ice melt due to false bottoms. The salinity and temperature of under-ice meltwater and false bottoms are controlled by both ice melt and desalination. The salinity of false bottoms was 1.0 during the ARCTIC 91 expedition, 0.4 during SHEBA and 2.3 during MOSAiC. The average thickness of false bottoms was 20 cm during the ARCTIC 91 expedition, 15 cm during SHEBA, and 8 cm during MOSAiC. The presence of false bottoms can increase the rates of sea ice desalination.
Formation
During Arctic summer, snow and ice melt results in the accumulation of low-salinity meltwater. Most of this meltwater is transferred to the ocean, while some of it migrates to the surface melt ponds, the sea ice matrix, and under-ice meltwater layers. False bottoms form due to a substantial difference in freezing temperatures of water with different salinities. Their formation in summer was first documented by Fridtjof Nansen in 1897. During MOSAiC expedition, false bottoms occurred in areas of thin and ponded sea ice encircled by thicker sea ice ridges and were formed at the same time when surface melt ponds drained. False bottoms are formed at the upper part of the interface of meltwater and seawater. The ice crystals initially grow downwards towards seawater, and further grow horizontally until a formation of a horizontal ice layer. After the formation of this horizontal layer, false bottoms constantly migrate upwards due to conductive heat flux, supported by the temperature difference between meltwater and seawater, and the rate of such migration is mostly defined by its thickness. The growth and melt of false bottoms are controlled by the physical parameters of the ocean. False bottoms are often observed in areas of thin ice covered by surface melt ponds and encircled by thicker pressure ridges, with ridge draft limiting the depth of under-ice meltwater layers.
Under-ice meltwater layer
The false bottom formation is directly linked to the appearance of under-ice meltwater layers. The appearance of such meltwater layers often happens after surface melt pond drainage during the melt season. The depth of under-ice meltwater layers is usually limited by the draft of thicker and usually deformed ice, surrounding thinner ice with under-ice meltwater. The salinity of under-ice meltwater depends on the sources of meltwater including snow and ice, on the desalination of the ice above under-ice meltwater layers, and on the presence of false bottoms. During the MOSAiC expedition in Fram Strait, the average thickness of meltwater layers was 0.46 m under first-year ice and 0.26 m under second-year ice. The thickness of meltwater layers under multiyear ice during the SHEBA expedition in the Beaufort Sea was 0.35–0.47 m. Observations for fast multiyear ice in the Wandel Sea in North Greenland showed under-ice meltwater layers with 1.1–1.2 m thickness, later transformed into thick platelet ice layer with 0.01 m thick false bottoms under it.
Observation techniques
False bottoms may create errors in estimates of sea ice thickness from its draft measurements. They can be investigated manually using ice coring and drilling, hotwire thickness gauges or remotely using underwater sonars. Ground-based upward-looking sonar cannot distinguish "normal" or parental sea ice from false bottoms. Similarly, drifting buoys measuring sea-ice temperature (ice mass balance buoys) cannot accurately detect false bottoms but can identify thicker under-ice meltwater layers.
References
Sea ice
Cryosphere | False bottom (sea ice) | [
"Physics",
"Environmental_science"
] | 885 | [
"Physical phenomena",
"Earth phenomena",
"Hydrology",
"Sea ice",
"Cryosphere"
] |
73,493,450 | https://en.wikipedia.org/wiki/Active%20circulator | In electrical engineering, an active circulator is an active non-reciprocal three-port device that couples a microwave or radio-frequency signal only to an adjacent port in the direction of circulation. Other (external) circuitry connects to the circulator ports via transmission lines. An ideal three-port active circulator (for circulation $1 \to 2 \to 3 \to 1$) has the following scattering matrix:

$$S = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
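As a quick numerical check of these properties (an illustrative sketch, not from the source; numpy is assumed):

```python
import numpy as np

# Ideal three-port circulator, circulation 1 -> 2 -> 3 -> 1 (S[out, in]).
S = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)

print(np.allclose(S.conj().T @ S, np.eye(3)))  # True: unitary, hence lossless
print(np.allclose(S, S.T))                     # False: non-reciprocal

a = np.array([1, 0, 0], dtype=complex)         # wave incident on port 1
print(np.abs(S @ a))                           # [0, 1, 0]: all power exits port 2
```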
An active circulator can be constructed using one of several different technologies. One early technology is the use of transistors as the active devices to perform the non-reciprocal function. Varactor circuits are another technology, relying on a time-varying transmission line structure, driven by a separate pump signal. A third technology utilizes spatiotemporally-modulated rings of coupled resonators. Another design approach relies on staggered commutation and integrated circuit techniques.
Compared to passive (ferrite) circulators, active circulators have the advantages of small size, low mass, and simple integration with other circuitry. System designers must weigh these factors with the disadvantages of active circulators: they require DC power and sometimes a separate pump or clock signal, they can be nonlinear, and can introduce significant noise into the signal path.
References
Electrical components | Active circulator | [
"Technology",
"Engineering"
] | 252 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
74,859,016 | https://en.wikipedia.org/wiki/Coulomb%20drag | In condensed matter physics, Coulomb drag (also called electron drag or current drag) refers to a transport phenomenon between two spatially isolated electrical conductors, where passing a steady electric current through one of them induces a voltage difference in the other. It is named after the Coulomb interaction between charge carriers (usually electrons) responsible for the effect.
The effect was first predicted by the Soviet physicist M. B. Pogrebinsky in 1977. The first experimental verification of the phenomenon was carried out between 1991 and 1992 in two-dimensional electron gases by the group of James P. Eisenstein, working with gallium arsenide (GaAs) double quantum wells.
In the presence of magnetic fields it leads to analogous phenomena, like the Hall drag or the magneto-Coulomb drag. When spin-polarized currents are involved, it is termed spin Coulomb drag.
Description
The phenomenon involves two spatially isolated layers, separated by vacuum or an insulator. When a direct electric current is driven in the active layer, it drags carriers in the passive layer due to the Coulomb interaction; this charge imbalance leads to a drag voltage $V_D$ induced in the passive layer. For ballistic conduction, the drag resistance $R_D$ is expected to be proportional to the temperature squared, $R_D \propto T^2$. In a realistic system, the temperature dependence of the resistance deviates from this regime due to the presence of phonons (at low temperatures compared to the Fermi temperature $T_F$), plasmons (at high temperatures, of the order of $T_F$), disorder, and magnetic fields.
References
Mesoscopic physics | Coulomb drag | [
"Physics",
"Materials_science"
] | 332 | [
"Quantum mechanics",
"Mesoscopic physics",
"Condensed matter physics"
] |
74,861,460 | https://en.wikipedia.org/wiki/DESeq2 | DESeq2 is a software package in the field of bioinformatics and computational biology for the statistical programming language R. It is primarily employed for the analysis of high-throughput RNA sequencing (RNA-seq) data to identify differentially expressed genes between different experimental conditions. DESeq2 employs statistical methods to normalize and analyze RNA-seq data, making it a valuable tool for researchers studying gene expression patterns and regulation. It is available through the Bioconductor repository.
It was first presented in 2014. As of September 2023, its use has been cited over 30,000 times.
Features
One of the key steps in the analysis of RNA-seq data is data normalization. DESeq2 employs the "size factor" normalization method, which adjusts for differences in sequencing depth between samples. This normalization ensures that the expression values of genes are comparable across samples, allowing for accurate identification of differentially expressed genes. In addition to size factor normalization, DESeq2 also employs a variance-stabilizing transformation, which further enhances the quality of the data by stabilizing the variance across different expression levels. This combination of normalization techniques minimizes bias and improves the accuracy of differential expression analysis.
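To illustrate the size-factor idea, here is a minimal Python sketch of a median-of-ratios calculation of the kind DESeq2's normalization is based on; it is not DESeq2's actual R code, and the function name and toy counts matrix are invented for illustration:

```python
import numpy as np

def size_factors(counts):
    """Median-of-ratios size factors for a genes x samples count matrix.

    For each gene, take the geometric mean of its counts across samples;
    a sample's size factor is then the median, over genes with no zero
    counts, of the ratio of that sample's count to the geometric mean.
    """
    log_counts = np.log(counts)
    log_geo_means = log_counts.mean(axis=1)          # per-gene log geometric mean
    ok = np.isfinite(log_geo_means)                  # exclude genes with any zero count
    log_ratios = log_counts[ok] - log_geo_means[ok][:, None]
    return np.exp(np.median(log_ratios, axis=0))     # one size factor per sample

# Toy data: 4 genes x 3 samples; sample 2 was sequenced about twice as deeply.
counts = np.array([[10.0, 20.0, 11.0],
                   [100.0, 200.0, 95.0],
                   [33.0, 66.0, 30.0],
                   [5.0, 10.0, 6.0]])
print(size_factors(counts))  # sample 2's factor comes out roughly twice the others'
```

Dividing each sample's counts by its size factor places all samples on a common scale regardless of sequencing depth.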
DESeq2 makes available negative binomial distribution models to account for the over-dispersion commonly observed in RNA-seq data. This modeling approach takes into consideration the variability that is not adequately explained by a simple Poisson distribution. By incorporating the negative binomial distribution, DESeq2 accurately models the dispersion of gene expression counts and provides more reliable estimates of differential expression.
DESeq2 also offers an adaptive shrinkage procedure, known as the "apeglm" method, which is particularly useful when dealing with small sample sizes. This technique effectively shrinks the log-fold changes of gene expression estimates, reducing the impact of extreme values and improving the stability of results. This is especially valuable for researchers working with limited biological replicates, as it helps to mitigate the problem of low statistical power.
Further, DESeq2 allows users to incorporate relevant covariates into their analyses. This feature enables researchers to account for potential confounding factors, such as batch effects or experimental conditions, that can influence gene expression. By including covariates in the analysis, DESeq2 offers a more accurate assessment of the true differential expression patterns in the data.
Use
DESeq2 is interfaced through R, via the Bioconductor repository. The repository provides comprehensive documentation and tutorials, making it accessible to a wide range of researchers.
References
Applied statistical analysis
Software using the GNU Lesser General Public License
R scientific libraries
RNA sequencing
Cross-platform free software
Free software for Linux
Free software for Windows
Free software for macOS
Bioinformatics software | DESeq2 | [
"Chemistry",
"Biology"
] | 580 | [
"Genetics techniques",
"Bioinformatics software",
"Bioinformatics",
"RNA sequencing",
"Molecular biology techniques"
] |
62,464,531 | https://en.wikipedia.org/wiki/Phylogenetic%20classification%20of%20bony%20fishes | The phylogenetic classification of bony fishes is a classification of bony fishes based on phylogenies inferred using molecular and genomic data for nearly 2,000 fishes. The first version was published in 2013 and resolved 66 orders. The latest version (version 4) was published in 2017 and recognised 72 orders and 79 suborders.
Phylogeny
The following cladograms show the phylogeny of the Osteichthyes down to order level, with the number of families in parentheses.
The 43 orders of spiny-rayed fishes are related as follows:
References
External links
www.deepfin.org - Phylogeny of all Fishes (redirects to https://sites.google.com/site/guilleorti/home)
Phylogenetics
Bony fish | Phylogenetic classification of bony fishes | [
"Biology"
] | 166 | [
"Bioinformatics",
"Phylogenetics",
"Taxonomy (biology)"
] |
69,103,946 | https://en.wikipedia.org/wiki/Warburg%E2%80%93Christian%20method | The Warburg–Christian method is an ultraviolet spectroscopic protein and nucleic acid assay method based on the absorbance of UV light at 260 nm and 280 nm wavelengths. Proteins generally absorb light at 280 nanometers due to the presence of tryptophan and tyrosine. Nucleic acids absorb more at 260 nm, primarily due to purine and pyrimidine bases. The Warburg–Christian method combines measurements at these wavelengths to estimate the amounts of protein and nucleic acid present. The original description of the method appeared in 1941.
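A commonly cited form of the resulting estimate (for a 1 cm path length; the coefficients are the ones usually quoted for the Warburg–Christian correction) is

$$\text{protein (mg/mL)} \approx 1.55\, A_{280} - 0.76\, A_{260},$$

where the $A_{260}$ term subtracts the nucleic acid contribution to the absorbance at 280 nm.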
The method is named for its creators, the German cancer researcher Otto Heinrich Warburg, Nobel Prize winner, and his employee Walter Christian of the Kaiser Wilhelm Institute for Biology in Berlin.
References
Protein methods
Analytical chemistry
Chemical tests | Warburg–Christian method | [
"Chemistry",
"Biology"
] | 157 | [
"Biochemistry methods",
"Protein methods",
"Protein biochemistry",
"Chemical tests",
"nan",
"Analytical chemistry stubs"
] |
56,413,547 | https://en.wikipedia.org/wiki/Ruth%20Baker | Ruth Elizabeth Baker is a British applied mathematician and mathematical biologist at the University of Oxford whose research interests include pattern formation, morphogenesis, and the mathematical modeling of cell biology and developmental biology.
Education and career
Baker read mathematics at Wadham College, Oxford, and earned a doctorate (D.Phil.) at the University of Oxford in 2005. Her dissertation, Periodic Pattern Formation in Developmental Biology: A Study of the Mechanisms Involved in Somite Formation, was jointly supervised by biologist Santiago Schnell and mathematician Philip Maini, who was also the doctoral supervisor of Schnell.
After postdoctoral research in Germany, the US, and Australia, funded by a UK Research Council Junior Research Fellowship, she returned to a permanent position at Oxford. She is a professor of applied mathematics at the Mathematical Institute of the University of Oxford and a tutorial fellow in mathematics in St Hugh's College, Oxford since 2010.
Recognition
Baker was a 2014 winner of the Whitehead Prize of the London Mathematical Society "for her outstanding contributions to the field of Mathematical Biology". She was awarded a Leverhulme Research Fellowship for her work in "efficient computational methods for testing biological hypotheses" in 2017.
References
External links
Home page
Year of birth missing (living people)
Living people
British mathematicians
British women mathematicians
British applied mathematicians
Theoretical biologists
Alumni of Wadham College, Oxford
Alumni of the University of Oxford
Academics of the University of Oxford
Fellows of St Hugh's College, Oxford | Ruth Baker | [
"Biology"
] | 294 | [
"Bioinformatics",
"Theoretical biologists"
] |
56,415,007 | https://en.wikipedia.org/wiki/Shaping%20processes%20in%20crystal%20growth | Shaping processes in crystal growth are a collection of techniques for growing bulk crystals of a defined shape from a melt, usually by constraining the shape of the liquid meniscus by means of a mechanical shaper. Crystals are commonly grown as fibers, solid cylinders, hollow cylinders (or tubes), and sheets (or plates). More complex shapes such as tubes with a complex cross section, and domes have also been produced. Using a shaping process can produce a near net shape crystal and reduce the manufacturing cost for crystals which are composed of very expensive or difficult to machine materials.
List of shaping processes
Horizontal Ribbon Growth (HRG, 1959)
Edge-defined Film-fed Growth (EFG, 1960)
Low Angle Silicon Sheet (LASS, 1981)
Micro-pulling-down (μ-PD)
Stepanov technique
String ribbon
Edge-defined film-fed growth
Edge-defined film-fed growth or EFG was developed for sapphire growth in the late 1960s by Harold LaBelle and A. Mlavsky at Tyco Industries.
A shaper (also referred to as a die) having dimensions approximately equal to the crystal to be grown rests above the surface of the melt which is contained in a crucible. Capillary action feeds liquid material to a slit at the center of the shaper. When a seed crystal is touched to the liquid film and raised upwards, a single crystal forms at the interface between the solid seed and the liquid film. By continuing to pull the seed upwards, the crystal expands as a liquid film forms between the crystal and the top surface of the shaper. When the film reaches the edges of the shaper, the final crystal shape matches that of the shaper.
The exact dimensions of the crystal will deviate from the dimensions of the shaper because every material has a characteristic growth angle, the angle formed at the triple interface between the solid crystal, liquid film, and the atmosphere. Because of the growth angle, varying the height of the meniscus (i.e. the thickness of the liquid film) will change the dimensions of the crystal. The meniscus height is affected by pulling speed and crystallization rate. The crystallization rate depends on the temperature gradient above the shaper, which is determined by the configuration of the hot-zone of the crystal growth furnace, and the power applied to the heating elements during growth. The difference in thermal expansion coefficients between the shaper material and the crystal material can also cause appreciable size differences between the shaper and the crystal at room temperature for crystals grown at high temperatures.
The shaper material should be non-reactive with both the melt and growth atmosphere, and should be wet by the melt.
It is possible to grow multiple crystals from a single crucible using the EFG technique, for example by growing many parallel sheets.
Applications
Sapphire: EFG is used to grow large plates of sapphire, primarily for use as robust infrared windows for defense and other applications. Windows about 7 mm thick x 300 mm wide x 500 mm long are produced. The shaper is typically made from molybdenum.
Silicon: EFG was used in the 2000s by Schott Solar to produce silicon sheets for solar photovoltaic panels, by pulling a thin-walled (~250-300 μm) octagon with faces 12.5 cm on a side and diameter about 38 cm, about 5–6 m long. The shaper is typically made from graphite.
Other oxides: Many high melting-point oxides have been grown by EFG, among them Ga2O3, LiNbO3, and Nd3+:(LuxGd1−x)3Ga5O12 (Nd:LGGG).
Often an iridium shaper is used.
Horizontal ribbon growth
Horizontal ribbon growth or HRG is a method developed and patented by William Shockley in 1959 for silicon growth. By this method a thin crystalline sheet is pulled horizontally from the top of a crucible. The melt level must be constantly replenished in order to keep the surface of the melt at the same height as the edge of the crucible from which the sheet is being pulled. By blowing a cooling gas at the surface of the growing sheet, very high growth rates (>400 mm/min) can be achieved. The method relies on the solid crystal floating on the surface of the melt, which works because solid silicon is less dense than liquid silicon.
Micro-pulling-down
The micro-pulling-down or μ-PD technique uses a small round opening in the bottom of the crucible to pull a crystalline fiber downward. Hundreds of different crystalline materials have been grown by this technique.
A variation called pendant drop growth or PDG uses a slot in the bottom of the crucible to produce crystalline sheets in a similar manner.
Stepanov technique
The Stepanov technique was developed by A.V. Stepanov in the Soviet Union after 1950. The method involves pulling a crystal vertically through a shaper located at the surface of the melt. The shaper is not necessarily fed by a capillary channel as in EFG. The shaper material may be wetted or non-wetted by the melt, as opposed to EFG where the shaper material is wetted. The technique has been used to grow metal, semiconductor, and oxide crystals.
Czochralski growth using a floating shaper known as a "coracle" was done for some III-V semiconductors prior to the development of advanced control-systems for diameter control.
String ribbon
The string ribbon method, also known as dendritic web or edge-supported pulling, has been used to grow semiconductor sheets including indium antimonide, gallium arsenide, germanium, and silicon.
A seed crystal with the width and thickness matching the sheet to be grown is dipped into the top surface of the melt. Strings of a suitable material are fixed to the vertical edges of the seed and extend down through holes in the bottom of the crucible to a spool. As the seed is raised, string is continuously fed through the melt and a liquid film forms between the seed, the strings, and the melt. The film crystallizes to the seed, forming a sheet or ribbon.
References
Semiconductor growth
Industrial processes
Crystals | Shaping processes in crystal growth | [
"Chemistry",
"Materials_science"
] | 1,281 | [
"Crystallography",
"Crystals"
] |
56,416,473 | https://en.wikipedia.org/wiki/Sulfamoyl%20fluoride | In organic chemistry, sulfamoyl fluoride is an organic compound having the chemical formula F−SO2−N(−R1)−R2. Its derivatives are called sulfamoyl fluorides.
Sulfamoyl fluorides are contrasted with the sulfonimidoyl fluorides with structure R1-S(O)(F)=N-R2.
Production
Sulfamoyl fluorides can be made by treating secondary amines with sulfuryl fluoride (SO2F2) or sulfuryl chloride fluoride (SO2ClF). Cyclic secondary amines work as well, provided they are not aromatic.
Sulfamoyl fluorides can also be made from sulfamoyl chlorides, by reacting with a substance that can supply the fluoride ion, such as NaF, KF, HF, or SbF3.
Sulfonamides can undergo a Hofmann rearrangement when treated with a difluoro-λ3-bromane to yield a singly substituted N-sulfamoyl fluoride.
See also
Fluorosulfonate
Sulfonyl halide
Sulfuryl fluoride
References
Functional groups
Leaving groups | Sulfamoyl fluoride | [
"Chemistry"
] | 269 | [
"Functional groups",
"Organic chemistry stubs",
"Leaving groups"
] |
77,819,408 | https://en.wikipedia.org/wiki/Camlipixant | Camlipixant is an investigational new drug that is being evaluated for the treatment of chronic cough. It is a P2X3 receptor antagonist.
See also
gefapixant
References
Benzamides
Carbamates
Morpholines
Imidazopyridines | Camlipixant | [
"Chemistry"
] | 57 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
77,821,098 | https://en.wikipedia.org/wiki/Ceralasertib | Ceralasertib is an investigational new drug that is being evaluated for the treatment of cancer. It is an ATR kinase inhibitor.
References
Cyclopropanes
Morpholines
Pyrimidines
Pyrrolopyridines
Sulfoximines | Ceralasertib | [
"Chemistry"
] | 59 | [
"Pharmacology",
"Sulfoximines",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
77,823,491 | https://en.wikipedia.org/wiki/4-Aminobutanal | 4-Aminobutanal, also known as γ-aminobutyraldehyde, 4-aminobutyraldehyde, or GABA aldehyde, is a metabolite of putrescine and a biological precursor of γ-aminobutyric acid (GABA). It can be converted into GABA by the actions of diamine oxidase (DAO) and aminobutyraldehyde dehydrogenase (ABALDH) (e.g., ALDH9A1). Putrescine is converted into 4-aminobutanal via monoamine oxidase B (MAO-B). However, biosynthesis of GABA from polyamines like putrescine is a minor metabolic pathway in the brain.
The related compound γ-hydroxybutyraldehyde (GHBAL) is a prodrug of γ-hydroxybutyric acid (GHB) as well as a metabolic intermediate in the conversion of 1,4-butanediol (1,4-BD) into GHB. However, aliphatic aldehydes like GHBAL are caustic, strong-smelling, and foul-tasting, and ingestion is likely to be unpleasant and result in severe nausea and vomiting.
See also
N-Acetyl-γ-aminobutyraldehyde (N-acetyl-GABAL)
References
Aldehydes
Amines
GABA analogues
Neurotransmitter precursors
Prodrugs | 4-Aminobutanal | [
"Chemistry"
] | 322 | [
"Functional groups",
"Prodrugs",
"Chemicals in medicine",
"Amines",
"Bases (chemistry)"
] |
77,824,235 | https://en.wikipedia.org/wiki/Neurotransmitter%20prodrug | A neurotransmitter prodrug, or neurotransmitter precursor, is a drug that acts as a prodrug of a neurotransmitter. A variety of neurotransmitter prodrugs have been developed and used in medicine. They can be useful when the neurotransmitter itself is not suitable for use as a pharmaceutical drug owing to unfavorable pharmacokinetic or physicochemical properties, for instance high susceptibility to metabolism, short elimination half-life, or lack of blood–brain barrier permeability. Besides their use in medicine, neurotransmitter prodrugs have also been used as recreational drugs in some cases.
Monoamine prodrugs
Monoamine neurotransmitter prodrugs include the catecholamine precursors and prodrugs L-phenylalanine, L-tyrosine, L-DOPA (levodopa), L-DOPS (droxidopa), and dipivefrine (O,O'-dipivalylepinephrine), as well as the serotonin and melatonin precursors and prodrugs L-tryptophan and L-5-hydroxytryptophan (5-HTP; oxitriptan). Other dopamine prodrugs, including etilevodopa, foslevodopa, melevodopa, XP-21279, DopAmide, DA-Phen, O,O'-diacetyldopamine, O,O'-dipivaloyldopamine, docarpamine, gludopa, and gludopamine, have also been developed. Dopamantine (N-adamantanoyl dopamine) is another possible attempt at a dopamine prodrug. Other serotonin prodrugs have been developed as well, such as the renally-selective L-glutamyl-5-hydroxy-L-tryptophan (glu-5-HTP).
5-HTP is additionally a prodrug of N-methylated tryptamine psychedelic trace amines, such as N-methylserotonin (NMS; norbufotenin) and bufotenin (5-hydroxy-N,N-dimethyltryptamine; 5-HO-DMT). The same is also true of L-tryptophan, which is transformed into tryptamine as well as into N-methyltryptamine (NMT) and N,N-dimethyltryptamine (N,N-DMT). Dependent on these transformations, both tryptophan and 5-HTP produce the head-twitch response (HTR), a behavioral proxy of psychedelic effects, at sufficiently high doses in animals. O-Acetylbufotenine and O-pivalylbufotenine are thought to be centrally active prodrugs of the peripherally selective bufotenin.
Although they are not endogenous neurotransmitter prodrugs, "false" or "substitute" neurotransmitter prodrugs, such as α-methyltryptophan and α-methyl-5-hydroxytryptophan (which are prodrugs of α-methylserotonin, a substitute neurotransmitter of serotonin), have also been developed. Analogously, ibopamine and fosopamine are prodrugs of epinine (N-methyldopamine; deoxyepinephrine).
GABA prodrugs
γ-Aminobutyric acid (GABA) prodrugs include progabide and tolgabide. Picamilon has been claimed to be a prodrug of GABA, but has not actually been demonstrated to be converted into GABA. Pivagabine was once thought to be a prodrug of GABA, but this proved not to be the case.
4-Amino-1-butanol is known to be converted into GABA through the actions of aldehyde reductase (ALR) and aldehyde dehydrogenase (ALDH). 4-Amino-1-butanol is to GABA as 1,4-butanediol (4-hydroxy-1-butanol; 1,4-BD) is to γ-hydroxybutyric acid (GHB) (with 1,4-BD being a well-known prodrug of GHB). The metabolic intermediate γ-aminobutyraldehyde (GABAL) is also converted into GABA.
GHB prodrugs
A number of γ-hydroxybutyric acid (GHB) prodrugs are known. These include 1,4-butanediol (1,4-BD) and γ-butyrolactone (GBL), as well as the metabolic intermediate γ-hydroxybutyraldehyde (GHBAL).
Acetylcholine prodrugs
Acetylcholine precursors and prodrugs like choline, phosphatidylcholine (lecithin), citicoline (CDP-choline), and choline alphoscerate (α-GPC) are known and have been researched.
References
Neurotransmitter precursors
Neurotransmitters
Prodrugs | Neurotransmitter prodrug | [
"Chemistry"
] | 1,188 | [
"Neurochemistry",
"Neurotransmitters",
"Prodrugs",
"Chemicals in medicine"
] |
77,829,349 | https://en.wikipedia.org/wiki/Kitaev%20chain | In condensed matter physics, the Kitaev chain is a simplified model for a topological superconductor. It models a one-dimensional lattice featuring Majorana bound states. The Kitaev chain has been used as a toy model of semiconductor nanowires for the development of topological quantum computers. The model was first proposed by Alexei Kitaev in 2000.
Description
Hamiltonian
The tight-binding Hamiltonian of a Kitaev chain describes a one-dimensional lattice with $N$ sites and spinless particles at zero temperature, subject to nearest-neighbour hopping interactions; in second quantization formalism it is given by

$$H = -\mu \sum_{j=1}^{N} c_j^\dagger c_j - \sum_{j=1}^{N-1} \left( t\, c_j^\dagger c_{j+1} + t\, c_{j+1}^\dagger c_j - \Delta\, c_j c_{j+1} - \Delta^*\, c_{j+1}^\dagger c_j^\dagger \right), \qquad \Delta = |\Delta| e^{i\phi},$$

where $\mu$ is the chemical potential, $c_j^\dagger$ and $c_j$ are creation and annihilation operators, $t$ is the energy needed for a particle to hop from one location of the lattice to another, $|\Delta|$ is the induced superconducting gap (p-wave pairing) and $\phi$ is the coherent superconducting phase. This Hamiltonian has particle-hole symmetry, as well as time-reversal symmetry.
The Hamiltonian can be rewritten using Majorana operators, given by

$$\gamma_j^A = c_j + c_j^\dagger, \qquad \gamma_j^B = i \left( c_j^\dagger - c_j \right),$$

which can be thought of as the real and imaginary parts of the creation operator. These Majorana operators are Hermitian and anticommute,

$$\{ \gamma_j^\alpha, \gamma_k^\beta \} = 2 \delta_{jk} \delta_{\alpha\beta}, \qquad \alpha, \beta \in \{A, B\}.$$
Using these operators the Hamiltonian can be rewritten (taking $\phi = 0$) as

$$H = \frac{i}{2} \sum_{j} \left[ -\mu\, \gamma_j^A \gamma_j^B + \left( t + |\Delta| \right) \gamma_j^B \gamma_{j+1}^A + \left( |\Delta| - t \right) \gamma_j^A \gamma_{j+1}^B \right].$$
Trivial phase
In the limit $t = |\Delta| = 0$, we obtain the following Hamiltonian

$$H_{\text{trivial}} = -\frac{i \mu}{2} \sum_{j=1}^{N} \gamma_j^A \gamma_j^B,$$

where the Majorana operators are coupled on the same site. This condition is considered a topologically trivial phase.
Non-trivial phase
In the limit $\mu = 0$ and $|\Delta| = t$, we obtain the following Hamiltonian

$$H_{\text{non-trivial}} = i t \sum_{j=1}^{N-1} \gamma_j^B \gamma_{j+1}^A,$$

where every Majorana operator is coupled to a Majorana operator of a different kind in the next site. By assigning a new fermion operator $\tilde{c}_j = \tfrac{1}{2} \left( \gamma_{j+1}^A - i \gamma_j^B \right)$, the Hamiltonian is diagonalized, as

$$H = 2t \sum_{j=1}^{N-1} \left( \tilde{c}_j^\dagger \tilde{c}_j - \tfrac{1}{2} \right),$$

which describes a new set of $N-1$ Bogoliubov quasiparticles with energy $2t$. The missing mode, formed from the operators $\gamma_1^A$ and $\gamma_N^B$, couples the Majorana operators from the two endpoints of the chain; as this mode does not appear in the Hamiltonian, it requires zero energy. This mode is called a Majorana zero mode and is highly delocalized. As the presence of this mode does not change the total energy, the ground state is two-fold degenerate. This condition is a topological superconducting non-trivial phase.
The existence of the Majorana zero mode is topologically protected from small perturbations due to symmetry considerations. For the Kitaev chain the Majorana zero modes persist as long as $|\mu| < 2t$ and the superconducting gap is finite. The robustness of these modes makes them a candidate for qubits as a basis for a topological quantum computer.
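As a numerical illustration (a sketch under the conventions above, not from the source; numpy, the helper name kitaev_bdg and the parameter values are ours), diagonalizing the Bogoliubov–de Gennes matrix of a finite open chain shows the two near-zero energies of the non-trivial phase:

```python
import numpy as np

def kitaev_bdg(n, mu, t, delta):
    """Bogoliubov-de Gennes matrix of an open Kitaev chain with n sites.

    Basis (c_1..c_n, c_1^dag..c_n^dag), real parameters assumed; the matrix
    is real symmetric and eigenvalues come in +/-E pairs (overall energy
    scale depends on convention).
    """
    h = -mu * np.eye(n)
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -t            # nearest-neighbour hopping
    d = np.zeros((n, n))
    for j in range(n - 1):
        d[j, j + 1], d[j + 1, j] = delta, -delta  # antisymmetric p-wave pairing
    return np.block([[h, d], [-d, -h]])

for mu in (0.0, 3.0):  # |mu| < 2t: non-trivial; |mu| > 2t: trivial
    e = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(40, mu, t=1.0, delta=1.0))))
    print(f"mu = {mu}: two smallest |E| = {e[:2]}")
# In the non-trivial phase the two smallest |E| are ~0 (the Majorana zero
# mode); in the trivial phase the spectrum has a finite gap.
```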
Bulk case
Using the Bogoliubov–de Gennes formalism it can be shown that for the bulk case (infinite number of sites) the energy yields

$$E(k) = \pm \sqrt{ \left( 2t \cos k + \mu \right)^2 + 4 |\Delta|^2 \sin^2 k },$$

and it is gapped, except for the case $|\mu| = 2t$ and wave vector $k = 0$ or $k = \pi$ (for $\mu = -2t$ and $\mu = 2t$, respectively). For the bulk case there are no zero modes. However a topological invariant exists, given by

$$\nu = \operatorname{sgn} \left[ \operatorname{Pf} A(0)\, \operatorname{Pf} A(\pi) \right],$$

where $A(k)$ is the bulk Hamiltonian written in the Majorana basis and $\operatorname{Pf}$ is the Pfaffian operation. For $|\mu| > 2t$, the invariant is strictly $\nu = +1$ and for $|\mu| < 2t$ it is $\nu = -1$, corresponding to the trivial and non-trivial phases from the finite chain, respectively. This relation between the topological invariant from the bulk case and the existence of Majorana zero modes in the finite case is called a bulk-edge correspondence.
Experimental efforts
One possible realization of Kitaev chains is using semiconductor nanowires with strong spin–orbit interaction to break spin-degeneracy, like indium antimonide or indium arsenide. A magnetic field can be applied to induce Zeeman coupling to spin polarize the wire and break Kramers degeneracy. The superconducting gap can be induced using Andreev reflection, by putting the wire in the proximity to a superconductor. Realizations using 3D topological insulators have also been proposed.
There is no single definitive way to test for Majorana zero modes. One proposal to experimentally observe these modes is using scanning tunneling microscopy. A zero-bias peak in the conductance could be the signature of a topological phase. The Josephson effect between two wires in the superconducting phase could also help to demonstrate these modes.
In 2023 the QuTech team from Delft University of Technology reported the realization of a "poor man's" Majorana, that is, a Majorana bound state that is not topologically protected and therefore only stable for a very small range of parameters. It was obtained in a Kitaev chain consisting of two quantum dots in a superconducting nanowire strongly coupled by normal tunneling and Andreev tunneling, with the state arising when the rates of the two tunneling processes are equal. Some researchers have raised concerns, suggesting that an alternative mechanism to that of Majorana bound states might explain the data obtained.
See also
Su–Schrieffer–Heeger chain
References
Condensed matter physics
Superconductivity | Kitaev chain | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 963 | [
"Electrical resistance and conductance",
"Physical quantities",
"Superconductivity",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
72,085,266 | https://en.wikipedia.org/wiki/Control%20coefficient%20%28biochemistry%29 | In biochemistry, control coefficients are used to describe how much influence a given reaction step has on the flux or concentration of the species at steady state. This can be accomplished experimentally by changing the expression level of a given enzyme and measuring the resulting changes in flux and metabolite levels. In theory, any observables, such as growth rate, or even combinations of observables, can be defined using a control coefficient; but flux and concentration control coefficients are by far the most commonly used.
The simplest way to look at control coefficients is as the scaled derivatives of the steady-state change in an observable with respect to a change in enzyme activity ($e_i$ for each enzyme $i$). For example, the flux control coefficients ($C^J_{e_i}$, where $J$ is the steady-state reaction rate, or flux) can be written as:

$$C^J_{e_i} = \frac{dJ}{de_i} \frac{e_i}{J} \approx \frac{\%\ \text{change in flux } J}{\%\ \text{change in enzyme activity } e_i},$$

while the concentration control coefficients ($C^{s_j}_{e_i}$, where $s_j$ is the concentration of species $j$) can be written as:

$$C^{s_j}_{e_i} = \frac{ds_j}{de_i} \frac{e_i}{s_j} \approx \frac{\%\ \text{change in concentration } s_j}{\%\ \text{change in enzyme activity } e_i}.$$
The approximation in terms of percentages makes control coefficients easier to measure and more intuitively understandable.
Control coefficients can have both negative and positive values. A negative value indicates that the observable in question decreases as a result of the change in enzyme activity.
It is important to note that control coefficients are not fixed values but will change depending on the state of the pathway or organism. If an organism shifts to a new nutritional source, then the control coefficients in the pathway will change. As such, control coefficients form a central component of metabolic control analysis.
Formal Definition
One criticism of the concept of the control coefficient as defined above is that it is dependent on being described relative to a change in enzyme activity. Instead, the Berlin school defined control coefficients in terms of changes to local rates brought about by any suitable parameter, which could include changes to enzyme levels or the action of drugs. Hence a more general definition is given by the following expressions:

$$C^J_{v_i} = \left( \frac{dJ}{dp} \frac{p}{J} \right) \bigg/ \left( \frac{\partial v_i}{\partial p} \frac{p}{v_i} \right)$$

and concentration control coefficients by

$$C^{s_j}_{v_i} = \left( \frac{ds_j}{dp} \frac{p}{s_j} \right) \bigg/ \left( \frac{\partial v_i}{\partial p} \frac{p}{v_i} \right)$$

In the above expressions, $p$ could be any convenient parameter; for example, a drug, or changes in enzyme expression. The advantage is that the control coefficient becomes independent of the applied perturbation. For control coefficients defined in terms of changes in enzyme expression, it is often assumed that the effect of a change in enzyme activity on the local rate is proportional, so that $v_i \propto e_i$, in which case the two definitions coincide.
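A hedged numerical sketch of the first definition (the pathway, rate laws and names below are invented for illustration): for a linear two-step pathway with $v_1 = e_1 (x_0 - s)$ and $v_2 = e_2 s$, the flux control coefficients can be estimated by a scaled finite difference and checked against the analytic values $C^J_{e_1} = e_2/(e_1 + e_2)$ and $C^J_{e_2} = e_1/(e_1 + e_2)$:

```python
def steady_flux(e1, e2, x0=10.0):
    """Steady-state flux of the toy pathway X0 -> S -> sink.

    Rate laws: v1 = e1*(x0 - s), v2 = e2*s. Setting v1 = v2 gives
    s = e1*x0/(e1 + e2), hence J = e2*s.
    """
    s = e1 * x0 / (e1 + e2)
    return e2 * s

def flux_control_coefficient(e1, e2, step, rel=1e-6):
    """C = (dJ/J) / (de/e), estimated by perturbing one enzyme level."""
    j0 = steady_flux(e1, e2)
    j1 = steady_flux(e1 * (1 + rel), e2) if step == 1 else steady_flux(e1, e2 * (1 + rel))
    return (j1 - j0) / (j0 * rel)

c1 = flux_control_coefficient(2.0, 1.0, step=1)  # analytic: 1/3
c2 = flux_control_coefficient(2.0, 1.0, step=2)  # analytic: 2/3
print(c1, c2, c1 + c2)  # ~0.333, ~0.667; the sum ~1 (flux summation theorem)
```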
Relationship to rate-limiting steps
In normal usage, the rate-limiting step or rate-determining step is defined as the slowest step of a chemical reaction that determines the speed (rate) at which the overall reaction proceeds. The flux control coefficients do not measure this kind of rate-limitingness. For example, in a linear chain of reactions at steady-state, all steps carry the same flux. That is, there is no slow or fast step with respect to the rate or speed of a reaction. The flux control coefficient, instead, measures how much influence a given step has on the steady-state flux. A step with a high flux control coefficient means that changing the activity of the step (by changing the expression level of the enzyme) will have a large effect on the steady-state flux through the pathway and vice versa.
Historically, the concept of the rate-limiting step was also related to the notion of the master step. However, this drew much criticism due to a misunderstanding of the concept of the steady state.
See also
Elasticity coefficient
Metabolic control analysis
Summation theorems (biochemistry)
References
Biochemistry methods
Metabolism
Mathematical and theoretical biology
Systems biology | Control coefficient (biochemistry) | [
"Chemistry",
"Mathematics",
"Biology"
] | 679 | [
"Biochemistry methods",
"Mathematical and theoretical biology",
"Applied mathematics",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Systems biology"
] |
72,087,465 | https://en.wikipedia.org/wiki/Te%20lapa | Te lapa is a Polynesian term for an unexplained light phenomenon underneath, or on the surface of, the ocean. Te lapa has been loosely translated as "flashing light", "underwater lightning", "the flashing", or "something that flashes". It was used by historic and modern Polynesians as a navigation aid to find islands in the Pacific Ocean. In some instances, it has been theorized to be bioluminescent or electromagnetic in nature. Other hypotheses include the interference patterns of intersecting waves creating a raised curve acting as a lens, though this would not explain the source of the light. David Lewis speculated that te lapa may originate from the luminescence of organisms, or be related to deep swell, ground swell, or backwash waves from reefs or islands.
History
Te lapa was brought to the attention of academia by David Lewis with the publication of his book We, the Navigators in 1972. The book dispelled the former academic belief that Polynesians colonized the islands haphazardly by drifting and without navigational aids. Lewis documented many non-instrumental methods used for navigation, most explainable by science except for te lapa. Later on in 1993, Marianne George would voyage with Lewis and together worked with Kaveia, a native of Taumako, to define the origin and nature of te lapa.
Eventually George would witness te lapa on several occasions with help from Kaveia. She described it as a natural phenomenon and used for piloting, best seen at night. The light is followed toward its origin from islands, or to reorient boat pilots at sea. Kaveia noted that te lapa is used for navigation no more than 120 miles from shore, and rarely as close as 2 miles from shore due to the island already being visible from that distance. It is typically white in color, though its color may be dependent upon the makeup of the water. It was also described as having the shape of a straight line. Lewis, who had also seen the lights, described it as "streaking", "flickering", "flashes", "darts", "bolts", or "glowing plaques" but never as jagged, like lightning. Lewis noted that te lapa would travel slower farther out at sea, and faster when closer to shore, often having a "rapid to-and-fro jerking character." Lewis was instructed by Bongi, a native of Matema atoll, that te lapa was best seen 80 to 100 miles from shore.
Other Polynesian cultures are likely to have different names for the same phenomenon. On the island of Nikunau it is referred to as "te mata" and "ulo aetahi" (Glory of the Seas). On Tonga, "ulo aetahi" may be "ulo a'e tahi" and have other names such as "te tapa" translated as "to burst forth with light." Lewis noted that Tikopians were unaware of te lapa.
George, having been to sea many times, had seen many "ocean lights" from known sources, ruled out what te lapa was not. Ruled-out phenomena include: ball lightning, tektites, bioluminescence, luminescence, St. Elmo's fire, shooting stars/meteors, satellites, comets, unique colors visible at sunset or when the sun is occluded, celestial bodies, military firing ranges, fishing and military buoys, ice mirages, light mirroring, rainbows, glories, crepuscular rays, sun dogs, moon dogs, iceblink, looming from clouds, aurorae, asterisms, earthquake lights, and a large range of light shadowing, fractured lights, color, and mirage arcs from light phenomenon above 60° latitude. George also mentioned that Kaveia interpreted other known and explained phenomena, as well as other unexplained phenomena such as "Te Akua" also known as "the devil lights".
Skepticism
Richard Feinberg, a Kent State University professor, has, however, claimed that the phenomenon has not been scientifically written about, that there are few references to it, and that there are disagreements among sailors about how the phenomenon operates. Still, Feinberg interviewed sailors who believed in te lapa and said that they used it to navigate. He concluded his publication on te lapa with the remark "[a]lthough I am not quite ready dismiss te lapa out of hand, it is hard to see how a phenomenon so rare and difficult to find could be a dependable navigational tool, particularly in an emergency situation—precisely when it would be needed."
References
Further reading
Light
Unexplained phenomena
Polynesian navigation
Polynesian words and phrases | Te lapa | [
"Physics"
] | 986 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
67,686,875 | https://en.wikipedia.org/wiki/Baroclinic%20instabilities%20in%20the%20ocean | A baroclinic instability is a fluid dynamical instability of fundamental importance in the atmosphere and ocean. It can lead to the formation of transient mesoscale eddies, with a horizontal scale of 10-100 km. In contrast, flows on the largest scale in the ocean are described as ocean currents; the largest-scale eddies are mostly created by the shearing of two ocean currents, and static mesoscale eddies are formed by the flow around an obstacle (as seen in the animation at eddy (fluid dynamics)). Mesoscale eddies are circular currents with swirling motion and account for approximately 90% of the ocean's total kinetic energy. Therefore, they are key in the mixing and transport of, for example, heat, salt and nutrients.
In a baroclinic medium, the density depends on both the temperature and pressure. The effect of the temperature on the density allows lines of equal density (isopycnals) and lines of equal pressure (isobars) to intersect. This is in contrast to a barotropic fluid, in which the density is only a function of pressure. For this barotropic case, isobars and isopycnals are parallel. The intersecting of isobars and isopycnals in a baroclinic medium may cause baroclinic instabilities to occur by the process of sloping convection. The sizes of baroclinic instabilities and therefore also the eddies they create scale with the Rossby radius of deformation, which strongly varies with latitude for the ocean.
Instability and eddy generation
In a baroclinic fluid, the thermal-wind balance holds, which is a combination of the geostrophic balance and the hydrostatic balance. This implies that isopycnals can slope with respect to the isobars. Furthermore, this also results in changing horizontal velocities with height as a result of horizontal temperature and therefore density gradients.
Under the thermal-wind balance, geostrophic balance and hydrostatic balance, a flow is in equilibrium. However, this is not the equilibrium of least energy. A reduction in slope of the isopycnals would lower the center of gravity and therefore also the potential energy. It would also reduce the pressure gradient, leading to an increase in the kinetic energy. However, under the thermal-wind balance, a decrease in slope of the isopycnals cannot occur spontaneously. It requires a change of potential vorticity. Under certain conditions, slight perturbations of the equilibrium under the thermal-wind balance may increase, leading to larger perturbations from the initial state and thus the growth of an instability.
It is often considered that baroclinic instability is the mechanism which extracts potential energy stored in horizontal density gradients and uses this "eddy potential energy" to drive eddies.
Sloping convection
These baroclinic instabilities may be initiated by the process of 'sloping convection' or 'slanted thermal convection'. To understand this, consider a fluid in steady state and under the thermal-wind balance. Initially, a fluid parcel is at location A. The fluid parcel is slightly perturbed to location B, while still retaining its original density. Therefore, the fluid parcel is now in a location with a lower density than itself and the parcel will simply sink back to its original position; the fluid parcel is stable. However, when a parcel is displaced to location C, it is surrounded by fluid with a higher density than the parcel itself. Due to its relatively low density with respect to its surroundings, the parcel will float up even further. Now a small perturbation grows into a larger one, which implies a baroclinic instability.
A criterion for an instability to occur can be defined. As stated before, in a baroclinic fluid, the thermal-wind balance holds, which implies the following two relations:

$$\frac{\partial u}{\partial z} = \frac{g}{f \rho_0} \frac{\partial \rho}{\partial y} \qquad \text{and} \qquad \frac{\partial v}{\partial z} = -\frac{g}{f \rho_0} \frac{\partial \rho}{\partial x},$$

where $\rho$ is the density (with $\rho_0$ a constant reference density) and $x$, $y$ and $z$ are the spatial coordinates in the horizontal (latitudinal and longitudinal) and vertical direction, respectively. $u$ and $v$ represent the horizontal (zonal and meridional) components of the velocity vector in the $x$- and $y$-direction, respectively. Thus $\partial\rho/\partial x$ and $\partial\rho/\partial y$ are the two horizontal density gradients. $g$ is the gravitational acceleration at the surface of the Earth and $f$ the Coriolis parameter.
Therefore a horizontal density gradient in the $y$-direction leads to a gradient of the horizontal flow velocity $u$ over depth $z$.
The slope of the displacement is defined as

$$s = \frac{w'}{v'},$$

where $v'$ and $w'$ are the horizontal and vertical velocities of the perturbation, respectively.
An instability now occurs when the slope of the displacement is smaller than the slope of the isopycnals. The isopycnals can be mathematically described as surfaces of constant $\rho(y, z)$, with slope $-\frac{\partial\rho/\partial y}{\partial\rho/\partial z}$. Now this results in an instability when:

$$0 < \frac{w'}{v'} < -\frac{\partial\rho/\partial y}{\partial\rho/\partial z}.$$
From now on, only a two-layer system, with $s_1$ and $s_2$ the slopes of the top and bottom layer, respectively, is considered to simplify the problem. This is now similar to the classic Phillips model. From the thermal-wind balance it now follows that

$$u_1 - u_2 = -\frac{g'}{f_0} \frac{\partial \eta}{\partial y},$$

where $u_1$ and $u_2$ are the zonal velocities of the top and bottom layer, $\eta$ is the displacement of the interface between the layers, $g' = g\,\Delta\rho/\rho_0$ is the reduced gravity and $f_0$ the reference Coriolis parameter of the beta-plane approximation $f = f_0 + \beta y$.
Performing a scale analysis on the slope of the perturbation allows physical quantities to be assigned to this mathematical problem. This results in

$$\frac{w'}{v'} \sim \frac{\beta L}{f_0}\,\frac{H}{L} = \frac{\beta H}{f_0},$$

where $H$ is the scale height, $L$ the horizontal length scale, and $\beta$ the Rossby parameter.
From this it can be stated that an instability occurs when

$$\frac{\beta H}{f_0} < \frac{f_0\,\Delta U}{g'} \quad \text{or} \quad \Delta U > \frac{\beta\, g' H}{f_0^2},$$

where $g'$ is the reduced gravity and $\Delta U = u_1 - u_2$ is the velocity difference between the lower and upper layer. This criterion can be used to identify whether a small perturbation will grow into a larger one and thus whether an instability is expected to occur. From this it follows that some kind of shear is needed to obtain an instability, that it is easier to get an instability for long waves (perturbations) with large $L$, and that the $\beta$-term, and therefore the beta-effect, is stabilizing.

Furthermore, for the baroclinic Rossby radius of deformation it holds that $\lambda = \sqrt{g'H}/f_0$. Now the instability criteria simplify to

$$\Delta U > \beta\,\lambda^2 \quad \text{or} \quad \frac{\Delta U}{\beta\,\lambda^2} > 1.$$
From this analysis it also follows that baroclinic instabilities are important for small Rossby numbers, where $\mathrm{Ro} = U/(f_0 L) \ll 1$.
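As a minimal numerical sketch of the simplified two-layer criterion above (instability when the shear exceeds $\beta\lambda^2$), the snippet below checks the condition for assumed, illustrative mid-latitude values.

```python
import numpy as np

def phillips_unstable(delta_u, g_reduced, H, f0, beta):
    """Two-layer instability check: potentially unstable when the shear
    exceeds beta * lambda^2, with lambda = sqrt(g' H) / f0 the baroclinic
    Rossby radius of deformation (criterion sketched above)."""
    rossby_radius = np.sqrt(g_reduced * H) / f0
    return delta_u > beta * rossby_radius**2, rossby_radius

# Illustrative mid-latitude values (assumptions, not observations)
unstable, lam = phillips_unstable(delta_u=0.1, g_reduced=0.02, H=1000.0,
                                  f0=1.0e-4, beta=2.0e-11)
print(f"Rossby radius ~ {lam / 1e3:.0f} km, unstable: {unstable}")
```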
Observations of baroclinic instabilities and eddies
Recently, many observations of mesoscale eddies in the ocean have been made using sea surface height data from altimeters. It has been shown that regions with the highest growth rate of baroclinic instabilities indeed match the regions which are rich in eddies. Furthermore, the trajectories of both cyclonic and anticyclonic eddies can be studied. From this it follows that approximately the same numbers of cyclonic and anticyclonic eddies are observed, and it is therefore concluded that the generation of the two types is very similar. However, when considering longer-lived eddies, anticyclonic eddies clearly dominate. This implies that cyclonic eddies are less stable and therefore decay more rapidly. In addition, there are no eddies present above shallows in the ocean due to topographic steering as a result of the Taylor–Proudman theorem. Lastly, extremely long-lived eddies with lifetimes over 1.5 to 2 years are only found in gyres, most likely because the background flow is small there.
Four different types of baroclinic instabilities can be distinguished:
Eady type
Charney surface type
Charney bottom type
Phillips type
These four types are based on classical models (the Eady model, the Charney model for both Charney types, and the Phillips model), but can also be distinguished from observations. Overall, of the observed baroclinic instabilities, 47% are of the Charney surface type, 33% of the Phillips type, 13% of the Eady type and only 7% of the Charney bottom type. These different types of baroclinic instabilities all lead to different types of eddies. Important here is ψ, the absolute value of the complex eigenfunction of the stream function of the horizontal velocity. It represents the vertical structure of the baroclinic instability and ranges from 0, which implies a very low chance of an instability of this type (and thus also of an eddy forming), to 1, which means a high chance.
The Eady type has a maximum ψ of one at the top and bottom, and a minimum of around 0.5 at mid-depth. For this type of model, an eddy thus occurs at both the surface and the bottom of the ocean. It is therefore also called the surface- and bottom-intensified type and is found mainly at high latitudes. The Charney surface type is surface-intensified and has a maximum ψ at the surface, whereas the Charney bottom type only shows baroclinic instabilities at the bottom. For the Charney bottom type, ψ is small at the surface and increases to one with increasing depth. The Charney surface type is found in the subtropics, whereas the Charney bottom type is present at high latitudes. Lastly, for the Phillips type, ψ is zero at the surface, strongly increases to one just below the surface, and then slowly decreases again to zero with increasing depth. The locations of these Phillips-type instabilities agree with the occurrence of subsurface eddies, again supporting the idea that baroclinic instabilities lead to the formation of eddies. They are mostly found in the tropics and the eastern return flow of the subtropical gyres.
The type of baroclinic instability present was also found to depend on the mean background flow. An Eady type is preferred for a strong eastward mean flow in the upper ocean and a weak westward flow in the deeper ocean. For the Charney bottom type this is similar, but the westward flow in the deeper ocean is stronger. The Charney surface and Phillips types exist for weaker background flows, which also explains why these are dominant in the ocean gyres.
References
Fluid dynamics
Physical oceanography | Baroclinic instabilities in the ocean | [
"Physics",
"Chemistry",
"Engineering"
] | 2,038 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Physical oceanography",
"Piping",
"Fluid dynamics"
] |
67,687,783 | https://en.wikipedia.org/wiki/Topographic%20steering | In fluid mechanics, topographic steering is the effect of potential vorticity conservation on the motion of a fluid parcel. This means that the fluid parcels will not only react to physical obstacles in their path, but also to changes in topography or latitude. The two types of 'fluids' where topographic steering is mainly observed in daily life are air (air can be considered a compressible fluid in fluid mechanics) and water in respectively the atmosphere and the oceans. Examples of topographic steering can be found in, among other things, paths of low pressure systems and oceanic currents.
In 1869, Kelvin published his circulation theorem, which states that a barotropic, ideal fluid with conservative body forces conserves the circulation around a closed loop. To generalise this, Bjerknes published his own circulation theorem in 1898. Bjerknes extended the concept to inviscid, geostrophic and baroclinic fluids, resulting in the addition of terms to the equation.
Mathematical description
Circulation
The exact mathematical descriptions of the different potential vorticities can all be obtained from the circulation theorem of Bjerknes, which is stated as

$$\frac{DC}{Dt} = -\oint \frac{dp}{\rho} - 2\Omega\,\frac{DA_e}{Dt}.$$

Here $C$ is the circulation, the line integral of the velocity along a closed contour. Also, $D/Dt$ is the material derivative, $\rho$ is the density, $p$ is the pressure, $\Omega$ is the angular velocity of the frame of reference and $A_e$ is the area projection of the closed contour onto the equatorial plane. This means the bigger the angle between the contour and the equatorial plane, the smaller this projection becomes.
The formula states that the change of the circulation along a fluid's path is affected by the variation of density in pressure coordinates and by the change in equatorial projection of the contour. Kelvin assumed both a barotropic fluid and a constant projection. Under these assumptions the right hand side of the equation is zero and Kelvin's theorem is found.
Shallow water
When considering a relatively thin layer of fluid of constant density, with on the bottom a topography and on top a free surface, the shallow water approximation can be used. Using this approximation, Rossby showed in 1939, by integrating the shallow water equations over the depth of the fluid, that
$$\frac{D}{Dt}\left(\frac{\zeta + f}{H}\right) = 0. \qquad (1)$$

Here $\zeta$ is the relative vorticity, $f$ is the Coriolis parameter and $H$ is the height of the water layer. The quantity inside the material derivative, $(\zeta + f)/H$, was later called the shallow water potential vorticity.
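A small sketch of what conservation of the shallow water potential vorticity in equation (1) implies for a single water column is given below; the depths and parameters are assumed for illustration.

```python
# Conservation of shallow-water potential vorticity, q = (zeta + f) / H:
# a water column moving into shallower water must change its relative
# vorticity (or its latitude, through f) to keep q constant.
f_initial = 1.0e-4   # Coriolis parameter at the starting latitude [s^-1]
zeta_initial = 0.0   # start from a flow with no relative vorticity
H_initial, H_final = 4000.0, 3000.0   # water depths [m] (assumed)

q = (zeta_initial + f_initial) / H_initial

# If the column stays at the same latitude (f unchanged), the new relative
# vorticity follows from q * H_final = zeta_final + f:
zeta_final = q * H_final - f_initial
print(f"zeta after shoaling: {zeta_final:.2e} s^-1 (anticyclonic if negative)")
```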
Layered atmosphere
When considering an atmosphere with multiple layers of constant potential temperature, the quasi-2D shallow water equations on a beta plane can be used. In 1940, Rossby used this to show that
$$\frac{D}{Dt}\left(\frac{\zeta_\theta + f}{\Delta p}\right) = 0. \qquad (2)$$

Here $\zeta_\theta$ is the relative vorticity on an isentropic surface, $f$ is the Coriolis parameter and $\Delta p$ is a quantity measuring the weight of unit cross-section of an individual air column in the layer. This last quantity can also be seen as a measure of the vortex depth. The potential vorticity defined here is also called the Rossby potential vorticity.
Continuous atmosphere
When the approximation of the discrete layers is relaxed and the fluid becomes continuous, another potential vorticity can be defined which is conserved. It was shown by Ertel in 1942 that
$$\frac{D}{Dt}\left(\frac{\boldsymbol{\omega}_a \cdot \nabla\theta}{\rho}\right) = 0. \qquad (3)$$

Here $\boldsymbol{\omega}_a$ is the absolute vorticity, $\nabla\theta$ is the gradient in potential temperature and $\rho$ the density. This potential vorticity is also called the Ertel potential vorticity.
To get to this result, first recall the circulation theorem from Kelvin
$$\frac{DC}{Dt} = 0.$$

If the coordinate system is transformed to the one of the local tangent plane coordinates and potential temperature is used as the vertical coordinate, the equation can be slightly rewritten to

$$\frac{D}{Dt}\left(C_\theta + f\,A_\theta\right) = 0,$$

where now $C_\theta$ is the local circulation in the frame of reference, $f$ is the Coriolis parameter and $A_\theta$ is the area on an isentropic surface over which the circulation $C_\theta$ is taken.

Because the local circulation can be approximated as a product between the area and the relative vorticity on the isentropic surface, $C_\theta \approx \zeta_\theta A_\theta$, the circulation equation yields

$$\frac{D}{Dt}\left[\left(\zeta_\theta + f\right)A_\theta\right] = 0.$$
When a fluid parcel is between two isentropic layers and the pressure difference between these layers increases, the fluid parcel is 'stretched'. This is because it wants to conserve the potential temperature at each side of the parcel. To conserve mass, the fluid parcel thins horizontally while it is stretched vertically. So the area of the isentropic surface, $A_\theta$, is a function of how quickly the lines of equal potential temperature change with pressure:

$$A_\theta = -g\,\frac{\delta m}{\delta p} = -\frac{g\,\delta m}{\delta\theta}\,\frac{\partial\theta}{\partial p}.$$

In the end this yields

$$\frac{D}{Dt}\left[\left(\zeta_\theta + f\right)\left(-g\,\frac{\partial\theta}{\partial p}\right)\right] = 0,$$

which is exactly the result found by Ertel, written in a slightly different way. Note that when assuming a layered atmosphere, the gradient in the potential temperature becomes a finite difference and the result (2) for a layered atmosphere can be recovered. Also note that when the fluid is incompressible, the layer depth becomes a measure for the change in potential temperature. Then the result for shallow water potential vorticity can be extracted again.
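As an illustration of equation (3), the sketch below evaluates an approximate Ertel potential vorticity on gridded fields, keeping only the vertical component of the absolute vorticity in the dot product, which dominates for large-scale, mostly horizontal flow; the fields and grid spacings are synthetic assumptions.

```python
import numpy as np

def ertel_pv(u, v, theta, rho, dx, dy, dz, f):
    """Approximate Ertel PV, q = (zeta + f) * (dtheta/dz) / rho.
    u, v, theta, rho are arrays shaped (ny, nx, nz); dx, dy, dz are grid
    spacings [m]; f is the (constant) Coriolis parameter [s^-1]."""
    zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)  # dv/dx - du/dy
    dtheta_dz = np.gradient(theta, dz, axis=2)
    return (zeta + f) * dtheta_dz / rho

# Synthetic example: random horizontal flow in a stably stratified column
rng = np.random.default_rng(1)
ny, nx, nz = 4, 4, 5
u = rng.normal(0.0, 1.0, (ny, nx, nz))
v = rng.normal(0.0, 1.0, (ny, nx, nz))
theta = np.tile(300.0 + 10.0 * np.linspace(0.0, 1.0, nz), (ny, nx, 1))
rho = np.full((ny, nx, nz), 1.2)
q = ertel_pv(u, v, theta, rho, dx=1e5, dy=1e5, dz=1e3, f=1e-4)
print(q.shape, q.mean())
```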
Effect
The different definitions of potential vorticity conservation, resulting from different approximations, can be used to explain phenomena observed here on earth. Fluid parcels will move along lines of constant potential vorticity.
Oceans
Because the scale of large flows in the oceans is much larger than the depth of the ocean, the shallow water approximation and thus (1) can often be used. On top of that, the changes in relative vorticity are very small with respect to the changes in the Coriolis parameter. The direct result of that is that for a fluid parcel a change in ocean floor depth will have to be compensated by a change in latitude. In both hemispheres this means that a rising ocean floor, so a decrease in water depth, results in a deflection equatorwards.
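For illustration, the sketch below computes the equatorward deflection required when a column with negligible relative vorticity crosses a rise in the ocean floor, following contours of f/H; the starting latitude and depths are assumptions.

```python
import numpy as np

# For large-scale ocean flow, zeta << f, so columns follow contours of f/H.
# A column moving from depth H1 to H2 must move to a latitude where
# f2 = f1 * H2 / H1 in order to conserve its potential vorticity.
omega = 7.2921e-5               # Earth's rotation rate [s^-1]
lat1 = np.deg2rad(45.0)         # starting latitude (assumed)
H1, H2 = 4000.0, 3500.0         # depths before/after the rise [m] (assumed)

f1 = 2 * omega * np.sin(lat1)
f2 = f1 * H2 / H1               # Coriolis parameter required by PV conservation
lat2 = np.rad2deg(np.arcsin(f2 / (2 * omega)))
print(f"column deflects from 45.0° to {lat2:.1f}° latitude (equatorward)")
```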
This phenomenon can explain different currents found on earth. One of them is the specific path the water takes in the Antarctic Circumpolar Current. This path is not a straight line, but curves according to the bathymetry.
Another one is the water flowing through the Luzon Strait. Researchers Metzger and Hurlburt showed that the existence of three small shoals can explain the deflection of the current away from the strait instead of flowing through the strait.
Atmosphere
In the atmosphere, topographic steering can also be observed. In most cases, the simple modeled layer of the atmosphere and thus (2) can explain the phenomena. When an isentropic layer flows zonally from west to east over a mountain, the topographic steering can create a wave-like pattern on the lee-side and eventually form an alternating pattern of ridges and troughs.
Upon approach of the mountain, the layer depth will increase slightly. This is because the incline of the isentropic surfaces is less steep at the top of the layer than at the bottom. When the layer depth increases, the change in potential vorticity is countered by an increase in relative vorticity as well as the Coriolis parameter. The vortex will begin to move away from the equator and begin to rotate cyclonically.
During the crossing of the mountain, the effect is reversed due to the shrinking of the layer depth. The vortex will rotate anti-cyclonically and move towards the equator. As the vortex leaves the mountain, the resulting latitude is closer to the equator than before. This means vortices will have a cyclonic rotation on the lee-side of the mountain and be turning northwards. The Coriolis parameter and relative vorticity increase and decrease in antiphase. This results in an alternation of cyclonic and anti-cyclonic flows after the mountain. The change in the Coriolis parameter and relative vorticity work against each other, creating a wave-like phenomenon.
When looking at zonal flow from east to west, this effect does not occur. This is because the change in the Coriolis parameter and the change in relative vorticity work in the same direction. The flow will return to zonal again some time after crossing the mountain.

The effect described is often credited as the source of the tendency for cyclogenesis on the lee sides of mountains. Examples of this are the so-called Colorado lows, troughs originating from air passing over the Rocky Mountains.
See also
Potential vorticity
Circulation (physics)
Kelvin's circulation theorem
Shallow water equations
References
Oceanography
Atmospheric dynamics
Fluid mechanics | Topographic steering | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,672 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Atmospheric dynamics",
"Oceanography",
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
67,692,128 | https://en.wikipedia.org/wiki/Thermohaline%20staircase | Thermohaline staircases are patterns that form in oceans and other bodies of salt water, characterised by step-like structures observed in vertical temperature and salinity profiles; the patterns are formed and maintained by double diffusion of heat and salt. The ocean phenomenon consists of well-mixed layers of ocean water stacked on top of each other. The well-mixed layers are separated by high-gradient interfaces, which can be several meters thick. The total thickness of staircases ranges typically from tens to hundreds of meters.
Two types of staircases are distinguished. Salt-fingering staircases can be found at locations where relatively warm, salty water overlies relatively colder, fresher water. Here, large-scale temperature and salinity both increase upward, making the mixing process of salt fingering possible. These types of staircases are found, for example, beneath the Mediterranean outflow, in the Tyrrhenian Sea, and in the northeast Caribbean. Diffusive staircases can be found at locations where both temperature and salinity increase downward, for example in the Arctic Ocean and in the Weddell Sea. An important feature of thermohaline staircases is their extreme stability in space and time. They can persist for several years or more and can extend for hundreds of kilometers. The interest in thermohaline staircases is partly due to the fact that the staircases represent mixing hot spots in the main thermocline.
Extensive definition and detection
To determine the presence of thermohaline staircases, the following steps can be taken according to the algorithm designed by Van der Boog.
The first step of the algorithm is to identify the mixed layers by locating weak vertical density gradients in conservative temperature and absolute salinity. To do so, the threshold gradient method is used with a threshold on the vertical gradient of the potential density $\sigma_1(p, p_{\mathrm{ref}})$, with $p$ the pressure and $p_{\mathrm{ref}}$ the reference pressure. The vertical conservative temperature, absolute salinity, and potential density gradients are all below the threshold value by meeting these three conditions:

$$\rho_0\,\alpha\left|\frac{\partial\Theta}{\partial p}\right| < \left.\frac{\partial\sigma_1}{\partial p}\right|_{\mathrm{threshold}}, \qquad \rho_0\,\beta\left|\frac{\partial S_A}{\partial p}\right| < \left.\frac{\partial\sigma_1}{\partial p}\right|_{\mathrm{threshold}}, \qquad \left|\frac{\partial\sigma_1}{\partial p}\right| < \left.\frac{\partial\sigma_1}{\partial p}\right|_{\mathrm{threshold}},$$

with $\alpha$ the thermal expansion coefficient, $\beta$ the haline contraction coefficient, $\rho_0$ the reference density, $\Theta$ the conservative temperature, and $S_A$ the absolute salinity.
The second step is to define the interface, which is the part of the water column in the middle of two mixed layers. It is required that the conservative temperature, absolute salinity, and potential density variations in the interface should be larger than the variations within each mixed layer to ensure a stepped structure. Therefore the following conditions should be met:

$$\left|\Delta\Theta_{\mathrm{int}}\right| > \max\left(\left|\Delta\Theta_{1}\right|,\,\left|\Delta\Theta_{2}\right|\right), \qquad \left|\Delta S_{A,\mathrm{int}}\right| > \max\left(\left|\Delta S_{A,1}\right|,\,\left|\Delta S_{A,2}\right|\right), \qquad \left|\Delta\sigma_{1,\mathrm{int}}\right| > \max\left(\left|\Delta\sigma_{1,1}\right|,\,\left|\Delta\sigma_{1,2}\right|\right),$$

where subscript 1 corresponds to the mixed layer above the interface and subscript 2 corresponds to the mixed layer below the interface.
The third step is to limit the interface height $h_{\mathrm{int}}$. The interface height should be smaller than the heights $h_{\mathrm{ml}}$ of the mixed layers directly above and below the interface, $h_{\mathrm{int}} < h_{\mathrm{ml},1}$ and $h_{\mathrm{int}} < h_{\mathrm{ml},2}$. This condition has to be met in order to ensure that the interface is relatively thin compared to the mixed layers surrounding it. Furthermore, the algorithm removes all interfaces with conservative temperature or absolute salinity inversions to make sure that it only detects step-like structures that are associated with the presence of thermohaline staircases.
The fourth step is to determine the double-diffusive regime (salt-fingering or diffusive) of each interface. When both conservative temperature and absolute salinity of the mixed layers above and below the interface increase downward, the interface belongs to the diffusive regime. When both conservative temperature and absolute salinity of the mixed layers above and below the interface both increase upward, the interface is classified as the salt-fingering regime.
Finally, only vertical sequences of at least two interfaces in the same double-diffusive regime are selected, where the interfaces should be separated from each other by only one mixed layer. This way, most thermohaline intrusions are removed, as these are characterised by alternating mixed layers in the diffusive and salt-finger regimes. Furthermore, the algorithm removes salt-fingering interfaces and diffusive-convective interfaces outside their favourable Turner angle $Tu$, a parameter used to describe the local stability of an inviscid water column. Interfaces with salt-fingering characteristics should correspond to Turner angles of $45^\circ < Tu < 90^\circ$ and interfaces with diffusive-convective characteristics should correspond to Turner angles of $-90^\circ < Tu < -45^\circ$.
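A much-simplified sketch of the first step of this algorithm (mixed-layer detection by a threshold on the vertical density gradient) is given below; the function name, threshold values and synthetic profile are illustrative assumptions, not the published implementation.

```python
import numpy as np

def find_mixed_layers(pressure, sigma, threshold=5e-4):
    """Flag samples belonging to weakly stratified (mixed) layers by
    thresholding the vertical potential-density gradient.
    pressure [dbar], sigma [kg m^-3]; threshold [kg m^-3 per dbar] is an
    assumed illustrative value."""
    dsigma_dp = np.gradient(sigma, pressure)
    return np.abs(dsigma_dp) < threshold

# Synthetic staircase profile: two mixed layers separated by one interface
p = np.linspace(200.0, 400.0, 201)
sigma = 27.0 + 0.002 * p + 0.05 * np.tanh((p - 300.0) / 2.0)  # step at 300 dbar
mask = find_mixed_layers(p, sigma, threshold=3e-3)
print(f"{mask.sum()} of {mask.size} samples flagged as mixed-layer points")
```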
Staircase origin
The origin of thermohaline staircases relies on double diffusive convection, and specifically on the fact that heat diffuses more readily than salt. However, there is still much debate on which specific mechanism of layering plays a role. Six possible mechanisms are described below.
Collective instability mechanism
This mechanism, involving collective instability, relies on the idea that after a period of active internal wave motion, layers appear. This hypothesis was motivated by laboratory experiments in which staircases formed from the initially uniform temperature and salinity gradients. Growing waves might overturn and generate the stepped structure of thermohaline staircases.
Thermohaline intrusion mechanism
This hypothesis states that staircases represent the final stage in the evolution of thermohaline intrusions. Intrusions can evolve either to a state consisting of alternating salt-finger and diffusive interfaces separated by convecting layers, which is common at a high density ratio $R_\rho$, or to a series of salt-finger interfaces when the density ratio is low. This proposition relies on the presence of lateral property gradients to drive interleaving. This mechanism, where thermohaline intrusions are transformed into staircases, is likely to exist in strong temperature-salinity fronts.
Metastable equilibria mechanism
A different theory states that staircases represent distinct metastable equilibria. It is suggested that finite amplitude perturbations to the gradient state force the system into a layered regime where it can remain for long periods of time. Large initial perturbations to the gradient state make the transition to the staircase more likely and accelerate the process. Once the staircase is created, the system becomes resilient to further structural changes.
Applied flux mechanism
The applied flux mechanism was mainly tested in laboratory experiments, and is most likely at work in cases when layering is caused by geothermal heating. When a stable salinity gradient is heated from below, top-heavy convection will take place in the lower part of the water column. The well-mixed convecting layer is bounded from above by a thin high-gradient interface. By a combination of molecular diffusion and entrainment across the interface, heat is transferred upward from the convecting layer. The molecular transfer of heat exceeds that of salt, resulting in a supply of buoyancy to the region immediately above the interface. This leads to the formation of a second convecting layer. The process can repeat itself over and over, which results in a sequence of mixed layers separated by sharp interfaces, a thermohaline staircase.
Negative density diffusion
In salt-fingering staircases, vertical temperature and salinity fluxes are downgradient, while the vertical density flux is upgradient. This is explained by the fact that the potential energy released in transporting salt downward must exceed that expended in transporting heat upward, resulting in a net downward transport of mass. This negative diffusion sharpens the fluctuations and therefore suggests a means for generating and maintaining staircases.
Instability of flux-gradient laws
This mechanism is based on negative density diffusion as well. However, instead of combining temperature and salinity into a single density term, it treats both density components individually. In a publication by Radko, it is shown that the formation of steps in numerical models is caused by the parametric variation of the flux ratio $\gamma$ as a function of the density ratio $R_\rho$, leading to an instability of equilibria with uniform stratification. These unstable perturbations continuously grow in time until well-defined layers are formed.
Observations
Two types of staircases exist: salt-fingering staircases, where both temperature and salinity of the mixed layers decrease with pressure (and therefore with depth); and diffusive staircases, where both temperature and salinity of the mixed layers increase with pressure (so with depth).
Salt-fingering staircases
Most observations of salt-fingering staircases have come from three locations: the western Tropical Atlantic, the Tyrrhenian Sea, and the Mediterranean outflow. In these regions the density ratio $R_\rho$ has a very low value, which appears to be a condition for staircase formation. No staircases have been reported for values above 2. For values below 1.7, the step-like structures in vertical temperature and salinity profiles become apparent. Moreover, the spatial pattern of staircases is very sensitive to $R_\rho$: with decreasing $R_\rho$, the height of steps sharply increases and the staircases become more pronounced. The importance of the density ratio for the formation is a sign that staircases are a product of double diffusive convection.
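The density ratio itself is straightforward to compute from vertical gradients; the sketch below evaluates $R_\rho = (\alpha\,\partial\Theta/\partial z)/(\beta\,\partial S/\partial z)$ for assumed, salt-finger-favourable gradients.

```python
def density_ratio(alpha, d_theta_dz, beta, d_s_dz):
    """Density ratio R_rho = (alpha * dTheta/dz) / (beta * dS/dz)."""
    return (alpha * d_theta_dz) / (beta * d_s_dz)

# Warm, salty water above colder, fresher water (both gradients positive
# upward); coefficient and gradient values are illustrative assumptions.
alpha = 2.0e-4    # thermal expansion coefficient [1/K]
beta = 7.5e-4     # haline contraction coefficient [kg/g]
R = density_ratio(alpha, d_theta_dz=5.0e-3, beta=beta, d_s_dz=1.0e-3)
print(f"R_rho = {R:.2f}: pronounced staircases expected for values below ~1.7")
```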
In the Tyrrhenian Sea, thermohaline staircases due to salt fingers are observed. The step-like shape is visible in the vertical temperature and salinity profiles. Staircases in the Tyrrhenian Sea show a very high stability in space and time. The weak deep circulation in this area might be an explanation for this stability.
Diffusive staircases
Diffusive staircases are found at higher latitudes. In the Arctic Ocean, warm and salty water from the Atlantic enters the Arctic basin and subducts beneath the colder and fresher waters of the upper Arctic. In some regions, Pacific waters also sit below the mixed layer and above the Atlantic layer. A thermocline is found at the top of the Atlantic Water layer. In that region, temperature and salinity increase with depth and step-like patterns are observed in vertical temperature and salinity profiles. These staircases mediate the heat transport from the warm water of Atlantic origin to the Arctic halocline and therefore serve as an important process in determining the heat flux from the Atlantic Water upward to the sea ice. Staircases in the Arctic are characterised by much smaller steps than salt-fingering staircases.
On a much smaller scale, diffusive staircases have also been observed in low- and mid-latitudes. For example, Lake Kivu and Lake Nyos show characteristic staircase patterns. In these salt-water lakes, geothermal springs supply heat at the bottom resulting in the diffusive background stratification.
See also
Salt fingering
Double diffusive convection
Oceans portal
References
Patterns
Physical oceanography | Thermohaline staircase | [
"Physics"
] | 2,142 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
67,696,055 | https://en.wikipedia.org/wiki/.sexy | .sexy is a generic top-level domain owned by Uniregistry. Delegated on 14 November 2013, .sexy was the subject of controversy due to opposition from the government of Saudi Arabia and privacy concerns regarding registering domains.
History
.sexy, along with .tattoo, was one of the first two gTLDs launched by Uniregistry on 14 November 2013. Its sunrise period, during which pre-existing trademark holders may register URLs prior to general availability to prevent domain squatting, lasted from 11 December 2013 to 9 February 2014, and it entered general availability on 25 February 2014. .sexy was one of the first hundred gTLDs to be delegated. Prior to its release, .sexy was one of many announced gTLDs, variously reported as 31 and over 160, that the Communication and Information Technology Commission of the government of Saudi Arabia objected to; other TLDs found objectionable included .gay, .casino, .sucks, .wine, and .bible.
On the first day of .sexy's general availability, around 2,000 domain names were registered, which commentators described as a "disappointing" low showing. The domain had a comparable number of first-day registrations to unpopular domains from Uniregistry's competitor Donuts such as .gallery and .estate. .sexy's launch was hampered by a lack of support from and availability at major domain name registrars such as GoDaddy, based on privacy concerns around Uniregistry's demands that registrants inform Uniregistry of their real names and identities to purchase domains. A number of pre-orders of .sexy domains were also stymied by domain name collision, the phenomenon where a private (intranet) domain name system queries a public one, and by names that had been pre-ordered being reserved by Uniregistry.
In 2015, a survey by ICANN concluded networks in Iran were systematically blocking .sexy domains. In 2017, Uniregistry CEO Frank Schilling increased the price of .sexy and a number of other domains due to low uptake. Schilling stated that the costs of running a TLD demanded that low-use TLDs, such as .sexy, be sold at higher price points in order to turn a profit.
Usage
According to Schilling, .sexy domains are intended "for fun, for fashion, for recreation, as a novelty, [and] for risqué content". .sexy has also been associated with cybersquatting, with cybersquatters purchasing .sexy domains for major companies who rejected having their trademarks associated with adult industries; such misuse was predicted prior to the domain's release, with commentators describing them as potentially costing companies "serious money". Explicit content is prohibited on the home pages of websites with .sexy domains, although sites are permitted to have a landing page with a warning button that needs to be clicked through to access such content.
There are 10,203 registered .sexy domains, making up 0.03% of all domains. NameCheap holds the majority of the .sexy market share with 65.8%, although 14% of .sexy domains are registered by registrars outside the top ten. The domain's lack of popularity was described by domain expert Kevin Murphy as a failure of Schilling's own practices.
See also
Internet pornography
References
Top-level domains
Generic top-level domains
2013 introductions
Sexuality and computing | .sexy | [
"Technology"
] | 704 | [
"Computing and society",
"Sexuality and computing"
] |
67,696,150 | https://en.wikipedia.org/wiki/Lagrangian%20ocean%20analysis | Lagrangian ocean analysis is a way of analysing ocean dynamics by computing the trajectories of virtual fluid particles, following the Lagrangian perspective of fluid flow, from a specified velocity field. Often, the Eulerian velocity field used as an input for Lagrangian ocean analysis has been computed using an ocean general circulation model (OGCM). Lagrangian techniques can be employed on a range of scales, from modelling the dispersal of biological matter within the Great Barrier Reef to global scales. Lagrangian ocean analysis has numerous applications, from modelling the diffusion of tracers, through the dispersal of aircraft debris and plastics, to determining the biological connectivity of ocean regions.
Techniques
Lagrangian ocean analysis makes use of the relation between the Lagrangian and Eulerian specifications of the flow field, namely

$$\mathbf{u}\left(\mathbf{X}(\mathbf{a}, t), t\right) = \left.\frac{\partial \mathbf{X}}{\partial t}\right|_{\mathbf{a}},$$

where $\mathbf{X}(\mathbf{a}, t)$ defines the trajectory of a particle (fluid parcel), labelled $\mathbf{a}$, as a function of the time $t$, and the partial derivative is taken for a given fluid parcel $\mathbf{a}$. In this context, $\mathbf{a}$ is used to identify a given virtual particle: physically it corresponds to the position through which that particle passed at the initial time $t_0$. In words, this equation expresses that the velocity of a fluid parcel at the position along its trajectory that it reaches at time $t$ can also be interpreted as the velocity at that point in the Eulerian coordinate system. Using this relation, the Eulerian velocity field can be integrated in time to trace a trajectory,

$$\mathbf{X}(\mathbf{a}, t) = \mathbf{X}(\mathbf{a}, t_0) + \int_{t_0}^{t} \mathbf{u}\left(\mathbf{X}(\mathbf{a}, \tau), \tau\right)\, d\tau,$$

where $\tau$ is a dummy integration variable. In this equation, $\mathbf{u}$ is continuous in space: for the integration of trajectories in a Lagrangian ocean model, the velocity field must be evaluable at any point in space. Spatial interpolation is used so that the velocity field can be evaluated at points inside the grid cells outputted by OGCMs.
Time Integration
In some cases, the time integration is performed using explicit time-stepping methods. Lagrangian ocean analysis codes may make use of, for instance, an Euler method, or a higher order method, such as Runge-Kutta 4 or Runge-Kutta 4-5. If the timestep of the integration method is shorter than the time resolution of the Eulerian velocity field used as an input, then the velocity field must be interpolated in the temporal domain, so that there is a velocity value to be integrated for each time.
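The snippet below is a generic sketch of such explicit time-stepping: a fourth-order Runge-Kutta integrator applied to a steady, analytically defined velocity field standing in for interpolated OGCM output. It is not the API of any particular Lagrangian package.

```python
import numpy as np

def rk4_step(position, t, dt, velocity):
    """One fourth-order Runge-Kutta step of dx/dt = u(x, t)."""
    k1 = velocity(position, t)
    k2 = velocity(position + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(position + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(position + dt * k3, t + dt)
    return position + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Steady solid-body rotation as a stand-in for an interpolated velocity field
def velocity(x, t):
    return np.array([-x[1], x[0]])

x = np.array([1.0, 0.0])
for step in range(628):          # ~one full rotation with dt = 0.01
    x = rk4_step(x, step * 0.01, 0.01, velocity)
print(f"final position: {x}, radius error: {abs(np.hypot(*x) - 1):.2e}")
```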
To ensure volume conservation in integrating the trajectories, symplectic methods can be used. These methods are generally implicit in nature, requiring extra computation when compared to explicit methods.
Alternatively, if each component of the flow velocity within a spatial grid is assumed to vary linearly along its axis, trajectories can be analytically calculated. If the velocity field is steady-state, then trajectories can be treated as streamlines, and considered together in bundles known as stream tubes, which bound fluid flow in different parts of the spatial domain. If the velocity field provided as the starting point of the Lagrangian analysis is a divergence-free flow, the volume of fluid moving through a stream tube is conserved throughout the stream tube. To show this mathematically, the starting point is the condition that the divergence of the velocity field is zero,

$$\nabla \cdot \mathbf{u} = 0.$$

Integrating this over the volume of a stream tube, the divergence theorem can be used to show that

$$\int_V \nabla\cdot\mathbf{u}\;dV = \oint_S \mathbf{u}\cdot\hat{\mathbf{n}}\;dS = 0,$$

where $\hat{\mathbf{n}}$ is the normal to the surface of the stream tube, $V$ denotes the entire volume of the stream tube and $S$ its surface. If the streamlines and pathlines (trajectories) are equivalent, as is the case for steady-state (non time-evolving) flows, then the walls of the stream tube do not contribute to the integral, as the flow cannot cross them. Thus, only the ends, $S_1$ and $S_2$, will contribute to the integral, so

$$\int_{S_1} \mathbf{u}\cdot\hat{\mathbf{n}}\;dS = -\int_{S_2} \mathbf{u}\cdot\hat{\mathbf{n}}\;dS.$$
Physically, this equation expresses that the fluid fluxes passing through the two ends of the stream tube are equal in magnitude, demonstrating the volume conservation. The equation also shows that the area of each end of the stream tube is inversely proportional to the speed of the normal flow through it. These features of the analytical method of calculating trajectories lend themselves to Lagrangian analyses primarily concerned with the advective (as opposed to diffusive) component of the flow.
There exists a caveat to this approach: given that the velocity fields considered can be time-evolving, the equivalence between stream tubes and material pathways may not hold. Lagrangian ocean models can address this formalism by considering the flow field to be a piecewise function in the temporal domain, where each sub-function is a steady-state velocity field. A Boussinesq model, in which flow is incompressible and thus non-divergent, can be used to generate the velocity field used as an input for the Lagrangian analysis code to ensure volume is conserved when using this method.
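Under the linear-variation assumption, the within-cell trajectory equation dX/dt = u0 + s(X - x0) has an exact exponential solution, sketched below for one zonal grid cell; the cell size and velocities are illustrative assumptions.

```python
import numpy as np

def analytic_cell_crossing(x, x0, dx, u0, u1, t):
    """Exact trajectory inside one grid cell where the zonal velocity varies
    linearly from u0 at x0 to u1 at x0 + dx, i.e. dX/dt = u0 + s * (X - x0)
    with s = (u1 - u0) / dx. Returns the position after time t."""
    s = (u1 - u0) / dx
    if abs(s) < 1e-12:                     # uniform velocity: linear motion
        return x + u0 * t
    u_local = u0 + s * (x - x0)            # velocity at the starting point
    return x0 + (u_local * np.exp(s * t) - u0) / s

# Particle entering a 1 km cell where u accelerates from 0.1 to 0.2 m/s
print(analytic_cell_crossing(x=0.0, x0=0.0, dx=1000.0, u0=0.1, u1=0.2, t=3600.0))
```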
Incorporating Diffusion
For a Lagrangian ocean analysis code to include the effects of molecular diffusion, or other small-scale mixing which may be modelled as a diffusive process, stochastic terms must be added to the trajectory computations. Stochastic terms may be added in accordance with a stochastic differential equation (SDE) derived from the tracer diffusion equation in the form of the Fokker-Planck equation. This method requires that a diffusion tensor be provided.
Rather than using an SDE derived with a diffusion tensor, Lagrangian ocean models may instead find an SDE based on how well the resulting diffusivity statistics fit either observations or models built with a finer resolution. This method involves the use of a Markov chain; the order of the Markov chain used is another point where different Lagrangian analysis codes differ.
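As a concrete sketch, the snippet below applies an Euler-Maruyama step with a spatially constant diffusivity K, in which case the Fokker-Planck-consistent SDE reduces to dX = u dt + sqrt(2 K dt) ξ and no gradient-of-K drift term is needed; the diffusivity, time step and release setup are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def euler_maruyama_step(x, t, dt, velocity, K):
    """Advection-diffusion step dX = u dt + sqrt(2 K dt) * xi, with xi a
    standard normal draw; valid as written only for constant diffusivity K."""
    xi = rng.standard_normal(x.shape)
    return x + velocity(x, t) * dt + np.sqrt(2.0 * K * dt) * xi

# 1000 particles released at the origin in a quiescent ocean (u = 0):
# their variance should grow as 2 * K * t in each dimension.
K, dt = 100.0, 360.0                      # eddy diffusivity [m^2/s], step [s]
x = np.zeros((1000, 2))
for step in range(240):                   # 24 hours of steps
    x = euler_maruyama_step(x, step * dt, dt, lambda x, t: 0.0 * x, K)
print(f"sample variance: {x.var(axis=0)}, expected: {2 * K * dt * 240:.0f}")
```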
Online and Offline Analysis
Lagrangian ocean analysis codes can be characterised as online or offline. Online codes work in tandem with the Eulerian model that outputs the velocity field: each time the Eulerian model updates, the trajectories are timestepped using the new velocity information. The Lagrangian analysis packages available with some OGCMs are examples of online Lagrangian ocean analysis codes. Offline codes calculate trajectories using stored velocity fields outputted by Eulerian models at a prior stage. As a result of this, offline models may be used to calculate trajectories forwards or backwards through time; the latter may be helpful in determining the origins of water masses. As of 2018, there are no examples of online models that use the tracer equation to compute diffusive effects.
Applications
One strength of Lagrangian ocean models is that they can be less computationally expensive than calculating the advection-diffusion of a tracer concentration within the Eulerian paradigm: for each timestep, the Lagrangian code needs only evaluate the position of each virtual particle, as opposed to the Eulerian model, which must explicitly calculate the tracer concentration in every grid cell. In the case of offline models, trajectories may be advected backwards in time, which can be useful in finding sources of the material being tracked.
Lagrangian ocean analysis has found use in tracking water masses, for instance tracking the source and pathways of the Subantarctic Mode Water, as well as how its temperature and salinity evolve along its path. Another application of Lagrangian modelling techniques is in simulating the dispersal of biological matter in different ocean regions to gauge their biological connectivity.
Lagrangian ocean analysis has also been used to model the dispersal of materials originating in human activities, for example in tracking the spread of oil following the 2010 Deepwater Horizon oil spill and in attempts to model dispersal of Debris from flight MH370 in the Indian Ocean since 2014. Furthermore, Lagrangian ocean analysis has played a role in the tracking of the transport of plastics across the global ocean, for instance helping researchers estimate the fraction of plastic debris that ends up at the coast.
Examples
While small-scale Lagrangian analysis codes have been written for many individual research projects, there exists a smaller number of community codes that can implement Lagrangian ocean models on a global scale. The differences between 10 such community codes are illustrated in the Venn diagram, where they are grouped in terms of whether they are offline, whether they include a stochastic term to model diffusion or small-scale turbulent mixing, and whether they compute trajectories analytically. All of the codes in the diagram that lie outside of the analytic-trajectories circle employ explicit integration methods to compute the trajectories. Ariane, for example, is a code that runs offline, calculates trajectories with the analytic stream-tube method and does not model diffusive effects. By contrast, Parcels computes particle trajectories through explicit time-stepping and can include stochastic terms if the user desires; it is also offline. A caveat to this is that Parcels' integration framework is customisable, and could be reprogrammed to instead use the analytical method to calculate trajectories.
See also
Lagrangian and Eulerian specification of the flow field
Lagrangian analysis
Lagrangian particle tracking
References
Physical oceanography | Lagrangian ocean analysis | [
"Physics"
] | 1,898 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
67,697,360 | https://en.wikipedia.org/wiki/Webwereld | Webwereld was a Dutch online newspaper about IT by the International Data Group. It was the oldest Dutch technology website until it was discontinued in 2020.
History
Webwereld was founded in 1995 by Oscar Kneppers, who got the idea after visiting Silicon Valley in the summer of that year. Another Dutch tech website, Tweakers.net, was founded in 1998 after Femme Taken concluded that the moderation on Webwereld was too strict.
In August 2011, Webwereld published court documents from the Apple Inc. v. Samsung Electronics Co. lawsuit. In October 2011, Webwereld started Lektober (a portmanteau of the Dutch words for leak and October), during which it disclosed, on every day of the month, a security bug in the website of a well-known Dutch organization. In 2011, Trans Link Systems considered suing Webwereld because it sold RFID writers that could be used to travel for free on Dutch public transport.
Journalists
Brenno de Winter
References
External links
Computing websites
Dutch-language websites
Dutch news websites | Webwereld | [
"Technology"
] | 210 | [
"Computing websites"
] |
66,207,602 | https://en.wikipedia.org/wiki/Nidufexor | Nidufexor (LMB-763) is a drug which acts as a partial agonist of the farnesoid X receptor (FXR). It has reached Phase II clinical trials for the treatment of diabetic nephropathy and nonalcoholic steatohepatitis.
See also
GSK-4112
SR9009
SR9011
References
Farnesoid X receptor agonists | Nidufexor | [
"Chemistry"
] | 89 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
66,212,396 | https://en.wikipedia.org/wiki/Nukegate%20scandal | The Nukegate scandal was a political and legal scandal that arose from the abandonment of the Virgil C. Summer nuclear expansion project in South Carolina by South Carolina Electric & Gas (SCE&G) and the South Carolina Public Service Authority (known as Santee Cooper) in 2017. It was the largest business failure in the history of South Carolina. Before its termination, the expansion was considered the harbinger of a national nuclear renaissance. Under joint ownership, the two utilities collectively invested $9 billion into the construction of two nuclear reactors in Fairfield County, South Carolina from 2008 until 2017. The utilities were able to fund the project by shifting the risk onto their customers using a state law that allowed utilities to raise consumers' electricity rates to pay for nuclear construction.
In 2008, the utilities contracted with Westinghouse to build two AP1000 nuclear reactors for an estimated cost of $9.8 billion. The AP1000 design was unique because it relied on pre-fabricated parts which allowed for modular construction. In 2013, construction began at V. C. Summer. However, numerous delays occurred from 2014 to 2017 due to manufacturing errors and incompetence. In 2017, the estimated construction cost had grown to $25 billion. Westinghouse, hobbled by the costs of the V. C. Summer expansion and a separate project in Georgia, filed for Chapter 11 bankruptcy in March 2017. Several months later the project was abandoned by Santee Cooper and SCE&G's parent company, SCANA. Ratepayers continue to pay increased rates for the expansion despite its termination.
The economic losses and subsequent public outrage drastically altered the future of both utilities. The total cost paid by both utilities in legal settlements to ratepayers and shareholders exceeded a billion dollars. The stock of SCANA, the only Fortune 500 company based in South Carolina, dramatically fell. Both SCANA and SCE&G merged with Dominion Energy in 2019. Until the COVID-19 pandemic, the largest issue debated in the South Carolina General Assembly was whether or not to privatize Santee Cooper. In 2021, the General Assembly ultimately decided to reform the organization instead. Santee Cooper will remain under state ownership. However, its board will undergo change.
As a result of Nukegate, two SCANA executives, CEO Kevin Marsh and Vice President Stephen Byrne, pleaded guilty to fraud after being charged with the crime by the U.S. Attorney's office. Their crimes centered on their efforts to hide the construction delays from shareholders and regulators. The construction of the two units needed to be finished by 2020 in order to qualify for over $2 billion in federal tax credits, and the viability of the project relied on receiving the tax credits. However, both men admitted that they knew the project was not going to be completed in time to qualify for the credits and that they hid that information from regulators and shareholders. Both men have also been charged by the U.S. Securities and Exchange Commission with securities fraud. On October 7, 2021, Marsh was sentenced to two years in prison. Byrne was sentenced to fifteen months in prison on March 8, 2023. As of March 2023, two executives at Westinghouse have also been charged with crimes. However, due to Santee Cooper's limited involvement, no executives from that organization were charged with any crimes.
Construction
Troubled construction
In May, 2008, SCE&G (a subsidiary of SCANA) and Santee Cooper announced that they had signed an engineering, procurement and construction contract with Westinghouse to build two AP1000 nuclear reactors. The CEO of Santee Cooper cited the state's projected growth as a determining factor for increasing the utility's energy capacity. Both utilities were joint owners (SCE&G owned 55 percent, Santee Cooper owned 45 percent) in the project and shared operating costs. The two reactors, with an estimated cost of $9.8 billion, would be the first built in the United States in the last thirty years and were heralded as leading the United States into a new nuclear renaissance.
The AP1000 design was seen as novel because of its simplified structure and use of pre-fabricated nuclear reactor parts that allowed for modular construction. Construction began on the units in 2013 after the design was approved by the Nuclear Regulatory Commission. However, contractors lacked the requisite experience because the nuclear power construction industry had stagnated for thirty years. The stagnation also led to a dearth in adequate supply chains. Ultimately, Westinghouse had to take over the construction of the units itself, something the company was not qualified for.
Westinghouse's management of the construction proved to be calamitous. The construction site employed five thousand laborers, who built two new concrete plants on the site to continuously pour concrete as well as a seven-story-tall building to assemble structural modules. But the site lacked a fully-integrated construction schedule and the pre-fabricated nuclear reactor parts that arrived on-site had been manufactured incorrectly, which caused significant delays. In 2008, the initial cost estimate of the expansion was $9.8 billion, but by 2017 it had ballooned to $25 billion.
During the construction process, Westinghouse and other contractors at V. C. Summer violated state law by having unlicensed workers create mechanical and electrical blueprints without having a professional engineer sign off on them. SCANA received a memo from Westinghouse's deputy counsel which stated that the contractors did not have to follow South Carolina law because the company's federal license superseded the state's requirements. Executives at Santee Cooper claim they were not made aware of the Westinghouse memo. The legality of that memo is in question. Nonetheless, the blueprints were often faulty and resulted in incorrect parts, thousands of engineering changes, and billions of dollars in wasted money.
2015 audit
Santee Cooper and SCE&G hired Bechtel to audit the project in 2015. Bechtel's draft audit stated that the nuclear reactors would not be finished in time to collect the $2 billion in federal tax credits which the project relied on. However, in Bechtel's final report released in February 2016, the previous finding was removed from the audit at the request of an attorney working for both utilities. Relying on the impression that the reactors would qualify for the tax credits, the state Public Service Commission approved an $800 million increase in the project's budget as well as a fixed-price contract with Westinghouse.
Westinghouse bankruptcy and the project's demise
On March 31, 2017, Westinghouse filed for Chapter 11 bankruptcy due to the costs incurred from both the V. C. Summer expansion as well as the construction of two additional units in Burke County, Georgia. The bankruptcy was seen as a huge blow for the nuclear energy industry. At the time, construction on both units was only 30 percent complete but the majority of the reactor parts were on-site. Santee Cooper decided to halt construction against SCANA's wishes. The utilities announced that the halt in construction was due in part to a change in the energy industry brought on by more energy efficient technology and the natural gas boom.
In July 2017, the companies announced that they had made an agreement with Toshiba, Westinghouse's parent company, to release Westinghouse from its prior obligations for $2.2 billion. Further, in 2020, Santee Cooper and Westinghouse announced a separate agreement to sell the remaining reactor parts and to share in the profits.
At the point of termination, SCE&G and Santee Cooper had invested $9 billion into the project. The announcement sent SCANA's stock reeling. The project became known as the largest business failure in the state's history. The subsequent federal investigation of the failure led to it being nicknamed "Nukegate", a phrase derived from the Watergate scandal.
Base Load Review Act
The failure was made possible by the Base Load Review Act that was passed by the South Carolina General Assembly in April, 2007. The act made it easier for electric utilities to charge ratepayers for the construction of nuclear reactors. The bill, sponsored by state senator Glenn McConnell, essentially allowed the utilities to shift the risk of the construction to ratepayers. Utilities would be able to file a request with the Public Service Commission to raise rates for plant construction. If the commission found the application to be "prudent", the commission would issue a project development order allowing the utility to increase rates. However, the statute did not define what was or was not "prudent". Critics of the act argued that "any management decision by the utility that impact[ed] the cost and schedule of the project" essentially had to be "deemed prudent by the Public Service Commission if it advance[d] the completion of the project", and that this resulted in "cost overruns and schedule delays [becoming] a natural unintended consequence" of the act.
Governor Mark Sanford refused to sign the bill but after a five-day moratorium, the bill became law on May 3, 2007. Sanford's chief of staff later said that the Base Load Review Act "was probably the clearest case [he] could ever see of a special interest using all of its power and leverage to get something passed". From its inception to its enactment, the bill's legislative process was considered remarkably fast.
From 2008 to 2016, SCE&G sought and received nine utility rate hikes to pay for the nuclear expansion. By 2017, SCE&G ratepayers had paid an additional $1.4 billion due to the hikes. A typical SCE&G consumer paid an extra $27 per electricity bill for the expansion, and a typical Santee Cooper consumer paid an extra $6.50. By 2018, South Carolina utility prices were among the highest in the country. This was made easier because in 2004 the General Assembly had gotten rid of the state's consumer advocate.
The South Carolina Senate unanimously repealed the act on May 9, 2018. In June 2018, Governor Henry McMaster's veto of the repeal was overridden by the General Assembly.
Legal ramifications
Stakeholder lawsuits
Both utilities settled lawsuits as a result of the expansion's failure. Attorneys representing SCE&G ratepayers and shareholders settled with the utility for $392.5 million ($200 million would be for ratepayers and $192.5 million for shareholders). Santee Cooper settled with its ratepayers and local electric co-operatives for $520 million. And in December 2020, the utility settled with investors who purchased bonds from the utility for $2 million.
In 2020, a judge struck down the city of Goose Creek's attempt to annex and then take over the power supply of a local aluminum smelter. The judge stated that it violated state law granting Santee Cooper exclusive service over the aluminum smelter's site. Commentators and lawmakers cited the Nukegate scandal as a reason why the city utility should be allowed to supply the aluminum smelter's electricity.
SEC lawsuit
In March 2020, the U.S. Securities and Exchange Commission sued SCANA, SCE&G, Kevin Marsh (SCANA's CEO at the time), and Steve Byrne (a former SCANA vice president) for repeatedly deceiving investors. The complaint alleged that the parties misled investors by claiming that the project would qualify for more than $1 billion in federal tax credits. On December 2, 2020, the SEC announced that SCANA and SCE&G agreed to settle the claims against them for $112.5 million in disgorgement fees as well as a $25 million penalty to be paid by SCANA (now Dominion Energy). The litigation against Marsh and Byrne is ongoing.
Criminal charges
SCANA
Federal prosecutors probed the V. C. Summer failure from 2017 to 2020. In July 2020, Byrne admitted to taking part in a conspiracy to hide damaging information from regulators as well as the public and therefore defrauding SCE&G customers. On November 24, 2020, Marsh announced he would also plead guilty to federal fraud charges. In December 2020, Marsh also pled guilty to an additional third charge: conspiracy to commit wire and mail fraud. Both men admitted to knowing that the project would not qualify for crucial federal tax credits with a deadline in 2020, and that they hid this information from shareholders.
Both men also admitted to providing false information in "earning calls, presentations and press releases" in order to benefit SCANA. They were made aware in 2015 that only 8% of the expansion had been completed and therefore V. C. Summer was unlikely to qualify for direly needed federal tax credits that had a 2020 deadline. Neither shared this information with shareholders or state regulators. Additionally, Byrne and Marsh ensured that the Bechtel report sent to Santee Cooper lacked damaging information.
The SEC has charged both Byrne and Marsh with securities fraud. The complaint alleges that the two misled investors about the project's progress and its eligibility for federal tax credits.
Marsh was sentenced to two years in a federal prison on October 7, 2021 on charges of conspiracy to commit mail and wire fraud, and on October 11, 2021 he began to serve a two year state sentence concurrently as part of a plea deal. Byrne was sentenced to fifteen months in prison on March 8, 2023.
Westinghouse
On June 10, 2021, a former Westinghouse vice president, Carl Churchman, pled guilty to lying to federal investigators. Churchman had told the FBI that he had no role in relaying false completion projections to SCANA executives. He faces up to five years in prison. Assistant U.S. Attorney Winston Holiday has asserted that Churchman was a key witness in the ongoing investigation. In August 2021, another former Westinghouse executive, Jeffery A. Benjamin, was indicted for fraud and conspiracy.
Consequences for SCANA and Santee Cooper
SCANA-Dominion merger
SCANA faced virulent criticism following the collapse of the V. C. Summer expansion. The company was criticized for not having anyone with nuclear power experience on its board. The board itself was further criticized for either neglecting its financial oversight of the V. C. Summer project or for overseeing the incompetent management of the project. It was determined that throughout most of the project's existence, executives at SCANA knew the project's viability was at risk, but the company lacked the oversight structures necessary to manage the project.
In 2018, Dominion Energy submitted a bid to purchase SCANA and SCE&G. The company offered and advertised a refund of $1,000 to customers. However, lawmakers realized that Dominion wanted to then recoup that money with higher rates over the next decade. In December 2019, Dominion Energy purchased SCANA and SCE&G with an updated bid that replaced the $1,000 checks with lower rates for customers. Customers will continue to pay an extra $2.3 billion to cover the expansion costs over the next two decades.
In 2021, Dominion Energy settled with the state tax agency on unpaid taxes owed due to the unfinished nuclear project at a cost of $165 million. As part of that settlement, the State and Dominion Energy agreed that Dominion would offset approximately a third of the unpaid taxes by turning over more than 2,900 acres of land which will ultimately become six new state parks.
Fate of Santee Cooper
2019-2020
Following the V. C. Summer failure, the predominant issue facing the South Carolina General Assembly from 2018 until the COVID-19 pandemic in 2020 was whether to sell Santee Cooper or to reform the utility's management. The sale of the utility is favored by Governor McMaster, who has called Santee Cooper a "rogue agency" due to its independence and financial problems. But the utility's $7 billion debt has complicated proposals. The legislature passed a law at the end of the 2020 session prohibiting Santee Cooper from "entering into agreements that could make it harder for the General Assembly to sell the state-owned utility" in 2021. In November 2020, Hugh Leatherman, the chairman of the senate finance committee, called for the chairman of Santee Cooper to resign after the utility entered into a $638 million debt deal; Leatherman stated the deal may have violated state law. From at least 2017 to April 2021, the Santee Cooper board was without a permanent chairman.
Several companies submitted bids to purchase the utility and in February 2020, the South Carolina Department of Administration chose NextEra Energy of Florida as the recommended bidder. However, Santee Cooper submitted a separate plan to the general assembly to save ratepayers $2.3 billion over the next twenty years by pivoting from coal power plants towards renewable energy. The utility also advocated for reform of its board to bring in more expertise, and for a more open rate-setting and construction process.
Hugh Leatherman stated that without "meaningful reform that includes a new board and increased oversight" the only option was to divest the state from the utility. Santee Cooper consumers will continue to pay an extra 5% per electricity bill over the next twelve years to pay off the utility's debt.
2021
The future of Santee Cooper was a priority of the 2021 legislative session. In the first week of the 2021 session, the House Ways and Means Committee passed a bill creating a committee composed of members of the General Assembly to revisit a sale of Santee Cooper. The bill, which will be considered by the entirety of the House next, also includes "an amendment that would do away with NextEra as the preferred buyer" and "a provision for reforming Santee Cooper". On the Senate side of the General Assembly, the Senate Judiciary Committee, concerned about NextEra's behavior in a separate deal in Florida, requested more information from the utility concerning its bid in early January 2021. However, NextEra declined to meet with the committee.
On April 22, 2021, the South Carolina Senate voted overwhelmingly in favor of a bill that would reform Santee Cooper. Included in the bill are a timeline to replace every member of the Santee Cooper board, regulations subjecting the utility to reviews and oversight, and a ban on the utility's practice of giving executives large severance packages. In May 2021, NextEra rescinded its bid to purchase Santee Cooper. In June 2021, the General Assembly met in conference for a special session to reconcile the two reform proposals from both houses of the General Assembly. On June 8, the reform bill was signed into law, largely consisting of the Senate proposal. Santee Cooper will remain under state ownership. Further, consumers will have a greater say in rate hikes and Santee Cooper will face greater accountability from state lawmakers.
Political ramifications
Most of the original state legislators who were serving in the General Assembly when the Base Load Review Act was passed are out of office. Some of their replacements in the state legislature, who were not in office at the time of its passage, have nonetheless faced criticism in the aftermath of the V. C. Summer failure. Observers believe state senator Luke Rankin's association with Santee Cooper led him to have an unexpectedly tight primary race in 2020. Rankin was forced into a run-off, which he ultimately won. Additionally, Governor McMaster has faced criticism for how he has handled the future of Santee Cooper.
The Coastal Conservation League criticized the General Assembly's consumer-centric approach when considering the future of Santee Cooper. The organization has claimed that the legislature has failed to evaluate potential consequences to the climate potentially caused by the different proposals.
References
External sources
Base Load Review Act
Base Load Review Act Cumulative Rate Increases
Base Load Review Act Repeal Legislation
Kevin Marsh testimony to NRC in 2008
Political scandals in South Carolina
Nuclear energy
2013 in South Carolina | Nukegate scandal | [
"Physics",
"Chemistry"
] | 4,044 | [
"Nuclear energy",
"Radioactivity",
"Nuclear physics"
] |
66,213,053 | https://en.wikipedia.org/wiki/Vaccine%20ingredients | A vaccine dose contains many ingredients (such as stabilizers, adjuvants, residual inactivating ingredients, residual cell culture materials, residual antibiotics and preservatives) very little of which is the active ingredient, the immunogen. A single dose may have merely nanograms of virus particles, or micrograms of bacterial polysaccharides. A vaccine injection, oral drops or nasal spray is mostly water. Other ingredients are added to boost the immune response, to ensure safety or help with storage, and a tiny amount of material is left-over from the manufacturing process. Very rarely, these materials can cause an allergic reaction in people who are very sensitive to them.
Volume
The volume of a vaccine dose is influenced by the route of administration. While some vaccines are given orally or nasally, most require an injection. Vaccines are not injected intravenously into the bloodstream. Most injections deposit a small dose into a muscle, but some are given superficially just under the skin surface or deeper beneath the skin.
Fluenz Tetra, a live flu vaccine for children, is administered nasally with 0.1ml of liquid sprayed into each nostril. The live typhoid vaccine, Vivotif, and a live adenovirus vaccine, licensed only for military use, both come as hard gastro-resistant tablets. The Sabin oral live polio vaccine is taken as two 0.05ml drops of a bitter salty liquid that was historically added to sugar cubes when given to young children. Rotarix, a live rotavirus vaccine, has about 1.5ml of liquid containing 1g of sugar to make it taste better. The Dukoral cholera vaccine comes as a 3ml suspension along with 5.6g of effervescent granules, which are mixed and added to around 150ml water to make a sweet raspberry flavoured drink.
At the other end of the volume scale, the smallpox vaccine is a minuscule 0.0025ml droplet that is picked up when a bifurcated needle is dipped into a vial containing around 100 doses. This needle is pricked 15 times into a small area of skin, just firmly enough to produce a drop of blood. A little larger is the BCG tuberculosis vaccine, which is 0.05ml for babies and children under 12, and 0.1ml for others. This tiny dose is inserted a couple of millimetres under the skin, producing a small blanched blister. Many vaccines for intramuscular injection have 0.5ml liquid, though a few have 1ml.
Some vaccines come with the active ingredients already suspended in solution and the syringe pre-filled (e.g., Bexsero meningococcal Group B vaccine). Others are supplied as a vial of freeze-dried powder, which is reconstituted prior to administration using a diluent from a separate vial or pre-filled syringe (e.g., MMR vaccine). Infanrix hexa, the 6-in-1 vaccine that protects against six diseases, uses a combination approach: the Hib vaccine in the powder and DTPa-HBV-IPV in suspension. Alternatively two separate vaccine solutions are mixed just before administration (ViATIM hepatitis A and typhoid vaccine).
Immunogens
Many vaccines developed in the 20th century contain whole bacteria or viruses, which are either inactivated (killed), attenuated (weakened) or a strain chosen to be harmless in humans. Since these are so small, even a tiny amount of them contains a huge number of individuals.
With bacterial vaccines, the quantity can be expressed as an approximate number of bacterial cells. The live typhoid vaccine contains two billion viable cells of Salmonella enterica subsp. enterica serovar Typhi, which have been attenuated and cannot cause disease. The cholera vaccine has over thirty billion of each of four strains of Vibrio cholerae, which are inactivated by heat or formalin. The BCG vaccine, infant dose, contains between 100,000 and 400,000 colony-forming unit of live attenuated Mycobacterium bovis.
One way to count viruses is to observe their impact on host cells in tissue cultures. The two tablets of adenovirus vaccine, one with adenovirus type 4 and the other with type 7, each contain 32,000 tissue-culture infective doses (10^4.5 TCID50). The current live polio vaccine contains two serotypes of poliovirus: over 1 million tissue-culture infective doses (10^6 TCID50) of type 1 and over 630,000 (10^5.8 TCID50) of type 3. The smallpox vaccine contains between 250,000 and 1,250,000 plaque forming units of live vaccinia virus per dose. The MMR vaccine contains 1,000 TCID50 measles, 12,500 TCID50 mumps and 1,000 TCID50 rubella live attenuated viruses.
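These logarithmic TCID50 figures convert directly to absolute dose counts; a minimal sketch (Python, using only the exponents quoted above):

```python
# Convert log10 TCID50 exponents into absolute infective-dose counts,
# matching the adenovirus and polio figures quoted above.
def tcid50_count(log10_exponent: float) -> float:
    return 10 ** log10_exponent

for label, exponent in [("adenovirus type 4/7", 4.5),
                        ("polio type 1", 6.0),
                        ("polio type 3", 5.8)]:
    print(f"{label}: 10^{exponent} = {tcid50_count(exponent):,.0f} TCID50")
# adenovirus type 4/7: 10^4.5 = 31,623 TCID50 (rounded to 32,000 in the text)
```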
Many modern vaccines are made of only the parts of the pathogen necessary to invoke an immune response (a subunit vaccine), for example just the surface proteins of the virus, or only the polysaccharide coating of a bacterium. Some vaccines invoke an immune response against the toxin produced by bacteria, rather than the bacteria itself. These toxoid vaccines are used against tetanus, diphtheria and pertussis (whooping cough). If the bacterial polysaccharide coating produces only a weak immune response on its own, it may be combined with (carried on) a protein that does provoke a strong response, which in turn improves the response to the weaker component. Such conjugate vaccines may make use of a toxoid as the carrier protein. For all these, the quantity of immunogen is given by weight and sometimes expressed as international units (IU). The HPV vaccine contains 120 micrograms of the L1 capsid proteins from four types of human papillomavirus. The pneumococcal conjugate vaccine contains 32 micrograms of pneumococcal polysaccharide conjugated with CRM197 (a non-toxic mutant of diphtheria toxin).
Another variant is the RNA vaccine, which contains mRNA embedded in lipid (fat) nanoparticles. The mRNA instructs the body's own cell machinery to produce the proteins that stimulate the immune response. Comirnaty, the Pfizer-BioNTech COVID-19 vaccine, contains thirty micrograms of BNT162b2 RNA.
Excipients
Excipients are substances present in the vaccine that are not the principal immunological agents. These may be present to enhance the vaccine's potency, ensure safety, aid with storage or are left over from the manufacturing process.
Adjuvants
Live vaccines produce a strong immune response that lasts a long time, but they are not suitable for people with weakened immune systems. Other kinds of vaccine, where the pathogen has been inactivated or that contain only part of the pathogen, often alone produce a weaker response and require booster doses. In these vaccines, a substance called an adjuvant is added to make the immune response stronger and longer lasting.
The most commonly used adjuvants are aluminium salts such as aluminium hydroxide, aluminium phosphate or potassium aluminium sulphate (also simply called alum). These aluminium salts can be responsible for soreness and redness at the vaccination site but do not cause any long-term harm to human health. The amount of aluminium in these vaccines ranges from 0.125 milligrams in the pneumococcal conjugate vaccine to 0.82 milligrams in the 6-in-1 vaccine. The Meningococcal Group B vaccine contains 0.5 milligrams and in the UK Immunisation Schedule is given at the same time as the 6-in-1 vaccine at eight and sixteen weeks, giving a combined dose of 1.32 milligrams of aluminium. Aluminium salts are commonly and naturally consumed in small quantities, and the quantity in this combined vaccine dose is lower than the weekly safe intake level. Vaccines containing aluminium adjuvants cannot be frozen or allowed to freeze accidentally in a refrigerator, as this causes the particles to coagulate and damages the antigen.
Another adjuvant used in some flu vaccines is an oil-in-water emulsion. The oil, squalene, is found in all plant and animal cells, and is commercially extracted and purified from shark liver. The flu vaccine for older adults, Fluad, uses an adjuvant branded MF59, which has squalene (9.75 milligrams), citric acid (0.04 milligrams) and three emulsifiers: polysorbate 80, sorbitan trioleate, sodium citrate (1.175, 1.175 and 0.66 milligrams respectively). The H1N1 swine-flu vaccine, Pandemrix, used the adjuvant branded AS03, which has squalene (10.69 milligrams), DL-α-tocopherol (11.86 milligrams) and polysorbate 80 (4.86 milligrams).
Preservatives
Preservatives prevent the growth of bacteria and fungi, and are more commonly used in vaccines produced as multi-dose vials. They must also be non-toxic in the dose used and not adversely affect the immunogenicity of the vaccine. Thiomersal is the best known and most controversial preservative. It was phased out of UK vaccines between 2003 and 2005 and is not used in any routine vaccines in the UK. As a precaution, the US and Europe have also removed thiomersal from vaccines, despite there being no evidence of harm. The US-licensed vaccines in the routine paediatric schedule generally have no thiomersal at all; a few have only a trace amount as a residual from manufacturing (less than one microgram). This is also the case for influenza vaccines in the US that come in single-dose vials or prefilled syringes. Some influenza vaccines are also available as a multi-dose vial, and in that form contain thiomersal (24.5 micrograms of mercury).
Phenol 0.25% v/v is used in Pneumovax 23, a pneumococcal polysaccharide vaccine, and in the smallpox vaccine. However, phenol reduces the potency of diphtheria and tetanus toxoid-containing vaccines. Similarly, thiomersal weakens the immunogenicity of the inactivated poliovirus vaccine, so the IPOL vaccine contains 2–3 microlitres of 2-phenoxyethanol instead.
Stabilisers
Stabilisers protect the vaccine from the effects of temperature and ensure it does not degrade in storage. For vaccines that are freeze-dried, they provide a necessary bulk. Without them, the vaccine powder would be invisibly tiny (ranging from nanograms to a few tens of micrograms) and stick to the vial glass. Stabilisers used for vaccines include sugars (sucrose, lactose), sorbitol, amino acids (glycine, monosodium glutamate) and proteins (hydrolysed gelatin). There have very rarely (one in two million vaccinations) been cases of allergic reaction to the proteins in gelatin. The source of gelatin, pork, is of religious concern to Jewish and Muslim communities, though some leaders have ruled this is not a cause to reject vaccines that are injected or inhaled rather than ingested. There are alternatives for some vaccines that contain gelatin.
Acidity regulators such as phosphate salts keep the pH within a required range during manufacture and in the final product. Other salts help ensure the vaccine is isotonic with body fluids.
Manufacturing residuals
There are materials that serve no function in the final vaccine but are left over from the manufacturing process. Bacteria and viruses may be inactivated using formaldehyde. The quantity remaining in diphtheria or tetanus toxoid vaccines licensed in the US is required to be less than 0.1 milligrams (0.02%). Although formaldehyde has potentially toxic and carcinogenic properties in large doses, it is present in the blood (due to natural biochemical processes) at much higher concentrations than permitted in vaccines. Alternatives used in some vaccines include glutaraldehyde and β-propiolactone. Antibiotics may be used to prevent bacteria growing during vaccine manufacture and traces of these may remain. Antibiotics that some people are allergic to (such as cephalosporins, penicillins and sulphonamides) are not used. Those that are used include kanamycin, gentamicin, neomycin, polymyxin B, and streptomycin.
Small amounts of protein may remain from the material used to grow viruses, to which some people may be hypersensitive. Some influenza and yellow fever vaccines are grown in chicken eggs, and measles or mumps vaccines may be grown in chick embryo cell culture. Engerix-B, a recombinant DNA vaccine for hepatitis B, is produced in yeast and may contain up to five percent yeast protein. Cervarix, an HPV vaccine, is grown in a cell line from the cabbage looper moth. The amount of insect protein remaining is less than forty nanograms.
Some components of the vaccine vial or syringe may contain latex rubber. This is a problem for those with a severe allergic reaction to latex, but not for those who get contact dermatitis after wearing latex gloves.
Notes
References
Works cited
External links
Vaccine ingredients from the Oxford Vaccine Group.
Vaccine Excipient Summary from the Centers for Disease Control and Prevention (CDC).
Vaccine ingredients from Full Fact.
Vaccination
Drug manufacturing
Excipients
Vaccines | Vaccine ingredients | [
"Biology"
] | 2,942 | [
"Vaccination",
"Vaccines"
] |
70,594,036 | https://en.wikipedia.org/wiki/Laurie%20E.%20Locascio | Laurie Ellen Locascio (born November 21, 1961) is an American biomedical engineer, analytical chemist, and president and CEO of the American National Standards Institute (ANSI). She was formerly the under secretary of commerce for standards and technology and the 17th director of National Institute of Standards and Technology from 2022 to 2024. From 2017 to 2021, Locascio was vice president for research of University of Maryland, College Park and University of Maryland, Baltimore.
Early life
Locascio was born November 21, 1961, in Cumberland, Maryland. Her father was a physicist at the Allegany Ballistics Laboratory. He fostered her interest in science. She attended Bishop Walsh High School. In 1977, she was awarded an educational development certificate. Locascio had an early interest in biology and won her school's senior science award. She graduated in 1979.
Education and early career
Locascio attended James Madison University from 1979 to 1983 where she earned her B.Sc. in chemistry with a minor in biochemistry. In 1982, Locascio was a research assistant in the department of chemistry at West Virginia University. She attended the University of Utah from 1983 to 1986 while working as a research assistant in the department of bioengineering. Locascio completed her M.Sc. in bioengineering in 1986.
From 1986 to 1999, Locascio was a research biomedical engineer in the molecular spectroscopy and microfluidic methods group in the analytical chemistry division of the National Institute of Standards and Technology (NIST). She received a certificate of recognition from the United States Department of Commerce in 1987, 1989, and 1990. Locascio was awarded the Department of Commerce Bronze Medal in 1991. While working at NIST, she was encouraged by her manager Willie E. May and mentor Richard Durst to pursue a doctoral degree. From 1995 to 1999, Locascio completed a Ph.D. in toxicology at the University of Maryland School of Medicine. At the University of Maryland, Katherine S. Squibb and Bruce O. Fowler, the director of the toxicology program, supported Locascio's efforts to attend graduate school while also working at NIST. Her dissertation was titled Miniaturization of bioassays for analytical toxicology. Cheng S. Lee was her doctoral advisor and Mohyee E. Eldefrawi served on her advisory committee.
Career
Locascio is an interdisciplinary researcher. She worked at NIST for 31 years, rising from a research biomedical engineer to eventually leading the agency's material measurement laboratory. Locascio also served as the acting associate director for laboratory programs, the number two position at NIST, providing direction and operational guidance for NIST's lab research programs across two campuses in Gaithersburg, Maryland, and Boulder, Colorado. She received the 2017 American Chemical Society Earle B. Barnes Award for Leadership in Chemical Research Management, and the 2017 Washington Academy of Sciences Special Award in Scientific Leadership. Locascio has published 115 scientific papers and has received 12 patents in the fields of bioengineering and analytical chemistry. During her time at NIST, she received the Department of Commerce Silver Medal, American Chemical Society Division of Analytical Chemistry Arthur F. Findeis Award, the NIST Safety Award and the NIST Applied Research Award. Locascio is also a fellow of the American Chemical Society and the American Institute for Medical and Biological Engineering.
In late 2017, Locascio joined University of Maryland's faculty. She was the first person to serve as the vice president for research of both the College Park and Baltimore campuses. In this role, Locascio oversaw the University of Maryland's research and innovation enterprise at these two campuses, which garner a combined $1.1 billion in external research funding each year. Within Locascio's purview was the development of large interdisciplinary research programs, technology commercialization, innovation and economic development efforts, and strategic partnerships with industry, federal, academic, and nonprofit collaborators. She also served as a professor in the Fischell Department of Bioengineering at the A. James Clark School of Engineering with a secondary appointment in the department of pharmacology in the School of Medicine. In 2021, Locascio was inducted as a fellow of the National Academy of Inventors. At the University of Maryland that same year, she was succeeded by interim vice president Amitabh Varshney.
On July 16, 2021, President Joe Biden nominated Locascio as the under secretary of commerce for standards and technology. She was confirmed by the Senate on April 7, 2022. On April 19, 2022, Locascio was sworn in by U.S. secretary of commerce Gina Raimondo. She was the fourth Under Secretary of Commerce for Standards and Technology and 17th director of NIST. Locascio was the third female head of NIST. She resigned her governmental positions on December 31, 2024.
In January 2025, she assumed the role of president and CEO of the American National Standards Institute (ANSI).
References
Citations
Bibliography
External links
1961 births
Living people
21st-century American chemists
21st-century American engineers
21st-century American women scientists
American biomedical engineers
American women chemists
American women engineers
Analytical chemists
Engineers from Maryland
Fellows of the National Academy of Inventors
Fellows of the American Chemical Society
Fellows of the American Institute for Medical and Biological Engineering
James Madison University alumni
NIST Directors
People from Cumberland, Maryland
Scientists from Maryland
Under Secretaries of Commerce for Standards and Technology
University of Maryland School of Medicine alumni
University of Maryland, Baltimore faculty
University of Maryland, College Park administrators
University of Maryland, College Park faculty
University of Utah alumni
American women academic administrators
Biden administration personnel | Laurie E. Locascio | [
"Chemistry"
] | 1,154 | [
"Analytical chemists"
] |
70,596,336 | https://en.wikipedia.org/wiki/Kaniadakis%20statistics | Kaniadakis statistics (also known as κ-statistics) is a generalization of Boltzmann–Gibbs statistical mechanics, based on a relativistic generalization of the classical Boltzmann–Gibbs–Shannon entropy (commonly referred to as Kaniadakis entropy or κ-entropy). Introduced by the Greek Italian physicist Giorgio Kaniadakis in 2001, κ-statistical mechanics preserve the main features of ordinary statistical mechanics and have attracted the interest of many researchers in recent years. The κ-distribution is currently considered one of the most viable candidates for explaining complex physical, natural or artificial systems involving power-law tailed statistical distributions. Kaniadakis statistics have been adopted successfully in the description of a variety of systems in the fields of cosmology, astrophysics, condensed matter, quantum physics, seismology, genomics, economics, epidemiology, and many others.
Mathematical formalism
The mathematical formalism of κ-statistics is generated by κ-deformed functions, especially the κ-exponential function.
κ-exponential function
The Kaniadakis exponential (or κ-exponential) function is a one-parameter generalization of an exponential function, given by:
$\exp_\kappa(x) = \left(\sqrt{1 + \kappa^2 x^2} + \kappa x\right)^{1/\kappa}$
with $0 \le |\kappa| < 1$.
The κ-exponential for $\kappa \ne 0$ can also be written in the form:
$\exp_\kappa(x) = \exp\left(\frac{1}{\kappa}\,\mathrm{arcsinh}(\kappa x)\right)$
The first five terms of the Taylor expansion of $\exp_\kappa(x)$ are given by:
$\exp_\kappa(x) = 1 + x + \frac{x^2}{2} + \left(1 - \kappa^2\right)\frac{x^3}{6} + \left(1 - 4\kappa^2\right)\frac{x^4}{24} + \cdots$
where the first three are the same as a typical exponential function.
Basic properties
The κ-exponential function has the following properties of an exponential function:
$\exp_\kappa(x) \in C^\infty(\mathbb{R})$
$\exp_\kappa(x) > 0$
$\frac{d}{dx}\exp_\kappa(x) > 0$
$\exp_\kappa(0) = 1$
$\exp_\kappa(x)\,\exp_\kappa(-x) = 1$
For a real number $r$, the κ-exponential has the property:
$\left[\exp_\kappa(x)\right]^r = \exp_{\kappa/r}(r x)$.
κ-logarithm function
The Kaniadakis logarithm (or κ-logarithm) is a relativistic one-parameter generalization of the ordinary logarithm function,
$\ln_\kappa(x) = \frac{x^\kappa - x^{-\kappa}}{2\kappa}$
with $0 \le |\kappa| < 1$ and $x > 0$, which is the inverse function of the κ-exponential:
$\ln_\kappa\!\big(\exp_\kappa(x)\big) = \exp_\kappa\!\big(\ln_\kappa(x)\big) = x$
The κ-logarithm for $\kappa \ne 0$ can also be written in the form:
$\ln_\kappa(x) = \frac{1}{\kappa}\sinh(\kappa \ln x)$
The first three terms of the Taylor expansion of $\ln_\kappa(1+x)$ are given by:
$\ln_\kappa(1+x) = x - \frac{x^2}{2} + \left(1 + \frac{\kappa^2}{2}\right)\frac{x^3}{3} - \cdots$
where the first two terms are the same as those of an ordinary logarithmic function.
Basic properties
The κ-logarithm function has the following properties of a logarithmic function:
$\ln_\kappa(x) \in C^\infty(0, +\infty)$
$\frac{d}{dx}\ln_\kappa(x) > 0$
$\ln_\kappa(1) = 0$
$\ln_\kappa(1/x) = -\ln_\kappa(x)$
For a real number $r$, the κ-logarithm has the property:
$\ln_\kappa(x^r) = r \ln_{r\kappa}(x)$
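A minimal numerical sketch (Python) of the two functions defined above; the assertions check the inverse relation, the property $\exp_\kappa(x)\exp_\kappa(-x) = 1$, and the classical limit:

```python
import numpy as np

def exp_k(x, k):
    """Kaniadakis κ-exponential; reduces to exp(x) as κ → 0."""
    if k == 0:
        return np.exp(x)
    return (np.sqrt(1 + k**2 * x**2) + k * x) ** (1 / k)

def ln_k(x, k):
    """Kaniadakis κ-logarithm, the inverse of exp_k (x > 0)."""
    if k == 0:
        return np.log(x)
    return (x**k - x**(-k)) / (2 * k)

x, k = np.linspace(-2, 2, 9), 0.3
assert np.allclose(ln_k(exp_k(x, k), k), x)               # inverse pair
assert np.allclose(exp_k(x, k) * exp_k(-x, k), 1.0)       # exp_κ(x) exp_κ(-x) = 1
assert np.allclose(exp_k(x, 1e-8), np.exp(x), rtol=1e-5)  # classical limit κ → 0
```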
κ-Algebra
κ-sum
For any $x, y \in \mathbb{R}$ and $|\kappa| < 1$, the Kaniadakis sum (or κ-sum) is defined by the following composition law:
$x \oplus y = x\sqrt{1 + \kappa^2 y^2} + y\sqrt{1 + \kappa^2 x^2}$,
that can also be written in the form:
$x \oplus y = \frac{1}{\kappa}\sinh\!\big(\mathrm{arcsinh}(\kappa x) + \mathrm{arcsinh}(\kappa y)\big)$,
where the ordinary sum is a particular case in the classical limit $\kappa \to 0$: $x \oplus y \to x + y$.
The κ-sum, like the ordinary sum, has the following properties:
1. Associativity: $(x \oplus y) \oplus z = x \oplus (y \oplus z)$
2. Neutral element: $x \oplus 0 = 0 \oplus x = x$
3. Opposite element: $x \oplus (-x) = (-x) \oplus x = 0$
4. Commutativity: $x \oplus y = y \oplus x$
The κ-difference is given by $x \ominus y = x \oplus (-y)$.
The fundamental property $\exp_\kappa(x)\exp_\kappa(-x) = 1$ arises as a special case of the more general expression below:
$\exp_\kappa(x)\,\exp_\kappa(y) = \exp_\kappa(x \oplus y)$
Furthermore, the κ-functions and the κ-sum present the following relationships:
$\ln_\kappa(x\,y) = \ln_\kappa(x) \oplus \ln_\kappa(y)$
$\ln_\kappa(x/y) = \ln_\kappa(x) \ominus \ln_\kappa(y)$
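A short numerical check (Python) of the κ-sum defined above, verifying both the sinh/arcsinh form and the relationship $\exp_\kappa(x \oplus y) = \exp_\kappa(x)\exp_\kappa(y)$:

```python
import numpy as np

def kappa_sum(x, y, k):
    """κ-deformed sum: x ⊕ y = x√(1+κ²y²) + y√(1+κ²x²)."""
    return x * np.sqrt(1 + k**2 * y**2) + y * np.sqrt(1 + k**2 * x**2)

def exp_k(x, k):
    return (np.sqrt(1 + k**2 * x**2) + k * x) ** (1 / k)

x, y, k = 1.2, -0.7, 0.3
# equivalent sinh/arcsinh form of the κ-sum
alt = np.sinh(np.arcsinh(k * x) + np.arcsinh(k * y)) / k
assert np.isclose(kappa_sum(x, y, k), alt)
# fundamental relationship: exp_κ(x ⊕ y) = exp_κ(x) · exp_κ(y)
assert np.isclose(exp_k(kappa_sum(x, y, k), k), exp_k(x, k) * exp_k(y, k))
```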
κ-product
For any $x, y \in \mathbb{R}$ and $|\kappa| < 1$, the Kaniadakis product (or κ-product) is defined by the following composition law:
$x \otimes y = \frac{1}{\kappa}\sinh\!\left(\frac{1}{\kappa}\,\mathrm{arcsinh}(\kappa x)\,\mathrm{arcsinh}(\kappa y)\right)$,
where the ordinary product is a particular case in the classical limit $\kappa \to 0$: $x \otimes y \to x\,y$.
The κ-product, like the ordinary product, has the following properties:
1. Associativity: $(x \otimes y) \otimes z = x \otimes (y \otimes z)$
2. Neutral element: $x \otimes I = I \otimes x = x$, with $I = \frac{1}{\kappa}\sinh\kappa$
3. Inverse element: $x \otimes \bar{x} = I$, with $\bar{x} = \frac{1}{\kappa}\sinh\!\left(\frac{\kappa^2}{\mathrm{arcsinh}(\kappa x)}\right)$
4. Commutativity: $x \otimes y = y \otimes x$
The κ-division is given by $x \oslash y = \frac{1}{\kappa}\sinh\!\left(\kappa\,\frac{\mathrm{arcsinh}(\kappa x)}{\mathrm{arcsinh}(\kappa y)}\right)$, so that $(x \oslash y) \otimes y = x$.
The κ-sum and the κ-product obey the distributive law: $z \otimes (x \oplus y) = (z \otimes x) \oplus (z \otimes y)$.
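The distributive law can be verified numerically; a sketch (Python), assuming the sinh/arcsinh composition laws given above:

```python
import numpy as np

k = 0.25

def ksum(x, y):
    """κ-sum: x ⊕ y = x√(1+κ²y²) + y√(1+κ²x²)."""
    return x * np.sqrt(1 + k**2 * y**2) + y * np.sqrt(1 + k**2 * x**2)

def kprod(x, y):
    """κ-product: x ⊗ y = (1/κ) sinh((1/κ) arcsinh(κx) arcsinh(κy))."""
    return np.sinh(np.arcsinh(k * x) * np.arcsinh(k * y) / k) / k

x, y, z = 0.8, 1.5, -0.4
# distributive law: z ⊗ (x ⊕ y) = (z ⊗ x) ⊕ (z ⊗ y)
assert np.isclose(kprod(z, ksum(x, y)), ksum(kprod(z, x), kprod(z, y)))
# neutral element of the κ-product: I = (1/κ) sinh(κ)
assert np.isclose(kprod(x, np.sinh(k) / k), x)
```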
The fundamental property arises as a special case of the more general expression below:
Furthermore, the κ-functions and the κ-product present the following relationships:
κ-Calculus
κ-Differential
The Kaniadakis differential (or κ-differential) of $x$ is defined by:
$dx_\kappa = \frac{dx}{\sqrt{1 + \kappa^2 x^2}}$.
So, the κ-derivative of a function $f(x)$ is related to the Leibniz derivative through:
$\frac{d f(x)}{d x_\kappa} = \sqrt{1 + \kappa^2 x^2}\,\frac{d f(x)}{d x}$,
where $\sqrt{1 + \kappa^2 x^2}$ is the Lorentz factor. The ordinary derivative is a particular case of κ-derivative in the classical limit $\kappa \to 0$.
κ-Integral
The Kaniadakis integral (or κ-integral) is the inverse operator of the κ-derivative defined through
$\int f(x)\, dx_\kappa = \int \frac{f(x)}{\sqrt{1 + \kappa^2 x^2}}\, dx$,
which recovers the ordinary integral in the classical limit $\kappa \to 0$.
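A numerical sketch (Python) of the κ-derivative defined above; since $\frac{d}{dx}\exp_\kappa(x) = \exp_\kappa(x)/\sqrt{1+\kappa^2 x^2}$, the κ-exponential is an eigenfunction of the κ-derivative, which the assertion checks:

```python
import numpy as np

def exp_k(x, k=0.3):
    return (np.sqrt(1 + k**2 * x**2) + k * x) ** (1 / k)

def kappa_derivative(f, x, k=0.3, h=1e-6):
    """df/dx_κ = sqrt(1 + κ²x²) · df/dx, via a central difference."""
    leibniz = (f(x + h) - f(x - h)) / (2 * h)
    return np.sqrt(1 + k**2 * x**2) * leibniz

# d exp_κ(x) / dx_κ = exp_κ(x)
x = np.linspace(-1, 1, 5)
assert np.allclose(kappa_derivative(exp_k, x), exp_k(x), rtol=1e-5)
```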
κ-Trigonometry
κ-Cyclic Trigonometry
The Kaniadakis cyclic trigonometry (or κ-cyclic trigonometry) is based on the κ-cyclic sine (or κ-sine) and κ-cyclic cosine (or κ-cosine) functions defined by:
$\sin_\kappa(x) = \frac{\exp_\kappa(ix) - \exp_\kappa(-ix)}{2i}$,
$\cos_\kappa(x) = \frac{\exp_\kappa(ix) + \exp_\kappa(-ix)}{2}$,
where the κ-generalized Euler formula is
$\exp_\kappa(\pm ix) = \cos_\kappa(x) \pm i\sin_\kappa(x)$.
The κ-cyclic trigonometry preserves fundamental expressions of the ordinary cyclic trigonometry, which is a special case in the limit κ → 0, such as:
$\sin_\kappa^2(x) + \cos_\kappa^2(x) = 1$.
The κ-cyclic tangent and κ-cyclic cotangent functions are given by:
$\tan_\kappa(x) = \frac{\sin_\kappa(x)}{\cos_\kappa(x)}, \qquad \cot_\kappa(x) = \frac{\cos_\kappa(x)}{\sin_\kappa(x)}$.
The κ-cyclic trigonometric functions become the ordinary trigonometric function in the classical limit .
κ-Inverse cyclic function
The Kaniadakis inverse cyclic functions (or κ-inverse cyclic functions) are associated to the κ-logarithm:
$\arcsin_\kappa(x) = -i \ln_\kappa\!\left(\sqrt{1 - x^2} + ix\right)$,
$\arccos_\kappa(x) = -i \ln_\kappa\!\left(\sqrt{x^2 - 1} + x\right)$,
$\arctan_\kappa(x) = -i \ln_\kappa\!\left(\sqrt{\frac{1 + ix}{1 - ix}}\right)$,
$\mathrm{arccot}_\kappa(x) = i \ln_\kappa\!\left(\sqrt{\frac{ix + 1}{ix - 1}}\right)$.
κ-Hyperbolic Trigonometry
The Kaniadakis hyperbolic trigonometry (or κ-hyperbolic trigonometry) is based on the κ-hyperbolic sine and κ-hyperbolic cosine given by:
$\sinh_\kappa(x) = \frac{\exp_\kappa(x) - \exp_\kappa(-x)}{2}$,
$\cosh_\kappa(x) = \frac{\exp_\kappa(x) + \exp_\kappa(-x)}{2}$,
where the κ-Euler formula is
$\exp_\kappa(x) = \cosh_\kappa(x) + \sinh_\kappa(x)$.
The κ-hyperbolic tangent and κ-hyperbolic cotangent functions are given by:
$\tanh_\kappa(x) = \frac{\sinh_\kappa(x)}{\cosh_\kappa(x)}, \qquad \coth_\kappa(x) = \frac{\cosh_\kappa(x)}{\sinh_\kappa(x)}$.
The κ-hyperbolic trigonometric functions become the ordinary hyperbolic trigonometric functions in the classical limit .
From the κ-Euler formula and the property $\exp_\kappa(x)\exp_\kappa(-x) = 1$, the fundamental expression of κ-hyperbolic trigonometry is given as follows:
$\cosh_\kappa^2(x) - \sinh_\kappa^2(x) = 1$
κ-Inverse hyperbolic function
The Kaniadakis inverse hyperbolic functions (or κ-inverse hyperbolic functions) are associated to the κ-logarithm:
$\mathrm{arcsinh}_\kappa(x) = \ln_\kappa\!\left(x + \sqrt{x^2 + 1}\right)$,
$\mathrm{arccosh}_\kappa(x) = \ln_\kappa\!\left(x + \sqrt{x^2 - 1}\right)$,
$\mathrm{arctanh}_\kappa(x) = \ln_\kappa\!\left(\sqrt{\frac{1 + x}{1 - x}}\right)$,
$\mathrm{arccoth}_\kappa(x) = \ln_\kappa\!\left(\sqrt{\frac{x + 1}{x - 1}}\right)$,
in which are valid the following relations:
$\mathrm{arcsinh}_\kappa(x) = \mathrm{arccosh}_\kappa\!\left(\sqrt{x^2 + 1}\right)$,
$\mathrm{arcsinh}_\kappa(x) = \mathrm{arctanh}_\kappa\!\left(\frac{x}{\sqrt{x^2 + 1}}\right)$,
$\mathrm{arcsinh}_\kappa(x) = \mathrm{arccoth}_\kappa\!\left(\frac{\sqrt{x^2 + 1}}{x}\right)$.
The κ-cyclic and κ-hyperbolic trigonometric functions are connected by the following relationships:
$\sin_\kappa(x) = -i \sinh_\kappa(ix)$,
$\cos_\kappa(x) = \cosh_\kappa(ix)$,
$\tan_\kappa(x) = -i \tanh_\kappa(ix)$,
$\cot_\kappa(x) = i \coth_\kappa(ix)$,
$\arcsin_\kappa(x) = -i\, \mathrm{arcsinh}_\kappa(ix)$,
$\arccos_\kappa(x) = -i\, \mathrm{arccosh}_\kappa(x)$,
$\arctan_\kappa(x) = -i\, \mathrm{arctanh}_\kappa(ix)$,
$\mathrm{arccot}_\kappa(x) = i\, \mathrm{arccoth}_\kappa(ix)$.
Kaniadakis entropy
The Kaniadakis statistics is based on the Kaniadakis κ-entropy, which is defined through:
$S_\kappa = -\int f(x)\, \ln_\kappa f(x)\, dx$
where $f(x)$ is a probability distribution function defined for a random variable $x$, and $0 < |\kappa| < 1$ is the entropic index. In the discrete case it takes the form $S_\kappa = -\sum_i p_i \ln_\kappa p_i = -\sum_i \frac{p_i^{1+\kappa} - p_i^{1-\kappa}}{2\kappa}$.
The Kaniadakis κ-entropy is thermodynamically and Lesche stable and obeys the Shannon-Khinchin axioms of continuity, maximality, generalized additivity and expandability.
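A minimal sketch (Python) of the discrete form of the κ-entropy given above, showing that the Boltzmann–Gibbs–Shannon entropy is recovered as κ → 0:

```python
import numpy as np

def kaniadakis_entropy(p, k):
    """S_κ = -Σᵢ (pᵢ^{1+κ} - pᵢ^{1-κ}) / (2κ); Shannon entropy as κ → 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if k == 0:
        return -np.sum(p * np.log(p))
    return -np.sum((p ** (1 + k) - p ** (1 - k)) / (2 * k))

p = [0.5, 0.25, 0.125, 0.125]
print(kaniadakis_entropy(p, 0.4))    # κ-deformed value
print(kaniadakis_entropy(p, 1e-6))   # ≈ Shannon entropy
print(kaniadakis_entropy(p, 0.0))    # Shannon reference, ≈ 1.213
```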
Kaniadakis distributions
A Kaniadakis distribution (or κ-distribution) is a probability distribution derived from the maximization of Kaniadakis entropy under appropriate constraints. In this regard, several probability distributions emerge for analyzing a wide variety of phenomenology associated with experimental power-law tailed statistical distributions.
κ-Exponential distribution
κ-Gaussian distribution
κ-Gamma distribution
κ-Weibull distribution
κ-Logistic distribution
Kaniadakis integral transform
κ-Laplace Transform
The Kaniadakis Laplace transform (or κ-Laplace transform) is a κ-deformed integral transform of the ordinary Laplace transform. The κ-Laplace transform converts a function $f$ of a real variable $t$ to a new function $F_\kappa(s)$ in the complex frequency domain, represented by the complex variable $s$. This κ-integral transform is defined as:
$F_\kappa(s) = \mathcal{L}_\kappa[f(t)](s) = \int_0^\infty f(t)\,\big[\exp_\kappa(-t)\big]^s\, dt$
The inverse κ-Laplace transform is given by:
$f(t) = \frac{1}{2\pi i}\,\frac{1}{\sqrt{1 + \kappa^2 t^2}} \int_{c - i\infty}^{c + i\infty} F_\kappa(s)\,\big[\exp_\kappa(t)\big]^s\, ds$
The ordinary Laplace transform and its inverse transform are recovered as $\kappa \to 0$.
Properties
Let two functions and , and their respective κ-Laplace transforms and , the following table presents the main properties of κ-Laplace transform:
The κ-Laplace transforms presented in the latter table reduce to the corresponding ordinary Laplace transforms in the classical limit .
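A numerical illustration (Python), assuming the kernel form $[\exp_\kappa(-t)]^s$ given above; for κ → 0 the ordinary Laplace transform of $f(t) = 1$, namely $1/s$, is recovered:

```python
import numpy as np
from scipy.integrate import quad

def exp_k(x, k):
    if k == 0:
        return np.exp(x)
    return (np.sqrt(1 + k**2 * x**2) + k * x) ** (1 / k)

def kappa_laplace(f, s, k):
    """F_κ(s) = ∫₀^∞ f(t) [exp_κ(-t)]^s dt (assumed forward form)."""
    value, _ = quad(lambda t: f(t) * exp_k(-t, k) ** s, 0, np.inf)
    return value

f = lambda t: 1.0
print(kappa_laplace(f, 2.0, 0.0))  # ordinary Laplace of 1: 1/s = 0.5
print(kappa_laplace(f, 2.0, 0.3))  # κ-deformed value, larger (heavier tail)
```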
κ-Fourier Transform
The Kaniadakis Fourier transform (or κ-Fourier transform) is a κ-deformed integral transform of the ordinary Fourier transform, which is consistent with the κ-algebra and the κ-calculus. The κ-Fourier transform is defined as:
which can be rewritten as
where and . The κ-Fourier transform imposes an asymptotically log-periodic behavior by deforming the parameters and in addition to a damping factor, namely .
The kernel of the κ-Fourier transform is given by:
The inverse κ-Fourier transform is defined as:
Let , the following table shows the κ-Fourier transforms of several notable functions:
The κ-deformed version of the Fourier transform preserves the main properties of the ordinary Fourier transform, as summarized in the following table.
The properties of the κ-Fourier transform presented in the latter table reduce to the corresponding ordinary Fourier transforms in the classical limit .
See also
Giorgio Kaniadakis
Kaniadakis distribution
Kaniadakis κ-Exponential distribution
Kaniadakis κ-Gaussian distribution
Kaniadakis κ-Gamma distribution
Kaniadakis κ-Weibull distribution
Kaniadakis κ-Logistic distribution
Kaniadakis κ-Erlang distribution
References
External links
Giorgio Kaniadakis Google Scholar page
Kaniadakis Statistics on arXiv.org
Statistical mechanics | Kaniadakis statistics | [
"Physics"
] | 1,819 | [
"Statistical mechanics"
] |
70,600,204 | https://en.wikipedia.org/wiki/Silicon%20isotope%20biogeochemistry | Silicon isotope biogeochemistry is the study of environmental processes using the relative abundance of Si isotopes. As the relative abundance of Si stable isotopes varies among different natural materials, the differences in abundance can be used to trace the source of Si, and to study biological, geological, and chemical processes. The study of stable isotope biogeochemistry of Si aims to quantify the different Si fluxes in the global biogeochemical silicon cycle, to understand the role of biogenic silica within the global Si cycle, and to investigate the applications and limitations of the sedimentary Si record as an environmental and palaeoceanographic proxy.
Background
Silicon in nature is typically bonded to oxygen, in a tetravalent oxidation state. The major forms of solid Si are silicate minerals and amorphous silica, whereas in aqueous solutions the dominant forms are orthosilicic acid and its dissociated species. There are three stable isotopes of Si, associated with the following mean natural abundances: 28Si– 92.23%, 29Si– 4.67%, and 30Si– 3.10%. The isotopic composition of Si is often formulated by the delta notation, as the following:
$\delta^{30}\mathrm{Si} = \left(\frac{(^{30}\mathrm{Si}/^{28}\mathrm{Si})_{\mathrm{sample}}}{(^{30}\mathrm{Si}/^{28}\mathrm{Si})_{\mathrm{standard}}} - 1\right) \times 1000\,\text{‰}$
The reference material (standard) for defining the δ30Si of a sample is the National Bureau of Standards (NBS) 28 Sand Quartz, which has been certified and distributed by the National Institute of Standards and Technology (NIST), and is also named NIST RM 8546. Currently, there are four main analytical methods for the measurement of Si isotopes: Gas Source Isotope-Ratio Mass Spectrometry (GC-IRMS), Secondary Ion Mass Spectrometry (SIMS), Multi-Collector Inductively Coupled Plasma Mass Spectrometry (MC–IPC–MS), and Laser Ablation MC–ICP–MS.
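A minimal sketch (Python) of the delta notation defined above; the isotope ratios used here are hypothetical, for illustration only:

```python
def delta30si(ratio_sample: float, ratio_standard: float) -> float:
    """δ³⁰Si in per mil (‰) relative to the NBS 28 quartz standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# hypothetical measured ³⁰Si/²⁸Si ratios
r_std = 0.033532      # standard (NBS 28), assumed value
r_sample = 0.033570   # sample, assumed value
print(f"δ³⁰Si = {delta30si(r_sample, r_std):+.2f} ‰")  # ≈ +1.13 ‰
```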
Si isotopes in the Si biogeochemical cycle
Primary minerals and weathering
Primary minerals are the minerals that crystalize during the formation of Earth's crust, and their typical δ30Si isotopic value is in the range of −0.9‰ – +1.4‰. Earth's crust is constantly undergoing weathering processes, which dissolve Si and produce secondary Si minerals simultaneously. The formation of secondary Si discriminates against the heavy Si isotope (30Si), creating minerals with relatively low δ30Si isotopic values (−3‰ – +2.5‰, mean: −1.1‰). It has been suggested that this isotopic fractionation is controlled by the kinetic isotope effect of Si adsorption to Aluminum hydroxides, which takes place in early stages of weathering. As a result of incorporation of lighter Si isotopes into secondary minerals, the remaining dissolved Si will be relative enriched in the heavy Si isotope (30Si), and associated with relatively high δ30Si isotopic values (−1‰ – +2‰, mean: +0.8‰). The dissolved Si is often transported by rivers to the oceans.
Terrestrial vegetation
Silicon uptake by plants typically discriminates against the light Si isotope, forming 30Si-enriched plants (δ30Si of 0–6‰). The reason for this relatively large isotopic fractionation remains unclear, mainly because the mechanisms of Si uptake by plants are yet to be understood. Silicon in plants can be found in the xylem, which is associated with exceptionally high δ30Si values. Phytoliths, microscopic structures of silica in plant tissues, have relatively lower δ30Si values. For example, it was reported that the mean δ30Si of phytoliths in various wheat organs ranged from −1.4 to 2.1‰, which is lower than the typical range for vegetation (δ30Si of 0–6‰). Phytoliths are relatively soluble, and as plants decay they contribute to the terrestrial dissolved Si budget.
Biomineralization in aquatic environments
In aquatic environments (rivers, lakes and ocean), dissolved Si is utilized by diatoms, dictyochales, radiolarians and sponges to produce solid bSiO2 structures. The biomineralized silica has an amorphous structure and therefore its properties may vary among the different organisms. Biomineralization by diatoms induces the largest Si flux within the ocean, and thus it has a crucial role in the global Si cycle. During Si uptake by diatoms, there is an isotopic discrimination against the heavy isotope, forming 30Si-depleted biogenic silica minerals. As a result, the remaining dissolved Si in the surrounding water is 30Si-enriched. Since diatoms rely on sunlight for photosynthesis, they inhabit surface waters, and thus the surface waters of the ocean are typically 30Si-enriched. Although there is less available data on the isotopic fractionation during biomineralization by radiolarians, it has been suggested that radiolarians also discriminate against the heavy isotope (30Si), and that the magnitude of isotopic fractionation is of a similar range as biomineralization by diatoms. Sponges also show an isotopic preference for 28Si over 30Si, but the magnitude of their isotopic fractionation is often larger.
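The progressive 30Si-enrichment of surface waters during biomineralization is often described with a Rayleigh fractionation model; a sketch (Python), assuming a constant fractionation of about −1.1‰ for diatom silica (a commonly used value):

```python
import numpy as np

def rayleigh_delta(delta0, eps, f):
    """δ³⁰Si of the residual dissolved Si pool (Rayleigh model).

    delta0 : initial δ³⁰Si of dissolved Si (‰)
    eps    : fractionation of bSiO₂ formation (‰), ≈ -1.1 for diatoms
    f      : fraction of dissolved Si remaining (0 < f ≤ 1)
    """
    return delta0 + eps * np.log(f)

for f in (1.0, 0.5, 0.1):
    print(f"f = {f:>4}: δ³⁰Si(dissolved) = {rayleigh_delta(1.1, -1.1, f):+.2f} ‰")
# the residual dissolved Si grows heavier as utilization increases
```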
Hydrothermal vents
Hydrothermal vents contribute dissolved Si to the ocean Si reservoir. Currently, it is challenging to determine the magnitude of hydrothermal Si fluxes, due to lack of data on the δ30Si values associated with this flux. There are only two published data points of the δ30Si value of hydrothermal vents (−0.4‰ and −0.2‰).
Diagenesis
The δ30Si value of sediment porewater may be affected by post-depositional (diagenetic) precipitation or dissolution of Si. It is important to understand the extent and isotopic fractionations of these processes, as they alter the δ30Si values of the originally deposited sediments, and determine the δ30Si preserved in the rock record. Generally, precipitation of Si prefers the light isotope (28Si) and leads to 30Si-enriched dissolved Si in the hosting solution. The isotopic effect of Si dissolution in porewater is yet to be clear, as some studies report a preference for 28Si during dissolution, while other studies document that isotopic fractionation was not expressed during dissolution of sediments.
Paleoceanography proxies
The silicic acid leakage hypothesis
The silicic acid leakage hypothesis (SALH) is a suggested mechanism that aims to explain the atmospheric CO2 variations between glacial and interglacial periods. This hypothesis proposes that during glacial periods, as a result of enhanced dust deposition in the southern ocean, diatoms consume less Si relative to nitrogen. The decrease in the Si:N uptake ratios leads to Si excess in the southern ocean, which leaks to lower latitudes of the ocean that are dominated by coccolithophores. As the Si concentrations rise, the diatom population may outcompete the coccolithophores, reducing the CaCO3 precipitation and altering ocean alkalinity and the carbonate pump. These changes would induce a new ocean-atmosphere steady state with lower atmospheric CO2 concentrations, consistent with the draw down of CO2 observed in the last glacial period. The δ30Si and δ15N isotopic values archived in the southern ocean diatom sediments has been used to examine this hypothesis, as the dynamics of Si and N supply and utilization during the last deglaciation could be interpreted from this record. In alignment with the silicic acid leakage hypothesis, these isotopic archives suggest that Si utilization in the southern ocean increased during the deglaciation.
Si isotope palaeothermometry
There have been attempts to reconstruct ocean paleotemperatures from the chert Si isotopic record, which proposed that Archean seawater temperatures were significantly higher than modern (~70 °C). However, subsequent studies question this palaeothermometry method and offer alternative explanations for the δ30Si values of Archean rocks. These signals could result from diagenetic alteration processes that overprint the original δ30Si values, or reflect that Archean cherts were composed of different Si sources. It is plausible that during the Archean the dominant sources of Si sediments were weathering, erosion, silicification of clastic sediments or hydrothermal activity, in contrast to the vast SiO2 biomineralization in the modern ocean.
Paleo Si concentrations
According to empirical calibrations, the difference in δ30Si (denoted as Δ30Si) between sponges and their hosting water is correlated with the Si concentration of the hosting solution. Therefore, it has been suggested that the Si concentrations in bottom waters of ancient oceans can be interpreted from the δ30Si of coexisting sponge spicules, which are preserved in the rock record. It has been proposed that this relation is determined by the growth rate and the Si uptake kinetics of sponges, but the current understanding of sponge biomineralization pathways is limited. Although the mechanism behind this relation is yet to be clear, it appears consistent among various laboratory experiments, modern environments, and core top sediments. However, there is also evidence that the δ30Si of carnivorous sponges may differ significantly from the expected correlation.
See also
Isotopes of silicon
Isotope geochemistry
Stable isotope ratio
Isotope-ratio mass spectrometry
References
Geochemistry
Biochemistry
Biomineralization
Weathering | Silicon isotope biogeochemistry | [
"Chemistry",
"Biology"
] | 1,986 | [
"Biochemistry",
"nan",
"Bioinorganic chemistry",
"Biomineralization"
] |
70,603,054 | https://en.wikipedia.org/wiki/Stress%20distribution%20in%20soil | Stress distribution in soil is a function of the type of soil, the relative rigidity of the soil and the footing, and the depth of foundation at level of contact between footing and soil. The estimation of vertical stresses at any point in a soil mass due to external loading is essential to the prediction of settlements of buildings, bridges and pressure.
References
Hydrology
Hydraulic engineering
Soil mechanics
Soil physics | Stress distribution in soil | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 79 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Soil physics",
"Soil mechanics",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Environmental engineering",
"Hydraulic engineering"
] |
70,605,254 | https://en.wikipedia.org/wiki/Indirect%20detection%20of%20dark%20matter | Indirect detection of dark matter is a method of searching for dark matter that focuses on looking for the products of dark matter interactions (particularly Standard Model particles) rather than the dark matter itself. Contrastingly, direct detection of dark matter looks for interactions of dark matter directly with atoms. There are experiments aiming to produce dark matter particles using colliders. Indirect searches use various methods to detect the expected annihilation cross sections for weakly interacting massive particles (WIMPs). It is generally assumed that dark matter is stable (or has a lifetime long enough to appear stable), that dark matter interacts with Standard Model particles, that there is no production of dark matter post-freeze-out, and that the universe is currently matter-dominated, while the early universe was radiation-dominated. Searches for the products of dark matter interactions are profitable because there is an extensive amount of dark matter present in the universe, and presumably, a lot of dark matter interactions and products of those interactions (which are the focus of indirect detection searches); and many currently operational telescopes can be used to search for these products. Indirect searches help to constrain the annihilation cross section the lifetime of dark matter , as well as the annihilation rate.
Dark matter interactions
Indirect detection relies on the products of dark matter interactions. Thus, there are several different models of dark matter interactions to consider. Dark matter (DM) is often considered stable, as a lifetime greater than the age of the universe is required (~$10^{10}$ yrs) for large amounts of DM to be present today. In fact, it seems that the abundance of DM has not changed significantly while the universe has been matter-dominated. Using measurements of the CMB and other large scale structures, the lifetime of DM can be roughly constrained to $\tau \gtrsim 10^{19}$ s. Thus, annihilating DM is the focus of most indirect searches.
Annihilating dark matter
An annihilation cross section on the order of $\langle\sigma v\rangle \sim 3\times10^{-26}\,\mathrm{cm^3\,s^{-1}}$ is consistent with the measured cosmological density of DM. Thus, the objects of indirect searches are the secondary products that are expected from the annihilation of two dark matter particles. When observations of those secondary products reveal cross sections on the order of the expected (or near that order of magnitude, with some expected or known discrepancy) the source of those products may become a dark matter candidate, or an indication of dark matter (an indirect signal). In general, the DM is expected to be a thermal relic for the cross section given above.
Note that the "J-factor" of a given potential source of dark matter interaction products is the energy spectrum integrated along the line of sight, taking only the term dependent on the distribution of the DM mass density. For annihilation, that J-factor is commonly given as,
where is the mass density of DM. The J-factor is essentially a predictive measurement of a potential annihilation signal. The J-factor depends on the density, so if the density of a given region is not well-known or well-defined, then it can be difficult to determine the size of the expected signal. For example, since it is difficult to distinguish and remove backgrounds near the galactic center the calculated J-factor for that region varies by several orders of magnitude, depending on the density profile used.
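A rough numerical sketch (Python) of the line-of-sight part of this integral for an NFW density profile; the profile parameters and viewing angle below are placeholder values, chosen only to illustrate how sensitive the result is to the assumed profile:

```python
import numpy as np
from scipy.integrate import quad

KPC_TO_CM = 3.086e21  # centimetres per kiloparsec

def rho_nfw(r_kpc, rho_s=0.3, r_s=20.0):
    """NFW density profile in GeV/cm³; rho_s and r_s are placeholders."""
    x = r_kpc / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def dJ_dOmega(psi_rad, d_sun=8.1, s_max=100.0):
    """Line-of-sight part of the J-factor, ∫ ρ² ds, at angle ψ from the
    Galactic centre, in GeV² cm⁻⁵ sr⁻¹ (d_sun and s_max in kpc)."""
    def integrand(s):
        r = np.sqrt(d_sun**2 + s**2 - 2.0 * d_sun * s * np.cos(psi_rad))
        return rho_nfw(r) ** 2
    val, _ = quad(integrand, 0.0, s_max,
                  points=[d_sun * np.cos(psi_rad)], limit=200)
    return val * KPC_TO_CM  # path-length element converted from kpc to cm

print(f"dJ/dΩ at ψ = 1°: {dJ_dOmega(np.radians(1.0)):.2e} GeV² cm⁻⁵ sr⁻¹")
```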
Decaying dark matter
However, if DM is unstable, it would decay and produce decay products that could be observed. Since decay only involves one DM particle (while annihilation requires two), the flux of DM decay products is proportional to the DM density, $\rho$, rather than $\rho^2$ in the case of annihilation. There have been efforts to search for DM decay products in gamma rays, X-rays, cosmic rays, and neutrinos. For unstable dark matter of mass in the GeV–TeV range, the decay products are high-energy photons. These photons contribute to the extragalactic gamma ray background (EGRB). Studies of the EGRB using the Fermi satellite have revealed constraints on the lifetime of dark matter of order $10^{28}$ s, for masses between about 100 GeV and 1 TeV. The constraints derived from the EGRB are relatively unaffected by additional astrophysical uncertainties. NuSTAR observations have been used to search for X-ray lines to further constrain decaying DM for masses in the 10 to 50 keV range. For sterile neutrinos, there are several existing constraints based on X-ray limits. For DM masses in the keV range, there are well-defined constraints on the mixing angle. Neutrinos have been used to derive constraints for DM masses in the GeV–TeV range. Combined data from Fermi gamma-ray observations and IceCube neutrino observations give constraints depending on energy and defined by the criterion $S/\sqrt{B}$, with $S$ defined as the given signal, $B$ as the muon neutrino background, and the ratio setting the Gaussian significance. For low energies, the constraints improve with time as $\sqrt{t}$. For high energies, the constraints are not well-defined, as neutrino flux is no longer dominant. Thus, there are constraints on the properties of decaying DM for masses ranging from keV to TeV. Additionally, in the case of decay, the signal strength (like J-factor for the case of annihilation) is dependent only on density, rather than density squared: $D = \int_{\Delta\Omega} d\Omega \int_{\mathrm{l.o.s.}} \rho\, ds$. For sufficiently distant sources, the signal strength can then be approximated as $M/d^2$, where $M$ is the source mass and $d$ its distance.
Methods of indirect detection
There are currently many different avenues through which indirect searches for dark matter may be carried out. In general, indirect detection searches focus on either gamma-rays, cosmic-rays, or neutrinos. There are many instruments that have been used in efforts to detect dark matter annihilation products, including H.E.S.S., VERITAS, and MAGIC (Cherenkov telescopes), Fermi Large Area Telescope (LAT), High Altitude Water Cherenkov Experiment (HAWC), and Antares, IceCube, and SuperKamiokande (neutrino telescopes). Each of these telescopes participates in the search for a signal from WIMPs, focusing, respectively, on sources ranging from the Galactic center or galactic halo, to galaxy clusters, to dwarf galaxies, depending on allowable energy range for each instrument. A DM annihilation signal has not yet been confirmed, and instead, constraints are placed on DM particles through limits on the annihilation cross section of WIMPs, on the lifetime of dark matter (in the case of decay), as well as on the annihilation rate and flux.
WIMP annihilation limits
Gamma-ray searches
In order to detect or constrain the properties of dark matter, observations of dwarf galaxies have been carried out. Limits may be placed on the annihilation cross section of WIMPs based on analysis of either gamma-rays or cosmic rays. The VERITAS, MAGIC, Fermi, and H.E.S.S. telescopes are among those that have been involved in the observation of gamma-rays. The air Cherenkov telescopes (H.E.S.S., MAGIC, VERITAS) are most effective at constraining the annihilation cross section for high energies ($\gtrsim 100$ GeV).
For energies below 100 GeV, Fermi is more effective, as this telescope is not constrained to a view of only a small portion of the sky (as the ground-based telescopes are). From six years of Fermi data observing dwarf galaxies of the Milky Way, the DM mass is constrained to $\gtrsim 100$ GeV (masses below this threshold are not allowed for annihilation at the thermal relic cross section). Then, combining data from Fermi and MAGIC, improved upper limits on the annihilation cross section were obtained. This collaboration produced constraints for DM masses in the range 10 GeV to 100 TeV. Note that Fermi data dominates for the low mass end of the range, while MAGIC dominates for the high masses.
VERITAS has been used to observe high energy gamma-rays in the range 85 GeV to 30 TeV in searches for DM annihilation products.
Cosmic-ray searches
Cosmic ray analyses primarily observe positrons and antiprotons. The AMS experiment is one such project, providing data on cosmic ray electrons and positrons in the 0.5 GeV to 350 GeV range. AMS data allows for constraints on DM masses in the GeV range. Results from AMS constrain the annihilation cross section for such masses (with the thermally averaged cross section noted as $\langle\sigma v\rangle$). The upper limit for the annihilation cross section can also be used to find a limit for the decay width of a DM particle. These analyses are also subject to substantial uncertainty, particularly pertaining to the Sun's magnetic field, as well as the production cross section for antiprotons.
Galactic center
The galactic center is hypothesized to be a source of large amounts of dark matter annihilation products. However, the background at the galactic center is both bright and not yet well understood (based on the model of the Milky Way in use, the flux of annihilation products can vary by several orders of magnitude). The Galactic center is a unique source of high mass dark matter, which cannot be replicated in colliders. Thus, telescopes like Fermi and H.E.S.S. have observed the excess of gamma-rays coming from the galactic center, as backgrounds are lower for gamma-rays (and unknown backgrounds at the galactic center typically cause large uncertainties for dark matter searches). The annihilation cross section is consistent with the expected $\langle\sigma v\rangle \sim 3\times10^{-26}\,\mathrm{cm^3\,s^{-1}}$. In the case that those excess gamma-rays are products of dark matter annihilation, they must originate from dark matter with a mass of a few tens of GeV.
H.E.S.S., an imaging atmospheric Cherenkov telescope, has been used to observe this excess of very high energy gamma-rays emanating from the galactic center. Probing energies in the range GeV to TeV, H.E.S.S. data allowed for limits on internal bremsstrahlung processes to be determined, which then allowed for upper limits on DM annihilation flux to be defined.
Overall, the galactic center is a focus for indirect searches due to its excess of gamma-rays. That excess has a best-fit annihilation cross section which is on the order of the thermally averaged annihilation cross section, making the gamma-ray excess a potential dark matter candidate.
Heavy dark matter
Heavy DM has $m_{\mathrm{DM}} \gtrsim 1$ TeV. Dark matter with mass in this regime is expected to result in high-energy photons that, through pair production, create a cascade of electrons and photons, eventually leading to low energy gamma-rays. Those low-energy gamma-rays can be observed by telescopes like Fermi, and then constrain the annihilation rate accordingly. Additionally, for decaying DM with masses greater than the TeV range, the lifetime is constrained to $\tau \gtrsim 10^{28}$ s.
Light dark matter
Contrastingly, light DM has $m_{\mathrm{DM}} \lesssim 1$ GeV, and it becomes difficult to observe products at these lower masses and energies. Fermi is limited by its angular resolution and cannot observe products below its energy threshold. To observe products at the lower mass limit, either a low-energy gamma-ray telescope or an X-ray telescope is required.
Cosmic ray positron excess
An excess of positrons (in the flux ratio of positrons to electron and positron pairs) was found by PAMELA, in observing cosmic rays. Fermi and AMS-02 later confirmed this excess. One possible explanation for this excess of positrons is annihilating dark matter. For energies from roughly 10 GeV to a few hundred GeV, the ratio of positrons to electron-positron pairs continues to increase, indicating that the annihilating dark matter is producing positrons (and the flux increases with the DM mass). There are alternative explanations for this excess of positrons, including pulsars or supernova remnants. In 2017, data from the HAWC Collaboration indicated that the increase in flux of positrons from the two nearest pulsars (Geminga and Monogem) is roughly equivalent to the excess originally observed by PAMELA.
The 3.5 keV line
In 2014, a spectral line at an energy of 3.5 keV was found in the observation of galaxy clusters. Further investigation of this spectral line by Chandra and XMM-Newton failed to find such a line, and thus, there is debate about whether the spectral line is evidence of dark matter. There are several explanations: (1) the source is a decaying sterile neutrino, with a mass of 7 keV (cold dark matter), and thus, is not subject to the constraints on warm dark matter. This explanation is consistent with observation of the spectral line at 3.5 keV, as expected, in both the cosmic X-ray background and the Galactic center, but inconsistent with the results from Chandra and XMM-Newton; (2) the source is heavier than 3.5 keV, but has a "metastable excited state" at 3.5 keV and a decay emits a photon of that same energy; (3) the DM source decays, producing a 3.5 keV axion-like particle, which could turn into a photon under some external magnetic field. The actual explanation cannot yet be confirmed. Thus, the 3.5 keV line remains as evidence of a potential DM candidate.
In 2023 a research preprint published on Arxiv questioned the existence of the 3.5 keV spectral line; the authors of the research, when trying to replicate the results pointing to the existence of the 3.5 keV spectral line failed to reproduce these results in five out of six cases, leading them to conclude:"We conclude that there is little robust evidence for the existence of the 3.5 keV line".
Cosmic microwave background
The cosmic microwave background (CMB) can also be analyzed in order to constrain dark matter annihilation products. If the number of dark matter annihilations is given as,
where $H$ is the expansion rate, $V$ is the comoving volume, and $\langle\sigma v\rangle$ is the averaged annihilation cross section, then the number of dark matter annihilations during both the period of matter-radiation equality and matter domination can be determined. From the above equation for the number of dark matter annihilations, and based on a typical GeV-scale dark matter mass, that dark matter would ionize a significant portion of hydrogen atoms at the time of recombination. Thus, dark matter would have a noticeable effect on the CMB, as observed today.
Because the anisotropies found in the CMB are sensitive to any increase in energy, those anisotropies can be calculated under the assumption that the energy increase is due to some DM annihilation, in an effort to determine constraints on that DM annihilation. The Planck Collaboration used the relation
$p_{\mathrm{ann}} = f_{\mathrm{eff}}\,\frac{\langle\sigma v\rangle}{m_{\mathrm{DM}}}$
(where $f_{\mathrm{eff}}$ parameterizes the fraction of the energy released into the intergalactic medium by a DM annihilation process) to determine a parameter, $p_{\mathrm{ann}}$, to constrain DM annihilations based on CMB anisotropies and polarization. The Planck Collaboration found that CMB-constraints were more reliable than other methods for smaller masses (below ~10 GeV). CMB-constraints are also most reliable for any DM annihilation that results in either protons or electrons (that is, excluding annihilation into neutrinos).
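A back-of-the-envelope sketch (Python) of how such a bound translates into a mass limit for a thermal relic; the numerical limit and efficiency factor below are illustrative assumptions in the style of the Planck analyses:

```python
# Which thermal-relic masses are disfavoured by a CMB bound of the form
# p_ann = f_eff <σv> / m?  (limit and f_eff below are assumed values)
SIGMA_V = 3e-26        # cm³/s, thermal relic annihilation cross section
P_ANN_LIMIT = 3.2e-28  # cm³ s⁻¹ GeV⁻¹, a Planck-2018-style bound
F_EFF = 0.15           # typical efficiency for photon/e± rich channels

m_min = F_EFF * SIGMA_V / P_ANN_LIMIT
print(f"thermal relics excluded below m ≈ {m_min:.0f} GeV")  # ≈ 14 GeV
```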
Alternative explanations
Some of the alternative explanations are mentioned in their respective sections above, but there are many alternative explanations for the sources various that are considered potential DM signal candidates. For example, the excess of gamma-rays at the galactic center could be due to pulsars near the galactic center, rather than dark matter. Additionally, as previously mentioned, the excess of cosmic-ray positrons could be due to nearby pulsars increasing the flux of positrons.
It should also be noted that it is possible for dark matter to annihilate with a cross section smaller than the thermally averaged value of $\langle\sigma v\rangle \sim 3\times10^{-26}\,\mathrm{cm^3\,s^{-1}}$, but current instrumentation does not allow for the investigation of such a model. Some of those additional models include velocity dependent processes, in which the cross section scales with the square of the relative velocity ($\sigma v \propto v^2$) of the two annihilating dark matter particles. Another model is that of resonant annihilations, in which dark matter is assumed to annihilate near resonance, causing the cross section at the time of freeze-out to be significantly higher (or lower) than is observed today (due to the increased velocity at resonance, and the relatively low velocity assumed at present). Asymmetric dark matter is a model that suggests a primordial asymmetry in the abundance of dark matter particles and antiparticles.
References
Dark matter
Physics beyond the Standard Model
Observational cosmology | Indirect detection of dark matter | [
"Physics",
"Astronomy"
] | 3,484 | [
"Dark matter",
"Unsolved problems in astronomy",
"Concepts in astronomy",
"Unsolved problems in physics",
"Particle physics",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
54,923,730 | https://en.wikipedia.org/wiki/VOTCA | Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) is a Coarse-grained modeling package, which focuses on the analysis of molecular dynamics data, the development of systematic coarse-graining techniques as well as methods used for simulating microscopic charge (and exciton) transport in disordered semiconductors. It was originally developed at the Max Planck Institute for Polymer Research, and is now maintained by developers at the Max Planck Institute for Polymer Research, Los Alamos National Laboratory, Eindhoven University of Technology and the Beckman Institute for Advanced Science and Technology with contributions from researcher worldwide.
Features
VOTCA has 3 major parts, the Coarse-graining toolkit (VOTCA-CSG), the Charge Transport toolkit (VOTCA-CTP) and the Excitation Transport Toolkit (VOTCA-XTP). All of them are based on the VOTCA Tools library, which implements shared procedures.
Coarse-graining toolkit (VOTCA-CSG)
VOTCA-CSG supports a variety of different coarse-graining methods, including (iterative) Boltzmann Inversion, Inverse Monte Carlo, Force Matching (also known as the multiscale coarse-graining method) and the Relative entropy method and hybrid combinations of those as well as optimization-driven approaches, like simplex and CMA. To gather statistics VOTCA-CSG can use multiple molecular dynamics packages, including GROMACS, DL_POLY, ESPResSo, ESPResSo++, LAMMPS and HOOMD-blue, for sampling.
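As an illustration of the simplest of these methods, iterative Boltzmann inversion refines a coarse-grained pair potential until the simulated radial distribution function g(r) matches a target; a schematic sketch (Python, not VOTCA's actual API):

```python
import numpy as np

KBT = 2.494  # kJ/mol at 300 K (assumed units)

def ibi_initial_guess(g_target):
    """Plain Boltzmann inversion of the target RDF: U_0(r) = -kBT ln g(r)."""
    with np.errstate(divide="ignore"):
        return -KBT * np.log(g_target)

def ibi_update(U_i, g_i, g_target):
    """One iterative Boltzmann inversion step:
        U_{i+1}(r) = U_i(r) + kBT * ln[ g_i(r) / g_target(r) ]
    where g_i is the RDF measured in the current coarse-grained run."""
    U_next = np.array(U_i, dtype=float)
    ok = (g_i > 0) & (g_target > 0)
    U_next[ok] += KBT * np.log(g_i[ok] / g_target[ok])
    return U_next
```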
Charge Transport toolkit (VOTCA-CTP)
VOTCA-CTP is a module that performs molecular orbital overlap calculations and can evaluate the energetic disorder and electronic couplings needed to estimate charge transport properties.
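These quantities typically enter a hopping-rate expression; the sketch below evaluates the high-temperature Marcus rate from an electronic coupling and a site-energy difference. This is a generic illustration of the rate formula, not VOTCA's API, and all parameter values are assumed.

```python
import math

# Hedged sketch of the high-temperature Marcus hopping rate,
#   k = (J^2 / hbar) * sqrt(pi / (lam * kB * T)) * exp(-(dE + lam)^2 / (4 * lam * kB * T)),
# using the kinds of inputs (coupling J, site-energy difference dE) that a
# charge-transport workflow provides.  All values are illustrative.

HBAR = 6.582e-16   # reduced Planck constant [eV s]
KB = 8.617e-5      # Boltzmann constant [eV/K]

def marcus_rate(J, dE, lam, T=300.0):
    kbt = KB * T
    return (J**2 / HBAR) * math.sqrt(math.pi / (lam * kbt)) \
        * math.exp(-(dE + lam)**2 / (4 * lam * kbt))

# Example: 1 meV coupling, resonant sites, 0.2 eV reorganization energy
print(f"{marcus_rate(J=1e-3, dE=0.0, lam=0.2):.2e} hops per second")
```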
Excitation Transport toolkit (VOTCA-XTP)
VOTCA-XTP is an extension of VOTCA-CTP that allows the simulation of excitation transport and excited-state properties. To this end, it provides its own implementation of GW-BSE and a basic DFT implementation employing localized basis sets. Polarized QM/MM calculations for excited states are provided in the Thole framework. It also features an interface to the quantum chemistry package ORCA for large-scale production runs.
Release names
Major releases have names assigned to them:
1.1 SuperAnn
1.2 SuperDoris
1.3 SuperUzma
1.4 SuperKurt - on the occasion of Kurt Kremer's 60th birthday
1.5 SuperVictor - named after Victor Rühle, one of the original core developers
1.6 SuperPelagia
1.6.2 SuperGitta
See also
GROMACS
MARTINI
OpenMM
References
Molecular modelling software
Molecular dynamics software
Los Alamos National Laboratory
Max Planck Institute for Polymer Research | VOTCA | [
"Chemistry"
] | 586 | [
"Molecular dynamics software",
"Molecular modelling software",
"Computational chemistry software",
"Molecular modelling",
"Molecular dynamics"
] |
54,935,667 | https://en.wikipedia.org/wiki/X-ray%20photon%20correlation%20spectroscopy | X-ray photon correlation spectroscopy (XPCS), in physics and chemistry, is a technique that exploits a coherent X-ray synchrotron beam to measure the dynamics of a sample. By recording how a coherent speckle pattern fluctuates in time, one can measure a time correlation function, and thus the timescales of processes of interest (diffusion, relaxation, reorganization, etc.). XPCS is used to study the slow dynamics of various equilibrium and non-equilibrium processes occurring in condensed matter systems.
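The central observable is the normalized intensity autocorrelation function g2(q, τ) = ⟨I(t) I(t+τ)⟩ / ⟨I(t)⟩², evaluated per speckle (detector pixel). The sketch below computes g2 for a synthetic intensity trace; the correlation time and noise model are made-up placeholders for real detector frames.

```python
import numpy as np

# Hedged sketch: compute the XPCS autocorrelation
#   g2(tau) = <I(t) I(t + tau)> / <I(t)>^2
# for one speckle.  The AR(1) intensity trace below is a synthetic stand-in
# for real detector frames; its decay time (in frame units) is made up.

rng = np.random.default_rng(0)
n_frames, tau_true = 5000, 50.0
a = np.exp(-1.0 / tau_true)
I = np.empty(n_frames)
I[0] = 0.0
for t in range(1, n_frames):                 # correlated fluctuations
    I[t] = a * I[t - 1] + np.sqrt(1 - a**2) * rng.normal()
I = 1.0 + 0.3 * I                             # shift to a positive mean intensity

def g2(I, tau):
    return np.mean(I[:-tau] * I[tau:]) / np.mean(I)**2

for tau in (1, 10, 50, 200):
    print(f"g2({tau}) = {g2(I, tau):.3f}")    # decays toward 1 as tau grows
```

Fitting the measured decay of g2 (for example to an exponential in τ) then yields the relaxation time of the sample at each scattering vector q.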
Advantages
XPCS experiments have the advantage of providing information on the dynamical properties of materials (e.g. vitreous materials), while many other experimental techniques can only provide information about the static structure of the material. The technique is based on the generation of a speckle pattern by coherent light scattered from a material in which spatial inhomogeneities are present. A speckle pattern is a diffraction-limited structure factor and is typically observed when laser light is reflected from a rough surface, or from dust particles performing Brownian motion in air. The observation of speckle patterns with hard X-rays has only recently been demonstrated; it became possible with the development of new synchrotron radiation X-ray sources that can provide sufficient coherent flux.
aXPCS
A specific subgroup of these techniques is atomic-scale X-ray photon correlation spectroscopy (aXPCS).
References
Sources
P.-A. Lemieux, D.J. Durian Investigating non-Gaussian scattering processes by using nth-order intensity correlation functions Journal of the Optical Society of America 1999, 16(7), 1651–1664. doi: 10.1364/JOSAA.16.001651
Robert L. Leheny XPCS: Nanoscale motion and rheology Current Opinion in Colloid & Interface Science 2012, 17 (1), 3–12. doi: 10.1016/j.cocis.2011.11.002
Oleg G. Shpyrko X-ray photon correlation spectroscopy J. Synchrotron Radiation 2014, 21 (5), 1057–1064. doi: 10.1107/S1600577514018232
Sunil K. Sinha, Zhang Jiang, Laurence B. Lurio X-ray Photon Correlation Spectroscopy Studies of Surfaces and Thin Films Advanced Materials 2014, 26 (46), 7764–7785. doi: 10.1002/adma.201401094
Aurora Nogales, Andrei Fluerasu X Ray Photon Correlation Spectroscopy for the study of polymer dynamics European Polymer Journal 2016. doi: 10.1016/j.eurpolymj.2016.03.032
Synchrotron-related techniques
X-ray spectroscopy | X-ray photon correlation spectroscopy | [
"Physics",
"Chemistry"
] | 579 | [
"X-ray spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
54,939,754 | https://en.wikipedia.org/wiki/CLE%20peptide | CLE peptides (CLAVATA3/Embryo Surrounding Region-Related) are a group of peptides found in plants that are involved in cell signaling. Their production is controlled by the CLE genes. When a CLE peptide binds to a CLE peptide receptor on another cell, a cascade of events occurs that can lead to various physiological and developmental processes. This signaling pathway is conserved in diverse land plants.
Background
Plants and animals alike use small polypeptides for cell-to-cell communication. CLE (CLAVATA3/Embryo Surrounding Region-Related) peptides, a class of plant peptide hormones, are important not only for cell-to-cell signaling but also for long-distance communication. These two functions are especially important for plant cells, which are stationary and must coordinate cell expansion. In multicellular organisms, cell-to-cell communication is crucial for many growth processes. The mature forms of the CLE proteins are 12- or 13-amino-acid polypeptides derived from the conserved CLE domains. Additional CLE genes continue to be identified as research in this area progresses. CLE genes have been found not only in seed plants but also in lycophytes, bryophytes, and green algae.
Genes
Most research on CLE peptide signaling has been conducted in Arabidopsis, whose genome contains 32 members of the CLE gene family. CLV3, which belongs to the CLE family, is found within one or more tissues of Arabidopsis. All 32 members of the CLE family share two characteristics: they encode a small protein with a putative secretion signal at its N-terminus, and they contain a conserved CLE motif at or near the C-terminus. The 32 members of the CLE gene family originated from mutations of the original gene.
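As a hedged illustration of this shared architecture, one might screen predicted proteins for a CLE-like motif near the C-terminus, as sketched below. The regular expression is a deliberately simplified, hypothetical pattern (real CLE-motif definitions come from curated alignments), and the example sequence only imitates the mature CLV3 dodecapeptide.

```python
import re

# Hypothetical screen: flag short proteins whose C-terminal window contains a
# CLE-like 12-residue motif.  The pattern below is a simplified stand-in, not
# an authoritative CLE-motif definition.

CLE_LIKE = re.compile(r"R.{2}P.{6}HH")  # toy pattern loosely modeled on CLV3

def looks_like_cle(protein_seq, c_term_window=30, max_len=150):
    """True if the protein is short and its C-terminal window matches the motif."""
    if len(protein_seq) > max_len:
        return False
    return bool(CLE_LIKE.search(protein_seq[-c_term_window:]))

# Made-up sequence: signal-peptide-like start, spacer, CLV3-like C-terminal motif
print(looks_like_cle("MKTLLVLFLA" + "A" * 90 + "RTVPSGPDPLHH"))  # True
```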
Structures
CLE peptides are encoded by the CLE genes. These peptides vary in structure, with each structure performing a different job within the plant. The minimal length of functioning CLE peptides has been found to be 12 amino acids, with several critical residues. Two different peptide types are found within the plant: A-type and B-type. When A-type hormones are secreted, the plant slows the rate of root growth, whereas the secretion of B-type peptides affects the vascular growth of the plant. The secretion of A-type peptides speeds up the vascular development that is mediated by the B-type peptides, suggesting that the two types work together to regulate the growth of the plant. The specific peptides are:
A-type peptides
CLE 1/3/4
CLE 2
CLE 5/6
CLE 7
CLE 8
CLE 9
CLE 10
CLE 11
CLE 12
CLE 13
CLE 14
CLE 16
CLE 17
CLE 18
CLE 19
CLE 20
CLE 21
CLE 22
CLE 25
CLE 26
CLE 27
CLE 40
CLE 45
B-type peptides
CLE 41/44/TDIF
CLE 42
CLE 43
CLE 46
Signaling in the shoot apical meristem
Meristematic cells give rise to the various organs of the plant and keep the plant growing. There are two types of meristematic tissue: (1) apical meristem and (2) lateral meristem. The apical meristem is itself of two types: the shoot apical meristem (SAM) gives rise to organs such as leaves and flowers, while the root apical meristem (RAM) provides the meristematic cells for future root growth. SAM and RAM cells divide rapidly and are considered indeterminate, in that they do not possess any defined end status. In that sense, meristematic cells are frequently compared to stem cells in animals, which have an analogous behavior and function. Within plants, SAM cells play a major role in overall growth and development, because all the cells making up the major parts of the plant derive from the shoot apical meristem. There are three important areas within the SAM: the central zone, the peripheral zone, and the rib meristem. Each of these areas plays an important role in the production of new stem cells within the SAM. All SAMs are usually dome-shaped and have a layered structure described as the tunica and corpus. CLV3 plays an important role in regulating the production of stem cells within the central zone of the SAM, as does the stem-cell-promoting WUSCHEL (WUS) gene. The two genes together regulate stem cell production through a feedback loop in which WUS promotes CLV3 expression and CLV3 signaling in turn restricts WUS.
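The regulatory logic of this circuit can be illustrated with a toy negative-feedback model, sketched below. This two-variable ODE system and its parameters are purely illustrative, not a published model of the SAM.

```python
# Toy negative-feedback model of the WUS-CLV3 circuit: WUS (W) drives CLV3
# (C) expression, while CLV3 signaling represses WUS production.  The
# equations and parameters are illustrative only, not a published model.

def step(W, C, dt=0.01, alpha=1.0, beta=1.0, delta=0.5):
    dW = alpha / (1.0 + C) - delta * W   # CLV3 represses WUS production
    dC = beta * W - delta * C            # WUS promotes CLV3 production
    return W + dt * dW, C + dt * dC

W, C = 0.1, 0.0
for _ in range(5000):                    # forward-Euler integration to t = 50
    W, C = step(W, C)
print(f"steady state: W = {W:.3f}, C = {C:.3f}")
# The feedback settles both genes at a balanced level, mirroring how the
# CLV3-WUS loop maintains a stable stem cell population.
```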
Genes in other plants
CLE genes have been found in numerous monocots, dicots, and even mosses. Research has shown that some plants, such as rice, contain genes with multiple CLE domains. Various CLE-like genes have also been found in the genomes of plant-parasitic nematodes such as the beet, soybean and potato cyst nematodes.
References
Further reading
Cell signaling
Peptides
Plant hormones | CLE peptide | [
"Chemistry"
] | 1,076 | [
"Biomolecules by chemical classification",
"Peptides",
"Molecular biology"
] |