| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
46,870,556 | https://en.wikipedia.org/wiki/Valence%20and%20conduction%20bands | In solid-state physics, the valence band and conduction band are the bands closest to the Fermi level, and thus determine the electrical conductivity of the solid. In nonmetals, the valence band is the highest range of electron energies in which electrons are normally present at absolute zero temperature, while the conduction band is the lowest range of vacant electronic states. On a graph of the electronic band structure of a semiconducting material, the valence band is located below the Fermi level, while the conduction band is located above it.
The distinction between the valence and conduction bands is meaningless in metals, because conduction occurs in one or more partially filled bands that take on the properties of both the valence and conduction bands.
Band gap
In semiconductors and insulators the two bands are separated by a band gap, while in conductors the bands overlap. A band gap is an energy range in a solid where no electron states can exist due to the quantization of energy. Within the concept of bands, the energy gap between the valence band and the conduction band is the band gap. Electrical conductivity of non-metals is determined by the susceptibility of electrons to be excited from the valence band to the conduction band.
Electrical conductivity
Figure: semiconductor band structure. See electrical conduction and semiconductor for a more detailed description of band structure.
In solids, the ability of electrons to act as charge carriers depends on the availability of vacant electronic states. This allows the electrons to increase their energy (i.e., accelerate) when an electric field is applied. Similarly, holes (empty states) in the almost filled valence band also allow for conductivity.
As such, the electrical conductivity of a solid depends on how readily electrons can be promoted from the valence band to the conduction band. Hence, in the case of a semimetal with an overlap region, the electrical conductivity is high. If there is a small band gap (Eg), then the flow of electrons from the valence band to the conduction band is possible only if external energy (thermal or other) is supplied; materials with a small Eg are called semiconductors. If Eg is sufficiently large, then the flow of electrons from the valence band to the conduction band becomes negligible under normal conditions; these materials are called insulators.
There is some conductivity in semiconductors, however. This is due to thermal excitation—some of the electrons get enough energy to jump the band gap in one go. Once they are in the conduction band, they can conduct electricity, as can the hole they left behind in the valence band. The hole is an empty state that allows electrons in the valence band some degree of freedom.
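The strong dependence of carrier excitation on the band gap can be made concrete with a rough Boltzmann-factor estimate. The sketch below is illustrative only; the exp(-Eg/2kT) form and the band-gap values are standard textbook assumptions, not taken from the text above.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def excitation_factor(band_gap_ev: float, temperature_k: float = 300.0) -> float:
    """Boltzmann factor ~ exp(-Eg / 2kT) governing intrinsic carrier excitation."""
    return math.exp(-band_gap_ev / (2 * K_B * temperature_k))

# Illustrative band gaps (values assumed, not taken from the article)
for name, eg in [("semimetal (band overlap)", 0.0),
                 ("semiconductor (Si-like)", 1.1),
                 ("insulator (SiO2-like)", 5.0)]:
    print(f"{name:25s} Eg = {eg:4.1f} eV  ->  exp(-Eg/2kT) = {excitation_factor(eg):.2e}")
```

Even this crude estimate reproduces the qualitative picture above: a ~1 eV gap still leaves a small but useful population of thermally excited carriers, while a ~5 eV gap makes thermal excitation negligible under normal conditions.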
Band edge shifts of semiconductor nanoparticles
Size-dependent shifting of the conduction and/or valence band edges is a phenomenon studied in the field of semiconductor nanocrystals. The relevant size limit is the effective exciton Bohr radius of the material: when the nanocrystal radius falls below this limit, the exciton is confined and the conduction and/or valence band edges shift to higher energy levels, giving rise to discrete optical transitions. As a result of this edge shifting, the widths of the conduction and/or valence bands decrease. This size-dependent edge shifting can provide plenty of useful information regarding the size or concentration of the semiconductor nanoparticles or their band structures.
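A common way to quantify such confinement shifts is the effective-mass (Brus) approximation, which is not mentioned in the text above; the sketch below is therefore only an illustration under that assumption, using assumed CdSe-like material parameters.

```python
import math

HBAR = 1.0546e-34     # J*s
E_CHARGE = 1.602e-19  # C
M_E = 9.109e-31       # electron rest mass, kg
EPS0 = 8.854e-12      # vacuum permittivity, F/m

def brus_shift_ev(radius_nm, m_e_eff=0.13, m_h_eff=0.45, eps_r=10.6):
    """Approximate band-gap increase (eV) for a nanocrystal of the given radius,
    using the Brus effective-mass model (parameters are CdSe-like assumptions)."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) * (1/(m_e_eff*M_E) + 1/(m_h_eff*M_E))
    coulomb = 1.786 * E_CHARGE**2 / (4 * math.pi * eps_r * EPS0 * r)
    return (confinement - coulomb) / E_CHARGE

for radius in (1.5, 2.5, 5.0):
    print(f"R = {radius} nm  ->  band-edge shift ~ {brus_shift_ev(radius):+.2f} eV")
```

The shift grows rapidly as the radius shrinks below the exciton Bohr radius, which is the trend described above.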
See also
Electrical conduction for more information about conduction in solids, and another description of band structure.
Fermi sea
HOMO/LUMO
Semiconductor for a full explanation of the band structure of materials.
Valleytronics
References
Citations
General references
External links
Direct Band Gap Energy Calculator
Electronic band structures | Valence and conduction bands | [
"Physics",
"Chemistry",
"Materials_science"
] | 787 | [
"Electron",
"Electronic band structures",
"Condensed matter physics"
] |
46,871,755 | https://en.wikipedia.org/wiki/Hole%20drilling%20method | The hole drilling method is a method for measuring residual stresses, in a material. Residual stress occurs in a material in the absence of external loads. Residual stress interacts with the applied loading on the material to affect the overall strength, fatigue, and corrosion performance of the material. Residual stresses are measured through experiments. The hole drilling method is one of the most used methods for residual stress measurement.
The hole drilling method can measure macroscopic residual stresses near the material surface. The principle is based on drilling of a small hole into the material. When the material containing residual stress is removed the remaining material reaches a new equilibrium state. The new equilibrium state has associated deformations around the drilled hole. The deformations are related to the residual stress in the volume of material that was removed through drilling. The deformations around the hole are measured during the experiment using strain gauges or optical methods. The original residual stress in the material is calculated from the measured deformations. The hole drilling method is popular for its simplicity and it is suitable for a wide range of applications and materials.
Key advantages of the hole drilling method include rapid preparation, versatility of the technique for different materials, and reliability. Conversely, the hole drilling method is limited in depth of analysis and specimen geometry, and is at least semi-destructive.
History and development
The idea of measuring residual stress by drilling a hole and registering the change of the hole diameter was first proposed by Mathar in 1934. In 1966 Rendler and Vigness introduced a systematic and repeatable hole-drilling procedure for measuring residual stress. In the following period the method was further developed in terms of drilling techniques, measurement of the relieved deformations, and the residual stress evaluation itself. A very important milestone was the use of the finite element method to compute the calibration coefficients and to evaluate the residual stresses from the measured relieved deformations (Schajer, 1981). This allowed, in particular, the evaluation of residual stresses that are not constant with depth. It also opened further possibilities for the method, e.g., for inhomogeneous materials, coatings, etc. The measurement and evaluation procedure is standardised in ASTM E837 of the American Society for Testing and Materials, which has also contributed to the popularity of the method. Hole drilling is currently one of the most widespread methods of measuring residual stress. Modern computational methods are used for the evaluation. The method is being developed especially in terms of drilling techniques and the possibilities of measuring the deformations. Some laboratories, such as the company MELIAD, offer residual stress measurement services and the sale of measurement equipment according to ASTM E837. Today this method is integrated within several large companies in the energy and aeronautics sectors.
Fundamental principles
The hole drilling method of measuring the residual stresses is based on drilling a small hole in the material surface. This relieves the residual stresses and the associated deformations around the hole. The relieved deformations are measured in at least three independent directions around the hole. The original residual stress in the material is then evaluated based on the measured deformations and using the so-called calibration coefficients. The hole is made by a cylindrical end mill or by alternative techniques. Deformations are most often measured using strain gauges (strain gauge rosettes).
The method measures the biaxial stress in the surface plane. It is often referred to as semi-destructive because the material damage is small. The method is relatively simple and fast, and the measuring device is usually portable. Disadvantages include the (semi-)destructive character of the technique, limited resolution, and lower accuracy of the evaluation in the case of nonuniform stresses or inhomogeneous material properties.
The so-called calibration coefficients play an important role in the residual stress evaluation. They are used to convert the relieved deformations to the original residual stress in the material. The coefficients can be theoretically derived for a through hole and a homogeneous stress. Then they depend only on the material properties, hole radius, and the distance from the hole. In the vast majority of practical applications, however, the preconditions for using the theoretically derived coefficients are not met, e.g., the integral deformation over the tensometer area is not included, the hole is blind instead of through, etc. Therefore, coefficients taking into account the practical aspects of measuring are used. They are mostly determined by a numerical computation using the finite element method. They express the relation between the relieved deformations and the residual stresses, taking into account the hole size, hole depth, shape of the tensometric rosette, material, and other parameters.
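As a minimal illustration of how calibration coefficients convert relieved strains into stresses, the sketch below applies the classical uniform-stress relations for a three-gauge rosette (gauges at 0°, 45° and 90°). The calibration constants and strain values are assumed for illustration only; an actual evaluation would use coefficients per ASTM E837 or a finite element calibration.

```python
import math

def principal_stresses(eps1, eps2, eps3, A, B):
    """Uniform residual stress from three relieved strains (0/45/90 deg rosette).

    A and B are calibration constants (units: strain per unit stress); they
    depend on the material, rosette geometry and hole size, and in practice
    are taken from ASTM E837 tables or finite element calibration.
    """
    mean_term = (eps1 + eps3) / (4.0 * A)
    dev_term = math.sqrt((eps3 - eps1) ** 2 + (eps1 + eps3 - 2.0 * eps2) ** 2) / (4.0 * B)
    s1, s2 = mean_term + dev_term, mean_term - dev_term
    # principal direction; the exact sign convention depends on rosette numbering
    beta = 0.5 * math.degrees(math.atan2(eps1 - 2.0 * eps2 + eps3, eps3 - eps1))
    return max(s1, s2), min(s1, s2), beta

# Illustrative relieved strains (dimensionless) and assumed constants in 1/MPa
smax, smin, beta = principal_stresses(-120e-6, -90e-6, -40e-6, A=-0.12e-6, B=-0.30e-6)
print(f"sigma_max ~ {smax:.0f} MPa, sigma_min ~ {smin:.0f} MPa, beta ~ {beta:.0f} deg")
```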
The evaluation of the residual stresses depends on the method used to calculate them from the measured relieved deformations. All the evaluation methods are built on the same basic principles. They differ in the preconditions for use, the accuracy requirements on the calibration coefficients, or the possibility to take additional influences into account. In general, the hole is made in successive steps and the relieved deformations are measured after each step.
Evaluation methods for the residual stress
Several methods have been developed for the evaluation of residual stresses from the relieved deformations. The fundamental method is the equivalent uniform stress method. The coefficients for a particular hole diameter, rosette type, and hole depth are published in the standard ASTM E837. The method is suitable for a stress that is constant or varies only slightly with depth. It can be used as a guideline for non-constant stresses; however, the method may then give highly distorted results.
The most general method is the integral method. It accounts for the influence of the stress relieved at a given depth, which changes with the total depth of the hole. The calibration coefficients are expressed as matrices. The evaluation leads to a system of equations whose solution is a vector of residual stresses at particular depths. A numerical simulation is required to obtain the calibration coefficients. The integral method and its coefficients are defined in the standard ASTM E837.
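A minimal sketch of the linear-algebra core of the integral method is shown below: each measured strain is modeled as a weighted sum of the stresses relieved in the increments drilled so far, giving a lower-triangular linear system. The calibration matrix and strain values are illustrative assumptions, not ASTM E837 values.

```python
import numpy as np

# Illustrative calibration matrix: entry [i, j] relates the stress in depth
# increment j to the strain measured after drilling increment i (j <= i).
# Real matrices come from finite element calibration (e.g. per ASTM E837).
A = np.array([
    [-0.020,  0.000,  0.000],
    [-0.028, -0.016,  0.000],
    [-0.032, -0.024, -0.012],
])  # microstrain per MPa, assumed values

measured_strain = np.array([-4.0, -8.2, -11.5])  # microstrain after each increment

# Solve A @ sigma = strain for the stress in each depth increment
sigma = np.linalg.solve(A, measured_strain)
for k, s in enumerate(sigma, start=1):
    print(f"increment {k}: residual stress ~ {s:.0f} MPa")
```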
There are other evaluation methods that place lower demands on the calibration coefficients and on the evaluation process itself. These include the average stress method and the incremental strain method. Both methods are based on the assumption that the change in deformation is caused solely by the stress relieved in the drilled increment. They are suitable only if the changes in the stress profile are small. Both methods give numerically correct results for uniform stresses.
The power series method and the spline method are other modifications of the integral method. They both take into account both the distance of the stress effect from the surface and the total hole depth. In contrast to the integral method, the resulting stress values are approximated by a polynomial or a spline. The power series method is very stable but cannot capture rapidly changing stress values. The spline method is more stable and less susceptible to errors than the integral method, and it can capture the actual stress values better than the power series method. Its main disadvantage is the complicated mathematical calculation needed to solve a system of nonlinear equations.
Using the hole drilling method
The hole drilling method finds its use in many industrial areas dealing with material production and processing. The most important technologies include heat treatment, mechanical and thermal surface finishing, machining, welding, coating, or manufacturing composites. Despite its relative universality, the method requires these fundamental preconditions to be met: the possibility to drill the material, the possibility to apply the tensometric rosettes (or other means of measuring the deformations), and the knowledge of the material properties. Additional conditions can affect the accuracy and repeatability of the measuring. These include especially the size and shape of the sample, distance of the measured area from the edges, homogeneity of the material, presence of residual stress gradients, etc. Hole drilling can be performed in the laboratory or as a field measurement, making it ideal for measuring actual stresses in large components that cannot be moved.
See also
Residual stress
Deep hole drilling
Friction drilling
External links
Measuring residual stresses by the hole drilling method, University of West Bohemia, New Technologies - Research Centre, department Thermomechanics of Technological Processes
Laboratory and Field Measurements of Residual Stress by Hole Drilling
References
Mechanical engineering | Hole drilling method | [
"Physics",
"Engineering"
] | 1,655 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
32,430,822 | https://en.wikipedia.org/wiki/Bioorthogonal%20chemistry | The term bioorthogonal chemistry refers to any chemical reaction that can occur inside of living systems without interfering with native biochemical processes. The term was coined by Carolyn R. Bertozzi in 2003. Since its introduction, the concept of the bioorthogonal reaction has enabled the study of biomolecules such as glycans, proteins, and lipids in real time in living systems without cellular toxicity. A number of chemical ligation strategies have been developed that fulfill the requirements of bioorthogonality, including the 1,3-dipolar cycloaddition between azides and cyclooctynes (also termed copper-free click chemistry), between nitrones and cyclooctynes, oxime/hydrazone formation from aldehydes and ketones, the tetrazine ligation, the isocyanide-based click reaction, and most recently, the quadricyclane ligation.
The use of bioorthogonal chemistry typically proceeds in two steps. First, a cellular substrate is modified with a bioorthogonal functional group (chemical reporter) and introduced to the cell; substrates include metabolites, enzyme inhibitors, etc. The chemical reporter must not alter the structure of the substrate dramatically to avoid affecting its bioactivity. Secondly, a probe containing the complementary functional group is introduced to react and label the substrate.
Although effective bioorthogonal reactions such as copper-free click chemistry have been developed, development of new reactions continues to generate orthogonal methods for labeling to allow multiple methods of labeling to be used in the same biosystems. Carolyn R. Bertozzi was awarded the Nobel Prize in Chemistry in 2022 for her development of click chemistry and bioorthogonal chemistry.
Etymology
The word bioorthogonal comes from Greek bio- "living" and orthogōnios "right-angled"; literally, a reaction that runs perpendicular to a living system and therefore does not disturb it.
Requirements for bioorthogonality
To be considered bioorthogonal, a reaction must fulfill a number of requirements:
Selectivity: The reaction must be selective between endogenous functional groups to avoid side reactions with biological compounds
Biological inertness: Reactive partners and resulting linkage should not possess any mode of reactivity capable of disrupting the native chemical functionality of the organism under study.
Chemical inertness: The covalent link should be strong and inert to biological reactions.
Kinetics: The reaction must be rapid so that covalent ligation is achieved prior to probe metabolism and clearance. The reaction must be fast, on the time scale of cellular processes (minutes) to prevent competition in reactions which may diminish the small signals of less abundant species. Rapid reactions also offer a fast response, necessary in order to accurately track dynamic processes.
Reaction biocompatibility: Reactions have to be non-toxic and must function in biological conditions taking into account pH, aqueous environments, and temperature. Pharmacokinetics are a growing concern as bioorthogonal chemistry expands to live animal models.
Accessible engineering: The chemical reporter must be capable of incorporation into biomolecules via some form of metabolic or protein engineering. Optimally, one of the functional groups is also very small so that it does not disturb native behavior.
Staudinger ligation
The Staudinger ligation is a reaction developed by the Bertozzi group in 2000 that is based on the classic Staudinger reaction of azides with triarylphosphines. It launched the field of bioorthogonal chemistry as the first reaction with completely abiotic functional groups although it is no longer as widely used. The Staudinger ligation has been used in both live cells and live mice.
Bioorthogonality
The azide can act as a soft electrophile that prefers soft nucleophiles such as phosphines. This is in contrast to most biological nucleophiles which are typically hard nucleophiles. The reaction proceeds selectively under water-tolerant conditions to produce a stable product.
Phosphines are completely absent from living systems and do not reduce disulfide bonds despite mild reduction potential. Azides had been shown to be biocompatible in FDA-approved drugs such as azidothymidine and through other uses as cross linkers. Additionally, their small size allows them to be easily incorporated into biomolecules through cellular metabolic pathways.
Mechanism
Classic Staudinger reaction
The nucleophilic phosphine attacks the azide at the electrophilic terminal nitrogen. Through a four-membered transition state, N2 is lost to form an aza-ylide. The unstable ylide is hydrolyzed to form phosphine oxide and a primary amine. However, this reaction is not immediately bioorthogonal because hydrolysis breaks the covalent bond in the aza-ylide.
Staudinger ligation
The reaction was modified to include an ester group ortho to the phosphorus atom on one of the aryl rings to direct the aza-ylide through a new path of reactivity in order to outcompete immediate hydrolysis by positioning the ester to increase local concentration. The initial nucleophilic attack on the azide is the rate-limiting step. The ylide reacts with the electrophilic ester trap through intramolecular cyclization to form a five-membered ring. This ring undergoes hydrolysis to form a stable amide bond.
Limitations
The phosphine reagents slowly undergo air oxidation in living systems. Additionally, it is likely that they are metabolized in vitro by cytochrome P450 enzymes.
The kinetics of the reactions are slow with second order rate constants around 0.0020 M−1•s−1. Attempts to increase nucleophilic attack rates by adding electron-donating groups to the phosphines improved kinetics, but also increased the rate of air oxidation.
The poor kinetics require that high concentrations of the phosphine be used which leads to problems with high background signal in imaging applications. Attempts have been made to combat the problem of high background through the development of a fluorogenic phosphine reagents based on fluorescein and luciferin, but the intrinsic kinetics remain a limitation.
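The practical consequence of a small rate constant can be estimated from the pseudo-first-order half-life t1/2 = ln 2 / (k·[probe]) when the probe is in excess. The sketch below uses an assumed probe concentration and, for comparison, rate constants quoted elsewhere in this article; the SPAAC value is an assumed order-of-magnitude figure.

```python
import math

def labeling_half_life_min(k2_per_M_s: float, probe_conc_M: float) -> float:
    """Pseudo-first-order half-life (minutes) of the tagged substrate with probe in excess."""
    return math.log(2) / (k2_per_M_s * probe_conc_M) / 60.0

probe = 250e-6  # assumed probe concentration: 250 micromolar
for name, k2 in [("Staudinger ligation, k ~ 0.002 /M/s (from text)", 0.0020),
                 ("typical SPAAC cyclooctyne, k ~ 0.1 /M/s (assumed)", 0.1),
                 ("tetrazine/trans-cyclooctene, k ~ 2000 /M/s (from text)", 2000.0)]:
    print(f"{name:48s} t1/2 ~ {labeling_half_life_min(k2, probe):.3g} min")
```

At these assumed conditions the Staudinger ligation needs many hours to days to reach half-labeling, which is why high reagent concentrations (and the associated background) are required, while the fastest ligations finish in seconds.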
Copper-free click chemistry
Copper-free click chemistry is a bioorthogonal reaction first developed by Carolyn Bertozzi as an activated variant of an azide alkyne Huisgen cycloaddition, based on the work by Karl Barry Sharpless et al. Unlike CuAAC, Cu-free click chemistry has been modified to be bioorthogonal by eliminating a cytotoxic copper catalyst, allowing reaction to proceed quickly and without live cell toxicity. Instead of copper, the reaction is a strain-promoted alkyne-azide cycloaddition (SPAAC). It was developed as a faster alternative to the Staudinger ligation, with the first generations reacting over sixty times faster. The bioorthogonality of the reaction has allowed the Cu-free click reaction to be applied within cultured cells, live zebrafish, and mice.
Copper toxicity
The classic copper-catalyzed azide-alkyne cycloaddition has been an extremely fast and effective click reaction for bioconjugation, but it is not suitable for use in live cells due to the toxicity of Cu(I) ions. Toxicity is due to oxidative damage from reactive oxygen species formed by the copper catalysts. Copper complexes have also been found to induce changes in cellular metabolism and are taken up by cells.
There has been some development of ligands to prevent biomolecule damage and facilitate removal in in vitro applications. However, it has been found that different ligand environments of complexes can still affect metabolism and uptake, introducing an unwelcome perturbation in cellular function.
Bioorthogonality
The azide group is particularly bioorthogonal because it is extremely small (favorable for cell permeability and avoids perturbations), metabolically stable, and does not naturally exist in cells and thus has no competing biological side reactions. Although azides are not the most reactive 1,3-dipole available for reaction, they are preferred for their relative lack of side reactions and stability in typical synthetic conditions. The alkyne is not as small, but it still has the stability and orthogonality necessary for in vivo labeling. Cyclooctynes are traditionally the most common cycloalkyne for labeling studies, as they are the smallest stable alkyne ring.
Mechanism
The reaction proceeds as a standard 1,3-dipolar cycloaddition, a type of asynchronous, concerted pericyclic shift. The ambivalent nature of the 1,3-dipole makes it impossible to identify a single electrophilic or nucleophilic center on the azide, so the direction of the cyclic electron flow is not meaningful. However, computation has shown that the electron distribution amongst the nitrogens causes the innermost nitrogen atom to bear the greatest negative charge.
Regioselectivity
Although the reaction produces a regioisomeric mixture of triazoles, the lack of regioselectivity is not a major concern for most current applications. Applications with stricter regiospecificity and weaker bioorthogonality requirements are better served by the copper-catalyzed Huisgen cycloaddition, especially given the synthetic difficulty (compared to the addition of a terminal alkyne) of synthesizing a strained cyclooctyne.
Development of cyclooctynes
OCT was the first cyclooctyne developed for Cu-free click chemistry. While linear alkynes are unreactive at physiological temperatures, OCT was able to react readily with azides under biological conditions while showing no toxicity. However, it was poorly water-soluble, and the kinetics were barely improved over the Staudinger ligation. ALO (aryl-less octyne) was developed to improve water solubility, but it still had poor kinetics.
Monofluorinated (MOFO) and difluorinated (DIFO) cyclooctynes were created to increase the rate through the addition of electron-withdrawing fluorine substituents at the propargylic position. Fluorine is a good electron-withdrawing group in terms of synthetic accessibility and biological inertness. In particular, it cannot form an electrophilic Michael acceptor that may side-react with biological nucleophiles.
DIBO (dibenzocyclooctyne) was developed as a fusion to two aryl rings, resulting in very high strain and a decrease in distortion energies. It was proposed that biaryl substitution increases ring strain and provides conjugation with the alkyne to improve reactivity. Although calculations have predicted that mono-aryl substitution would provide an optimal balance between steric clash (with azide molecule) and strain, monoarylated products have been shown to be unstable.
BARAC (biarylazacyclooctynone) followed with the addition of an amide bond which adds an sp2-like center to increase rate by distortion. Amide resonance contributes additional strain without creating additional unsaturation which would lead to an unstable molecule. Additionally, the addition of a heteroatom into the cyclooctyne ring improves both solubility and pharmacokinetics of the molecule. BARAC has sufficient rate (and sensitivity) to the extent that washing away excess probe is unnecessary to reduce background. This makes it extremely useful in situations where washing is impossible as in real-time imaging or whole animal imaging. Although BARAC is extremely useful, its low stability requires that it must be stored at 0 °C, protected from light and oxygen.
Further variations on BARAC to produce DIBAC/ADIBO were made to add distal ring strain and reduce sterics around the alkyne, further increasing reactivity. Keto-DIBO, in which the hydroxyl group has been converted to a ketone, has a three-fold increase in rate due to a change in ring conformation. Attempts to make a difluorobenzocyclooctyne (DIFBO) were unsuccessful due to its instability.
Problems with DIFO with in vivo mouse studies illustrate the difficulty of producing bioorthogonal reactions. Although DIFO was extremely reactive in the labeling of cells, it performed poorly in mouse studies due to binding with serum albumin. Hydrophobicity of the cyclooctyne promotes sequestration by membranes and serum proteins, reducing bioavailable concentrations. In response, DIMAC (dimethoxyazacyclooctyne) was developed to increase water solubility, polarity, and pharmacokinetics, although efforts in bioorthogonal labeling of mouse models is still in development.
Reactivity
Computational efforts have been vital in explaining the thermodynamics and kinetics of these cycloaddition reactions which has played a vital role in continuing to improve the reaction. There are two methods for activating alkynes without sacrificing stability: decrease transition state energy or decrease reactant stability.
Decreasing reactant stability: Houk has proposed that differences in the energy (Ed‡) required to distort the azide and alkyne into the transition state geometries control the barrier heights for the reaction. The activation energy (E‡) is the sum of destabilizing distortions and stabilizing interactions (Ei‡). The most significant distortion is in the azide functional group, with a lesser contribution from alkyne distortion. However, it is only the cyclooctyne that can be easily modified for higher reactivity. Calculated reaction barriers for phenyl azide with acetylene (16.2 kcal/mol) versus cyclooctyne (8.0 kcal/mol) correspond to a predicted rate increase of 10^6. The cyclooctyne requires less distortion energy (1.4 kcal/mol versus 4.6 kcal/mol), resulting in a lower activation energy despite a smaller interaction energy.
Decreasing transition state energy: Electron withdrawing groups such as fluorine increase rate by decreasing LUMO energy and the HOMO-LUMO gap. This leads to a greater charge transfer from the azide to the fluorinated cyclooctyne in the transition state, increasing interaction energy (lower negative value) and overall activation energy. The lowering of the LUMO is the result of hyperconjugation between alkyne π donor orbitals and CF σ* acceptors. These interactions provide stabilization primarily in the transition state as a result of increased donor/acceptor abilities of the bonds as they distort. NBO calculations have shown that transition state distortion increases the interaction energy by 2.8 kcal/mol.
The hyperconjugation between out-of-plane π bonds is greater because the in-plane π bonds are poorly aligned. However, transition state bending allows the in-plane π bonds to have a more antiperiplanar arrangement that facilitates interaction. Additional hyperconjugative interaction energy stabilization is achieved through an increase in the electronic population of the σ* due to the forming CN bond. Negative hyperconjugation with the σ* CF bonds enhances this stabilizing interaction.
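The roughly 10^6 rate increase quoted above for cyclooctyne versus acetylene follows directly from the exponential dependence of the rate on the activation barrier. A quick check, assuming a temperature of 298 K and equal pre-exponential factors:

```python
import math

R_KCAL = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15          # assumed temperature, K

barrier_acetylene = 16.2   # kcal/mol (barrier quoted in the text)
barrier_cyclooctyne = 8.0  # kcal/mol (barrier quoted in the text)

rate_increase = math.exp((barrier_acetylene - barrier_cyclooctyne) / (R_KCAL * T))
print(f"predicted rate increase ~ {rate_increase:.1e}")   # ~1e6, consistent with the text
```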
Regioselectivity
Although regioselectivity is not a great issue in the current imaging applications of copper-free click chemistry, it is an issue that prevents future applications in fields such as drug design or peptidomimetics.
Currently most cyclooctynes react to form regioisomeric mixtures. Computational analysis has found that while gas-phase regioselectivity is calculated to favor 1,5 addition over 1,4 addition by up to 2.9 kcal/mol in activation energy, solvation corrections result in the same energy barriers for both regioisomers. While the 1,4 isomer in the cycloaddition of DIFO is disfavored by its larger dipole moment, solvation stabilizes it more strongly than the 1,5 isomer, eroding regioselectivity.
Symmetrical cyclooctynes such as BCN (bicyclo[6.1.0]nonyne) form a single regioisomer upon cycloaddition and may serve to address this problem in the future.
Applications
The most widespread application of copper-free click chemistry is in biological imaging in live cells or animals using an azide-tagged biomolecule and a cyclooctyne bearing an imaging agent.
Fluorescent keto and oxime variants of DIBO are used in fluoro-switch click reactions in which the fluorescence of the cyclooctyne is quenched by the triazole that forms in the reaction. On the other hand, coumarin-conjugated cyclooctynes such as coumBARAC have been developed such that the alkyne suppresses fluorescence while triazole formation increases the fluorescence quantum yield by ten-fold.
Spatial and temporal control of substrate labeling has been investigated using photoactivatable cyclooctynes. This allows equilibration of the alkyne prior to reaction in order to reduce artifacts as a result of concentration gradients. Masked cyclooctynes are unable to react with azides in the dark but become reactive alkynes upon irradiation with light.
Copper-free click chemistry is being explored for use in synthesizing PET imaging agents which must be made quickly with high purity and yield in order to minimize isotopic decay before the compounds can be administered. Both the high rate constants and the bioorthogonality of SPAAC are amenable to PET chemistry.
Other bioorthogonal reactions
Nitrone dipole cycloaddition
Copper-free click chemistry has been adapted to use nitrones as the 1,3-dipole rather than azides and has been used in the modification of peptides.
This cycloaddition between a nitrone and a cyclooctyne forms N-alkylated isoxazolines. The reaction rate is enhanced by water and is extremely fast with second order rate constants ranging from 12 to 32 M−1•s−1, depending on the substitution of the nitrone. Although the reaction is extremely fast, it faces problems in incorporating the nitrone into biomolecules through metabolic labeling. Labeling has only been achieved through post-translational peptide modification.
Norbornene cycloaddition
1,3 dipolar cycloadditions have been developed as a bioorthogonal reaction using a nitrile oxide as a 1,3-dipole and a norbornene as a dipolarophile. Its primary use has been in labeling DNA and RNA in automated oligonucleotide synthesizers, and polymer crosslinking in the presence of living cells.
Norbornenes were selected as dipolarophiles due to their balance between strain-promoted reactivity and stability. The drawbacks of this reaction include the cross-reactivity of the nitrile oxide due to strong electrophilicity and slow reaction kinetics.
Oxanorbornadiene cycloaddition
The oxanorbornadiene cycloaddition is a 1,3-dipolar cycloaddition followed by a retro-Diels Alder reaction to generate a triazole-linked conjugate with the elimination of a furan molecule. Preliminary work has established its usefulness in peptide labeling experiments, and it has also been used in the generation of SPECT imaging compounds. More recently, the use of an oxanorbornadiene was described in a catalyst-free room temperature "iClick" reaction, in which a model amino acid is linked to the metal moiety, in a novel approach to bioorthogonal reactions.
Ring strain and electron deficiency in the oxanorbornadiene increase reactivity towards the cycloaddition rate-limiting step. The retro-Diels Alder reaction occurs quickly afterwards to form the stable 1,2,3 triazole. Problems include poor tolerance for substituents which may change electronics of the oxanorbornadiene and low rates (second order rate constants on the order of 10−4).
Tetrazine ligation
The tetrazine ligation is the reaction of a trans-cyclooctene and an s-tetrazine in an inverse-demand Diels Alder reaction followed by a retro-Diels Alder reaction to eliminate nitrogen gas. The reaction is extremely rapid with a second order rate constant of 2000 M−1–s−1 (in 9:1 methanol/water) allowing modifications of biomolecules at extremely low concentrations.
Based on computational work by Bach, the strain energy for Z-cyclooctenes is 7.0 kcal/mol compared to 12.4 kcal/mol for cyclooctane due to a loss of two transannular interactions. E-cyclooctene has a highly twisted double bond resulting in a strain energy of 17.9 kcal/mol. As such, the highly strained trans-cyclooctene is used as a reactive dienophile. The diene is a 3,6-diaryl-s-tetrazine which has been substituted in order to resist immediate reaction with water. The reaction proceeds through an initial cycloaddition followed by a reverse Diels Alder to eliminate N2 and prevent reversibility of the reaction.
Not only is the reaction tolerant of water, but it has been found that the rate increases in aqueous media. Reactions have also been performed using norbornenes as dienophiles at second order rates on the order of 1 M−1•s−1 in aqueous media. The reaction has been applied in labeling live cells and polymer coupling.
[4+1] Cycloaddition
This isocyanide click reaction is a [4+1] cycloaddition followed by a retro-Diels Alder elimination of N2.
The reaction proceeds with an initial [4+1] cycloaddition followed by a reversion to eliminate a thermodynamic sink and prevent reversibility. This product is stable if a tertiary amine or isocyanopropanoate is used. If a secondary or primary isocyanide is used, the product will form an imine which is quickly hydrolyzed.
Isocyanide is a favored chemical reporter due to its small size, stability, non-toxicity, and absence in mammalian systems. However, the reaction is slow, with second order rate constants on the order of 10−2 M−1•s−1.
Tetrazole photoclick chemistry
Photoclick chemistry utilizes a photoinduced cycloelimination to release N2. This generates a short-lived 1,3 nitrile imine intermediate via the loss of nitrogen gas, which undergoes a 1,3-dipolar cycloaddition with an alkene to generate pyrazoline cycloadducts.
Photoinduction takes place with a brief exposure to light (wavelength is tetrazole-dependent) to minimize photodamage to cells. The reaction is enhanced in aqueous conditions and generates a single regioisomer.
The transient nitrile imine is highly reactive in 1,3-dipolar cycloadditions due to a bent structure which reduces distortion energy. Substitution of the phenyl rings on the 1,3-nitrile imine with electron-donating groups raises the HOMO energy and increases the rate of reaction.
Advantages of this approach include the ability to spatially or temporally control reaction and the ability to incorporate both alkenes and tetrazoles into biomolecules using simple biological methods such as genetic encoding. Additionally, the tetrazole can be designed to be fluorogenic in order to monitor progress of the reaction.
Quadricyclane ligation
The quadricyclane ligation utilizes a highly strained quadricyclane to undergo [2+2+2] cycloaddition with π systems.
Quadricyclane is abiotic, unreactive with biomolecules (due to complete saturation), relatively small, and highly strained (~80 kcal/mol). However, it is highly stable at room temperature and in aqueous conditions at physiological pH. It is selectively able to react with electron-poor π systems but not simple alkenes, alkynes, or cyclooctynes.
Bis(dithiobenzil)nickel(II) was chosen as a reaction partner out of a candidate screen based on reactivity. To prevent light-induced reversion to norbornadiene, diethyldithiocarbamate is added to chelate the nickel in the product.
These reactions are enhanced by aqueous conditions with a second order rate constant of 0.25 M−1•s−1. Of particular interest is that it has been proven to be bioorthogonal to both oxime formation and copper-free click chemistry.
Uses
Bioorthogonal chemistry is an attractive tool for pretargeting experiments in nuclear imaging and radiotherapy.
References
Biochemical reactions
Chemical biology
2003 neologisms | Bioorthogonal chemistry | [
"Chemistry",
"Biology"
] | 5,308 | [
"Biochemistry",
"Chemical biology",
"nan",
"Biochemical reactions"
] |
32,437,920 | https://en.wikipedia.org/wiki/Triple%20helix | In the fields of geometry and biochemistry, a triple helix (: triple helices) is a set of three congruent geometrical helices with the same axis, differing by a translation along the axis. This means that each of the helices keeps the same distance from the central axis. As with a single helix, a triple helix may be characterized by its pitch, diameter, and handedness. Examples of triple helices include triplex DNA, triplex RNA, the collagen helix, and collagen-like proteins.
Structure
A triple helix is named such because it is made up of three separate helices. Each of these helices shares the same axis, but they do not take up the same space because each helix is translated angularly around the axis. Generally, the identity of a triple helix depends on the type of helices that make it up. For example: a triple helix made of three strands of collagen protein is a collagen triple helix, and a triple helix made of three strands of DNA is a DNA triple helix.
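Geometrically, the three congruent helices share one axis and differ only by a translation along it; for three strands, a shift of one third of the pitch is equivalent to a 120° angular offset. A small sketch with arbitrary illustrative parameters:

```python
import math

def triple_helix_points(radius=1.0, pitch=2.0, turns=3, points_per_turn=60, right_handed=True):
    """Return three lists of (x, y, z) points for three congruent coaxial helices,
    each translated by pitch/3 along the shared axis (equivalently offset by 120 deg)."""
    handedness = 1.0 if right_handed else -1.0
    strands = []
    for strand in range(3):
        z_shift = strand * pitch / 3.0
        pts = []
        for i in range(turns * points_per_turn + 1):
            t = i / points_per_turn            # number of turns completed
            angle = handedness * 2.0 * math.pi * t
            pts.append((radius * math.cos(angle), radius * math.sin(angle), pitch * t + z_shift))
        strands.append(pts)
    return strands

strands = triple_helix_points()
print(len(strands), "strands of", len(strands[0]), "points each")
```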
As with other types of helices, triple helices have handedness: right-handed or left-handed. A right-handed helix moves around its axis in a clockwise direction from beginning to end. A left-handed helix is the right-handed helix's mirror image, and it moves around the axis in a counterclockwise direction from beginning to end. The beginning and end of a helical molecule are defined based on certain markers in the molecule that do not change easily. For example: the beginning of a helical protein is its N terminus, and the beginning of a single strand of DNA is its 5' end.
The collagen triple helix is made of three collagen peptides, each of which forms its own left-handed polyproline helix. When the three chains combine, the triple helix adopts a right-handed orientation. The collagen peptide is composed of repeats of Gly-X-Y, with the second residue (X) usually being Pro and the third (Y) being hydroxyproline.
A DNA triple helix is made up of three separate DNA strands, each oriented with the sugar/phosphate backbone on the outside of the helix and the bases on the inside of the helix. The bases are the part of the molecule closest to the triple helix's axis, and the backbone is the part of the molecule farthest away from the axis. The third strand occupies the major groove of relatively normal duplex DNA. The bases in triplex DNA are arranged to match up according to a Hoogsteen base pairing scheme. Similarly, RNA triple helices are formed as a result of a single stranded RNA forming hydrogen bonds with an RNA duplex; the duplex consists of Watson-Crick base pairing while the third strand binds via Hoogsteen base pairing.
Stabilizing factors
The collagen triple helix has several characteristics that increase its stability. When proline is incorporated into the Y position of the Gly-X-Y sequence, it is post-translationally modified to hydroxyproline. The hydroxyproline can enter into favorable interactions with water, which stabilizes the triple helix because the Y residues are solvent-accessible in the triple helix structure. The individual helices are also held together by an extensive network of amide-amide hydrogen bonds formed between the strands, each of which contributes approximately -2 kcal/mol to the overall free energy of the triple helix. The formation of the superhelix not only protects the critical glycine residues on the interior of the helix, but also protects the overall protein from proteolysis.
Triple helix DNA and RNA are stabilized by many of the same forces that stabilize double-stranded DNA helices. With nucleotide bases oriented to the inside of the helix, closer to its axis, bases engage in hydrogen bonding with other bases. The bonded bases in the center exclude water, so the hydrophobic effect is particularly important in the stabilization of DNA triple helices.
Biological role
Proteins
Members of the collagen superfamily are major contributors to the extracellular matrix. The triple helical structure provides strength and stability to collagen fibers by providing great resistance to tensile stress. The rigidity of the collagen fibers is an important factor that can withstand most mechanical stress, making it an ideal protein for macromolecular transport and overall structural support throughout the body.
DNA
There are some oligonucleotide sequences, called triplex-forming oligonucleotides (TFOs), that can bind to form a triplex with a longer molecule of double-stranded DNA; TFOs can inactivate a gene or help to induce mutations. TFOs can only bind to certain sites in a larger molecule, so researchers must first determine whether a TFO can bind to the gene of interest. Twisted intercalating nucleic acid is sometimes used to improve this process. Mapping genome-wide TFO-TTS pairs by sequencing with an oligo library is a useful way to study triplex-forming DNA across the whole genome.
RNA
In recent years, the biological function of triplex RNA has become more studied. Some roles include increasing stability, translation, influencing ligand binding, and catalysis. One example of ligand binding being influenced by a triple helix is in the SAM-II riboswitch where the triple helix creates a binding site that will uniquely accept S-adenosylmethionine (SAM). The ribonucleoprotein complex telomerase, responsible for replicating the tail-ends of DNA (telomeres) also contains triplex RNA believed to be necessary for proper telomerase functioning. The triple helix at the 3' end of the PAN and MALAT1 long-noncoding RNAs serves to stabilize the RNA by protecting the Poly(A) tail from deadenylation, which subsequently affect their functions in viral pathogenesis and multiple human cancers. Additionally, RNA triple helices can stabilize mRNAs by formation of a poly(A) tail 3'-end binding pocket.
Computational Tools
TDF (Triplex Domain Finder)
TDF is a Python-based package to predict RNA-DNA triplex formation potential. The software starts by enumerating the substrings between TFOs and TTSs and uses statistical tests to identify significant results compared to the background.
Triplexfpp
Triplexfpp is based on deep learning methods. This Python-based pipeline can help predict the most likely triplex-forming lncRNAs. However, since the lncRNA data available for training are limited, there is a long way to go before machine learning and deep learning methods can be broadly applied.
References
Curves
Geometric shapes
Helices
Protein structural motifs | Triple helix | [
"Mathematics",
"Biology"
] | 1,391 | [
"Geometric shapes",
"Mathematical objects",
"Protein classification",
"Protein structural motifs",
"Geometric objects"
] |
32,439,784 | https://en.wikipedia.org/wiki/Physical%20mathematics | The subject of physical mathematics is concerned with mathematics that is motivated by physics and is considered by some as a subfield of mathematical physics.
Overview
Physically motivated mathematics existed within a tradition of mathematical analysis of nature that goes back to the ancient Greeks. A good example is Archimedes' Method of Mechanical Theorems, where the principle of the balance is used to find results in pure geometry. This tradition, elaborated further by Islamic and Byzantine scholars, was reintroduced to the West in the 12th century and during the Renaissance. It became known as "mixed mathematics" and was a major contributor to the emergence of modern mathematical physics in the 17th century.
The details of physical units and their manipulation were addressed by Alexander Macfarlane in Physical Arithmetic in 1885. The science of kinematics created a need for mathematical representation of motion and has found expression with complex numbers, quaternions, and linear algebra.
At the University of Cambridge the Mathematical Tripos tested students on their knowledge of "mixed mathematics". "... [N]ew books which appeared in the mid-eighteenth century offered a systematic introduction to the fundamental operations of the fluxional calculus and showed how it could be applied to a wide range of mathematical and physical problems. ... The strongly problem-oriented presentation in the treatises ... made it much easier for university students to master the fluxional calculus and its applications [and] helped define a new field of mixed mathematical studies..."
An adventurous expression of physical mathematics is found in Maxwell's A Treatise on Electricity and Magnetism, which used partial differential equations. The text aspired to describe phenomena in four dimensions, but the foundation for this physical world, Minkowski space, trailed by forty years.
String theorist Greg Moore made a case for physical mathematics in his vision talk at Strings 2014.
See also
Theoretical physics
Mathematical physics
References
Eric Zaslow, Physmatics,
Arthur Jaffe, Frank Quinn, "Theoretical mathematics: Toward a cultural synthesis of mathematics and theoretical physics", Bulletin of the American Mathematical Society 30: 178-207, 1994,
Michael Atiyah et al., "Responses to Theoretical Mathematics: Toward a cultural synthesis of mathematics and theoretical physics, by A. Jaffe and F. Quinn", Bull. Am. Math. Soc. 30: 178-207, 1994,
Michael Stöltzner, "Theoretical Mathematics: On the Philosophical Significance of the Jaffe-Quinn Debate", in: The Role of Mathematics in Physical Sciences, pages 197-222,
Kevin Hartnett (November 30, 2017) "Secret link discovered between pure math and physics", Quanta Magazine
Applied mathematics
Mathematical physics | Physical mathematics | [
"Physics",
"Mathematics"
] | 535 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
57,138,844 | https://en.wikipedia.org/wiki/Activation%20energy%20asymptotics | Activation energy asymptotics (AEA), also known as large activation energy asymptotics, is an asymptotic analysis used in the combustion field utilizing the fact that the reaction rate is extremely sensitive to temperature changes due to the large activation energy of the chemical reaction.
History
The techniques were pioneered by the Russian scientists Yakov Borisovich Zel'dovich, David A. Frank-Kamenetskii and co-workers in the 1930s, in their studies of premixed flames and thermal explosions (Frank-Kamenetskii theory), but did not become popular among Western scientists until the 1970s. In the early 1970s, due to the pioneering work of Williams B. Bush, Francis E. Fendell, Forman A. Williams, Amable Liñán and John F. Clarke, it became popular in the Western community, and since then it has been widely used to explain more complicated problems in combustion.
Method overview
In combustion processes, the reaction rate is dependent on temperature in the following form (Arrhenius law),

$$\omega \propto \mathrm{e}^{-E_a/RT},$$

where $E_a$ is the activation energy, and $R$ is the universal gas constant. In general, the condition $E_a/RT_b \gg 1$ is satisfied, where $T_b$ is the burnt gas temperature. This condition forms the basis for activation energy asymptotics. Denoting $T_u$ for the unburnt gas temperature, one can define the Zel'dovich number and heat release parameter as follows

$$\beta = \frac{E_a}{R T_b}\,\frac{T_b - T_u}{T_b}, \qquad \alpha = \frac{T_b - T_u}{T_b}.$$

In addition, if we define a non-dimensional temperature

$$\theta = \frac{T - T_u}{T_b - T_u},$$

such that $\theta$ approaches zero in the unburnt region and approaches unity in the burnt gas region (in other words, $0 \le \theta \le 1$), then the ratio of the reaction rate at any temperature to the reaction rate at the burnt gas temperature is given by

$$\frac{\omega(T)}{\omega(T_b)} = \exp\left[-\,\frac{\beta(1-\theta)}{1-\alpha(1-\theta)}\right].$$

Now, in the limit of $\beta \rightarrow \infty$ (large activation energy) with $\alpha = O(1)$, the reaction rate is exponentially small, i.e., $O(\mathrm{e}^{-\beta})$, and negligible everywhere, but non-negligible when $1-\theta = O(1/\beta)$. In other words, the reaction rate is negligible everywhere, except in a small region very close to the burnt gas temperature, where $1-\theta \sim 1/\beta$. Thus, in solving the conservation equations, one identifies two different regimes, at leading order,
Outer convective-diffusive zone
Inner reactive-diffusive layer
where in the convective-diffusive zone the reaction term is neglected, while in the thin reactive-diffusive layer the convective terms are neglected; the solutions in these two regions are then stitched together by matching slopes using the method of matched asymptotic expansions. The two regimes mentioned above hold only at leading order, since the next-order corrections may involve all three transport mechanisms.
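A quick numerical illustration of how sharply the reaction rate is confined to temperatures near the burnt value, using the rate-ratio expression above with assumed values of the Zel'dovich number and heat release parameter:

```python
import math

def rate_ratio(theta: float, beta: float = 10.0, alpha: float = 0.85) -> float:
    """omega(T)/omega(T_b) = exp(-beta*(1-theta)/(1-alpha*(1-theta))); beta and alpha are assumed values."""
    return math.exp(-beta * (1.0 - theta) / (1.0 - alpha * (1.0 - theta)))

for theta in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"theta = {theta:4.2f}  ->  omega/omega_b = {rate_ratio(theta):.2e}")
```

With these assumed parameters the rate is tens of orders of magnitude smaller in the unburnt gas than at the burnt temperature, which is exactly why the reaction can be treated as confined to a thin layer.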
See also
Zeldovich–Frank-Kamenetskii equation
Burke–Schumann limit
References
Fluid dynamics
Combustion
Asymptotic analysis | Activation energy asymptotics | [
"Chemistry",
"Mathematics",
"Engineering"
] | 549 | [
"Mathematical analysis",
"Chemical engineering",
"Combustion",
"Asymptotic analysis",
"Piping",
"Fluid dynamics"
] |
57,142,185 | https://en.wikipedia.org/wiki/Shiftphone | Shiftphone is a modular, easy-to-repair smartphone brand created by the company SHIFT in Germany. The company emphasizes fair trade and ecology, similar to Fairphone. Instead of tantalum capacitors made from coltan, ceramic capacitors are used for their manufacturing. So far, eleven model series have been released. The most recent release was the SHIFT6mq (successor of the Shift6m) in June 2020. The upcoming device will be the SHIFTphone 8, scheduled for release in 2024.
Names
The official company name is SHIFT GmbH.
Model names
The names of models start with the string "SHIFT" in upper case letters.
Except for the SHIFTphone 8, this is followed by the length of the display diagonal rounded to full inches. After that, one of the following applies:
the name ends (the oldest models)
a dot follows, in turn followed by a number (some newer models until around 2017)
an m follows (the new modular line)
OS name
The OS is named either SHIFT-OS or ShiftOS.
Devices
Operating system
There are two different operating systems: the SafetyNet-based SHIFT-OS-G/ShiftOS-G with Google services, and the AOSP-based SHIFT-OS-L/ShiftOS-L without Google services. Furthermore, flashing the device with a custom ROM is allowed; ShiftOS developers are also partly involved in the development of certain custom ROMs.
Characteristics
Sustainability
Shiftphones are built modularly to allow customers to change parts and repair the device without voiding the warranty. Videos support the user in repairing their device, explaining how to open it and how to change certain modules.
Circular economy
Customers have the option to upgrade their device to a different model.
Shiftphone partners with Closing the Loop.
Privacy aspects
The operating system can be replaced with a custom ROM based on GNU/Linux (Mobian) or a de-googled Android.
Unlike the competing PinePhone and Librem devices, the motherboard and peripherals are not open-source hardware. This leaves hardware backdoors possible.
The SHIFTphone 8 does include hardware kill switches, for example for the microphone and camera.
Workers' care
Shift employees in China do not work more than 50 hours a week, while it is common for people to work up to 90. Unlike the average Chinese worker in the manufacturing business, the staff is provided with insurance.
Criticism and controversies about conflict minerals
In 2016 c't described the Shift5 as a typical cheap smartphone. Besides, the journal argued that there was no evidence that coltan is not used in Shiftphones and thus criticized the transparency of SHIFT. SHIFT and further secondary sources claim that coltan is not in use for their manufacturing. However, according to c't, the SHIFT partner company "Vstar and Weihuaxin" did not provide information about conflict-free material used in Shiftphone. Unlike Shiftphone, Fairphone provides detailed audit reports about component suppliers through a Chinese agency, and also facilitates detailed information on problems and compromises in the supply chain.
Coltan is used to make components for mobile phones and other electronic devices. A huge part of the ore is from mines in the DRC (Democratic Republic of Congo). "Much mining has been done in small artisanal mining operations, sometimes known as Artisanal and Small-Scale Mining (ASM). These small-scale mines are unregulated, with high levels of child labor and workplace injury." Some 50,000 children, some just seven years old, work in Congo's coltan mines. Workers often have little or no protection and often work underground in self-made shafts.
More recent reports paint a clearer picture: articles in several magazines have documented the statements of Carsten Waldeck and lent them credibility.
For example, golem.de reported in detail on the company and its efforts in terms of sustainability and fairness in June 2018.
The ProSieben magazine Galileo tested the newly released smartphone Shift6m and illuminated, in the form of video recordings, the production conditions of the in-house manufactory located in China in June 2018.
N-tv described the initial efforts for fairness and sustainability as well as the history of the Shiftphone, in September 2018.
In August 2018, the ecology portal no longer reported any lack of transparency regarding Shift's Chinese hardware manufacturing process.
In issue 15/2018, the computer magazine c't took a more positive view of the German smartphone manufacturer Shift, although the report itself was rather short in comparison to coverage of other European hardware providers.
See also
Fairphone
Ethical consumerism
Fair trade
Green IT
Open-source hardware
Phonebloks
Framework Computer
References
External links
Official site of SHIFT GmbH
Smartphones
Mobile phone manufacturers
Mobile phone companies of Germany
Modular smartphones
Fair trade brands
Right to repair
German companies established in 2014
Android (operating system) devices
Companies based in Hesse
German brands | Shiftphone | [
"Engineering"
] | 989 | [
"Modular design",
"Modular smartphones"
] |
57,142,226 | https://en.wikipedia.org/wiki/Equilibrium%20catalyst | Equilibrium Catalyst refers to the deactivated or spent catalyst after use in a chemical reaction.
The main player in oil refining processes such as fluid catalytic cracking (FCC), hydroprocessing, and hydrocracking is the catalyst or zeolitic material, which breaks down complex and long-chain hydrocarbons into simple, useful hydrocarbons.
Over longer periods of time, there is a significant loss in the activity of the catalyst and it can no longer function properly. This deterioration in catalytic performance is accounted for by different factors such as physical losses, steam, high temperature, time, coke formation and poisoning by metal contaminants in the feedstock. This type of deactivated catalyst is referred to as "used or spent" catalyst, equilibrium catalyst, or simply "ECAT".
In FCC processes, the equilibrium catalyst is a physical mixture of varying proportions of fresh catalyst and regenerated or aged catalyst, circulating within the FCC column. Equilibrium catalyst that is withdrawn because it has become catalytically less active is the spent catalyst, and it is replaced with an equivalent amount of fresh catalyst. Spent FCC catalysts have low flammability and toxicity compared to spent hydroprocessing catalysts; however, they are not benign in nature and there is a risk of leaching of their components.
In hydroprocessing, by contrast, the equilibrium (spent) catalyst is entirely replaced with fresh catalyst upon loss of catalyst activity.
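As a rough illustration of why the circulating inventory is a mixture of catalyst ages, the sketch below models a perfectly mixed FCC inventory with constant makeup and withdrawal; the inventory, makeup rate and deactivation rate are assumptions for illustration, not data from this article.

```python
import math

def equilibrium_activity(inventory_t=300.0, makeup_t_per_day=3.0,
                         decay_per_day=0.02, horizon_days=2000):
    """Average activity of a well-mixed catalyst inventory with constant makeup/withdrawal.

    Assumes fresh activity 1.0 decaying exponentially with age and a perfectly
    mixed inventory, so catalyst age is exponentially distributed.  All numbers
    are illustrative assumptions.
    """
    turnover = makeup_t_per_day / inventory_t       # fraction of inventory replaced per day
    total = 0.0
    for day in range(horizon_days):
        weight = turnover * math.exp(-turnover * day)   # fraction of inventory with this age
        total += weight * math.exp(-decay_per_day * day)  # activity remaining at that age
    return total

print(f"equilibrium (average) activity ~ {equilibrium_activity():.2f} of fresh")
```

Under these assumptions the circulating equilibrium catalyst retains only about a third of the fresh activity, even though a small amount of fresh catalyst is added every day.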
Spent catalyst disposal
The disposal of spent catalyst is gaining importance, particularly because of strict environmental regulations and the high prices of fresh catalyst. Landfills and approved dumping sites have predominantly been used to get rid of spent catalyst. Catalysts containing metals (nickel, vanadium, molybdenum) classified as hazardous are pre-treated before disposal. Sale of spent catalyst to the cement industry, or its reuse on construction sites, in the metal casting industry, or in road building, offers immediate disposal solutions but with little economic benefit. Depending upon the quality of the spent catalyst, a specific property or attribute of the ECAT might be desirable in other processes. With some modifications in spent catalyst composition, it could be reused in less severe processes.
References
Oil refining
Chemical processes | Equilibrium catalyst | [
"Chemistry"
] | 433 | [
"Petroleum technology",
"Chemical processes",
"Oil refining",
"nan",
"Chemical process engineering"
] |
43,689,474 | https://en.wikipedia.org/wiki/Coolant%20pump | A coolant pump is a type of pump used to recirculate a coolant, generally a liquid, that is used to transfer heat away from an engine or other device that generates heat as a byproduct of producing energy.
Common applications of coolant pumps are:
Coolant pump or water pump, found in most modern internal combustion engine applications such as most fossil fuel powered vehicles
Coolant pumps, found in pressurized water reactors, a type of light water reactor used in the majority of Western world nuclear power plants
Pumps
Cooling technology | Coolant pump | [
"Physics",
"Chemistry"
] | 111 | [
"Physical systems",
"Hydraulics",
"Turbomachinery",
"Pumps"
] |
43,691,183 | https://en.wikipedia.org/wiki/Structures%20for%20lossless%20ion%20manipulations | Structures for lossless ion manipulations (SLIM) are a form of ion optics to which various radio frequency and dc electric potentials can be applied and used to enable a broad range of ion manipulations, such as separations based upon ion mobility spectrometry, reactions (unimolecular, ion-molecule, and ion-ion), and storage (i.e. ion trapping). SLIM was developed by Richard D. Smith and coworkers at Pacific Northwest National Laboratory (PNNL) and are generally fabricated from arrays of electrodes on evenly spaced planar surfaces. In 2017, Erin S. Baker, Sandilya Garimella, Yehia Ibrahim, Richard D. Smith and Ian Webb from the Interactive Omics Group of PNNL received the R&D 100 Award for the development of SLIM.
In SLIM, ions move in the space between the two surfaces, in directions controlled using electric fields, and can also be moved between the different levels of multi-level SLIM, which can be constructed from a stack of printed circuit boards (PCBs). The lossless nature of SLIM derives from the use of rf electric fields, and in particular from the pseudopotential created by the inhomogeneous electric fields that result when rf of appropriate frequency is applied to multiple adjacent electrodes; this pseudopotential prevents ions from closely approaching the electrodes and surfaces, where losses would conventionally be expected. SLIM are generally used in conjunction with mass spectrometry for analytical applications.
Construction
The first SLIM were fabricated using PCB technology to demonstrate a range of simple ion manipulations in gases at low pressures (a few torr). This SLIM technology has conceptual similarities with integrated electronic circuits, but instead of moving electrons, electric fields were used to create pathways, switches, etc. to manipulate ions in the gas phase.
SLIM devices can enable complex sequences of ion separations, transfers and trapping to occur in the space between two surfaces positioned (e.g., ~4 mm apart) and each patterned with conductive electrodes. The SLIM devices use the inhomogeneous electric fields created by arrays of closely spaced electrodes to which readily generated peak-to-peak RF voltages (e.g., Vp-p ~ 100 V; ~ 1 MHz) are applied with opposite polarity on adjacent electrodes to create effective potential fields that prevent ions from approaching the surfaces. The operating pressure for SLIM devices has initially been reported to be in the 1-10 torr range which allows ions to be effectively confined using the previously defined RF potentials. At higher pressures, the capacity to confine ions diminishes without additional forces being placed on the ion populations.
The confinement functions over a range of pressures (<0.1 torr to ~50 torr), and over an adjustable mass-to-charge ratio (m/z) range (e.g., m/z 200 to >2000). This effective potential works in conjunction with DC potentials applied to side electrodes to prevent ion losses, and allows creating ion traps and conduits in the gap between the two surfaces for the effectively lossless storage and movement of ions as a result of any gradient in the applied DC fields.
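As general background on why such rf fields confine ions (this is the generic Dehmelt pseudopotential approximation, not a derivation for the specific SLIM electrode geometry, which is normally treated numerically), an ion of charge q and mass m in an inhomogeneous rf field of local amplitude E_0(r) and angular frequency Ω experiences an effective potential of approximately U_eff(r) ≈ q^2 E_0(r)^2 / (4 m Ω^2). Because E_0 grows rapidly close to the electrodes, U_eff forms a repulsive barrier near the surfaces that pushes ions back toward the gap, and its inverse dependence on m Ω^2 is one reason the usable m/z range depends on the applied rf frequency and amplitude.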
The two mirrored halves of a SLIM system are shown in the example to the left. Compared to the longer pathlength systems developed at PNNL, this board is considerably shorter but serves as a rapid prototype. When folded together and spaced ~3 mm apart, the co-planar electrode surfaces create the fields needed for ion confinement and separation.
References
Further reading
Mass spectrometry
Ions | Structures for lossless ion manipulations | [
"Physics",
"Chemistry"
] | 731 | [
"Matter",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Ions"
] |
50,924,898 | https://en.wikipedia.org/wiki/David%20A.%20Lucht | David Allen Lucht (; born February 18, 1943) is an American engineer and fire safety expert. His career was devoted to public service in government, academia and the nonprofit sector. He served as the Ohio State Fire Marshal; the first presidential appointee to serve in the United States Fire Administration and the inaugural head of the graduate degree fire protection engineering program at Worcester Polytechnic Institute, where he served for 25 years
Early years
David Lucht was born and raised in the rural village of Middlefield, Ohio. In 1960, local Fire Chief Earl Warne invited him to join the first class of student volunteer firefighters in the Middlefield Volunteer Fire Department where he actively served until graduating from high school in 1961. Early in his service, he responded to a residential fire in which three young children died, leaving an indelible imprint on him.
He attended the Illinois Institute of Technology in Chicago under a four-year scholarship granted by the Western Actuarial Bureau. He received his Bachelor of Science degree in fire protection and safety engineering in 1965. After graduating from IIT, he moved to Columbus, Ohio to work for his scholarship sponsor for three years.
The Ohio State University
In 1968 Lucht moved on to the position of research associate at The Ohio State University Engineering Experiment Station, Building Research Laboratory, performing fire tests on building construction systems and materials.
An interest in home smoke alarms developed during his time at OSU, stimulated by the development of the first affordable devices by Duane Pearsall of Denver, CO. As chair of the Central Ohio Fire Prevention Association Household Fire Warning Study Committee, Lucht organized The Alton Road Tests aimed at demonstrating the effectiveness of home smoke detectors in actual dwellings.
After his career-long advocacy for home smoke alarms, he later described the early devices as "the most important technological [fire safety] breakthrough of the 20th century."
Ohio State Fire Marshal
In 1972, Lucht joined the Ohio Division of State Fire Marshal where he authored the first Ohio Fire Code. Ohio Governor John J. Gilligan appointed him as the Ohio State Fire Marshal in 1973.
During his tenure in the Fire Marshal Division, Ohio adopted the first statewide requirements for home smoke detectors, developed the digital Ohio Fire Incident Reporting System, and the Ohio Arson Laboratory, and completed plans for the Ohio Fire Academy.
United States Fire Administration
In 1975, President Gerald R. Ford appointed David Lucht Deputy Administrator of the United States Fire Administration (originally named the National Fire Prevention and Control Administration, NFPCA) after confirmation by the Senate. The new agency had been created by Congress in direct response to the landmark America Burning report of the National Commission on Fire Prevention and Control. He also served as acting head of the new agency until Howard Tipton was appointed to the Administrator post a few months later.
He played a key role in implementing the mandates of the Fire Prevention and Control Act. Areas of focus included the National Fire Academy, the National Fire Incident Reporting System, fire research and public education programs to support practitioners on the state and local level.
Firepro Incorporated
Lucht moved to Massachusetts in 1978 at which time he assumed a new position as executive vice president with the consulting firm Firepro Incorporated. While at Firepro he simultaneously worked on the startup of a new graduate degree program at nearby Worcester Polytechnic Institute.
Firepro was a full-service fire protection engineering consulting firm, offering a full range of services ranging from building fire safety and incident reconstruction to corporate fire safety management and fire department organization and deployment studies.
Worcester Polytechnic Institute
In 1978, David Lucht was recruited by Worcester Polytechnic Institute to start up the Center for Firesafety Studies. In the initial years as Professor and Director of the new Center, he worked in parallel as Executive Vice President of Firepro, Incorporated, a Boston area consulting engineering firm. He transitioned to full-time status at the university in 1985.
Starting “from scratch” at WPI, he assembled the resources, faculty, staff and laboratory facilities to support a first-of-its-kind program of graduate study in fire protection engineering. The Master of Science degree was first offered in 1979 and the PhD in 1991.
By the time he retired in 2005, WPI had graduated over 400 fire protection engineers from 26 countries. Graduates pursued careers in a host of employer settings ranging from consulting engineering firms, manufacturing industries and public utilities to product testing and research laboratories and codes and standards groups.
Nonprofit governance
Lucht served on several nonprofit boards of directors and boards of trustees.
1985 – 1991 Society of Fire Protection Engineers (SFPE)
1987 – 1989 New England Chapter SFPE
1989 – 1991 American Association of Engineering Societies
1990 – 1999 National Fire Protection Association (NFPA)
1992 – 2003 Underwriters Laboratories (UL)
1995 – 2005 Ecotarium
2004 – 2012 CTC, Inc., Public Safety Technology Center
2004 – 2009 Worcester Art Museum
2004 – 2009 Master Singers of Worcester
Honors, recognitions and awards
During his career, David Lucht was recognized for his leadership and contributions to fire safety.
1988 President's Award, SFPE Foundation
1988 Man of the Year Award, Automatic Fire Alarm Association
1989 Fellow, Society of Fire Protection Engineers
1993 Harold E. Nelson Service Award, Society of Fire Protection Engineers
2000 John J. Ahern President’s Award, Society of Fire Protection Engineers
2002 Arthur B. Guise Medal and Prize, SFPE Foundation
2004 Person of the Year Award, Automatic Fire Alarm Association
2004 William R. Grogan Award, WPI Alumni Association
2004 Person of the Year Award, New England Chapter, SFPE
2005 David A. Lucht Lamp of Knowledge Award (awarded annually by SFPE)
2006 David Rasbash Memorial Medal, Institution of Fire Engineers (London)
2013 John L. Bryan Mentor Award, Society of Fire Protection Engineers
2015 Cardinal High School Distinguished Alumni Hall of Fame, Middlefield, OH
2023 Doctor of Engineering honoris causa degree awarded by Worcester Polytechnic Institute
Selected publications
Selected publications are listed below. A full listing of 65 published and 49 unpublished works can be found in the WPI David Lucht Collection
"Legal Requirements for Fire Alarms in Ohio Dwellings", Fire Journal, NFPA, March 1972.
"NFPCA Designed to Assist Local, State Governments", Fire Engineering, August 1976.
"The Federal Role Information, Training and Encouragement", Nation's Cities, March 1978.
"Fire Prevention Planning and Leadership for Small Communities", (book) published by NFPA, 1980.
"Fire Protection Engineering Graduate Program Takes Hold", Fire Journal, National Fire Protection Association, Vol. 78, No. 2, March 1984.
"Emerging Fire Technology: A Wolf in Sheep's Clothing?" Chief Fire Executive, Vol. 1, No. 1, April/May 1986.
"An Update on the WPI Graduate Program in Fire Protection Engineering", Fire Technology, Vol. 23, No. 3, August 1987.
"Coming of Age", Journal of Fire Protection Engineering, Society of Fire Protection Engineers, Vol. 1, No. 2, April, May, June 1989.
"Changing The Way We Do Business", Fire Technology, Vol. 28, No. 3, August 1992.
"Progress in Professional Practice”, Fire Protection Engineering, Society of Fire Protection Engineers, Issue No. 3, Summer 1999.
"Let’s be Intolerant of Fire Traps”, Op-Ed, Providence Journal, Providence, RI, August 5, 2003.
"Issues and Opportunities for the Future of Fire Engineering”, 2006 Rasbash Honors Lecture, IFE Fire Prevention Fire Engineers Journal, July 2006
"Millennials: The New Source of Young Talent”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, Fall 2007
"The WPI Program: Starting from Scratch”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, Issue No. 53, First Quarter, 2012.
"The Most Important Technological Breakthrough of the 20th Century”, Fire Protection Engineering Magazine, Society of Fire Protection Engineers, First Quarter, 2015.
"Symposium Review and Conclusion", Proceedings of the Society of Fire Protection Engineers Symposium on Systems Applications, University of Maryland, College Park, Maryland; March 1981.
"Report on the Conference on Firesafety Design in the 21st Century", WPI, Worcester, MA, June, 1999. Chairman and Editor.
"Proceedings of the Second Conference on Firesafety Design in the 21st Century", Worcester Polytechnic Institute, June 2000. Chairman and Editor.
"Making the Nation Safe from Fire Workshop: A Path Forward in Research”, National Research Council report, National Academy of Sciences, National Academies Press, Washington, D.C., 2003. (Editor and chair).
Artist
In the years following his retirement in 2005, David Lucht's focus shifted to the arts. He enrolled in a range of art courses at the Worcester Art Museum and understudied several local artists. He actively participated in the Princeton Arts Society Portrait Group for many years.
In 2016 Lucht was invited to paint the posthumous portrait of Philip J. DiNenno, who was President of Hughes Associates, and Fellow and Past President of SFPE when he died.
Parkinson's advocacy
David Lucht was diagnosed with Parkinson's disease in 2012 and, with time, became active in the Parkinson's health movement. He participated in Parkinson's clinical research studies at UMASS Amherst, Boston University, MIT, and Worcester State University.
As an outgrowth of the UMASS Parkinson's Voice Study, Lucht and several other clinical participants started the Parkinson's Chorus of Central Massachusetts.
External links
David Lucht Papers from the WPI Manuscript Collections.
WPI Fire Protection Engineering Program
The Society of Fire Protection Engineers
References
1943 births
American artists
Worcester Polytechnic Institute faculty
Fire protection
People from Warren, Ohio
Illinois Institute of Technology alumni
People from Shrewsbury, Massachusetts
Living people | David A. Lucht | [
"Engineering"
] | 1,975 | [
"Building engineering",
"Fire protection"
] |
50,925,207 | https://en.wikipedia.org/wiki/Uterine%20microbiome | The uterine microbiome refers to the community of commensal, nonpathogenic microorganisms—including bacteria, viruses, and yeasts/fungi—present in a healthy uterus, as well as in the amniotic fluid and endometrium. These microorganisms coexist in a specific environment within the uterus, playing a vital role in maintaining reproductive health. In the past, the uterus was believed to be a sterile environment, free of any microbial life. Recent advancements in microbiological research, particularly the improvement of 16S rRNA gene sequencing techniques, have challenged this long-held belief. These advanced techniques have made it possible to detect bacteria and other microorganisms present in very low numbers. Using this procedure that allows the detection of bacteria that cannot be cultured outside the body, studies of microbiota present in the uterus are expected to increase.
Uterine microbiome and fertility
In the past, the uterine cavity was traditionally considered to be sterile, though potentially susceptible to being affected by vaginal bacteria. However, this idea has been disproved. Moreover, it has been shown that endometrial and vaginal microbiota can differ in structure and composition in some women.
The microbiome of the innermost layer of the uterus, the endometrium, may influence its capacity to allow an embryo to implant. The existence of more than 10% of non-Lactobacillus bacteria in the endometrium is correlated with negative impacts on reproductive function and should be considered as an emerging cause of implantation failure and pregnancy loss.
Characteristics
Bacteria, viruses and one genus of yeasts are a normal part of the uterus before and during pregnancy. The uterus has been found to possess its own characteristic microbiome, one that differs significantly from the vaginal microbiome, consists primarily of Lactobacillus species, and is present in far smaller numbers. In addition, the immune system is able to differentiate between the bacteria normally found in the uterus and those that are pathogenic. Hormonal changes have an effect on the microbiota of the uterus.
Taxa
Commensals
The organisms listed below have been identified as commensals in the healthy uterus. Some also have the potential for growing to the point of causing disease:
{| class="wikitable sortable collapsible"
|-
! Organism
! Commensal
! Transient
! Potential pathogen
! class=unsortable| References
|-
| Escherichia coli
|align="center"|x
|
|align="center"|x
|
|-
| Escherichia spp.
|
|align="center"|x
|align="center"|x
|
|-
|Ureaplasma parvum
|align="center"|x
|
|align="center"|x
|
|-
|Fusobacterium nucleatum
|align="center"|x
|
|
|
|-
| Prevotella tannerae
| align="center"|x
| align="center"|
| align="center"|
|
|-
| Bacteroides spp.
| align="center"|x
| align="center"|
| align="center"|
|
|-
| Streptomyces avermitilis
| align="center"|x
| align="center"|
| align="center"|
|
|-
| Mycoplasma spp.
| align="center"|x
| align="center"|
| align="center"|x
|
|-
| Neisseria lactamica
| align="center"|x
| align="center"|
| align="center"|
|
|-
| Neisseria polysaccharea
| align="center"|x
| align="center"|
| align="center"|
|
|-
| Epstein–Barr virus
| align="center"|x
| align="center"|
| align="center"|x
|
|-
| Respiratory syncytial virus
| align="center"|x
| align="center"|
| align="center"|x
|
|-
| Adenovirus
| align="center"|x
| align="center"|
| align="center"|x
|
|-
| Candida spp.
| align="center"|x
| align="center"|
| align="center"|x
|
|}
Pathogens
Other taxa can be present, without causing disease or an immune response. Their presence is associated with negative birth outcomes.
{| class="wikitable sortable collapsible"
|-
! Pathogenic organism
! Increased risk of
! class=unsortable| References
|-
| Ureaplasma urealyticum
|Premature, preterm rupture of membranes; preterm labor; cesarean section; placental inflammation; congenital pneumonia; bacteremia; meningitis; fetal lung injury; death of infant
|
|-
|Ureaplasma parvum
|
| rowspan="6" |
|-
| Haemophilus influenzae
|Premature, preterm rupture of membranes; preterm labor; preterm birth
|-
|Fusobacterium nucleatum
|
|-
| Prevotella tannerae
|
|-
| Bacteroides spp.
|
|-
| Streptomyces avermitilis
|
|-
| Mycoplasma hominis
|Congenital pneumonia; bacteremia; meningitis; pelvic inflammatory disease; postpartum or postabortal fever
|
|-
| Neisseria lactamica
| rowspan="2" |
| rowspan="6" |
|-
| Neisseria polysaccharea
|-
| Epstein–Barr virus
|
|-
| Respiratory syncytial virus
|
|-
| Adenovirus
|
|-
| Candida spp.
|
|-
|Atopobium spp.
| rowspan="9" |Unsuccessful reproductive outcomes in infertile patients (no pregnancy or clinical miscarriage)
| rowspan="9" |
|-
|Bifidobacterium spp.
|-
|Chryseobacterium spp.
|-
|Gardnerella spp.
|-
|Klebsiella spp.
|-
|Staphylococcus spp.
|-
|Haemophilus spp.
|-
|Streptomyces spp.
|-
|Neisseria spp.
|}
Clinical significance
Prophylactic antibiotics have been injected into the uterus to treat infertility. This has been done before the transfer of embryos with the intent to improve implantation rates. No association exists between successful implantation and antibiotic treatment. Infertility treatments often progress to the point where a microbiological analysis of the uterine microbiota is performed. Preterm birth is associated with certain species of bacteria that are not normally part of the healthy uterine microbiome.
The uterine microbiome appears to be altered in female patients who experience endometrial cancer, endometriosis, chronic endometritis, and related gynecological pathologies, suggesting the clinical relevance of the uterine microbiome's composition. Next-generation sequencing has revealed certain bacterial taxa, such as Alteromonas, to be present in patients presenting with gynecological conditions.
Clinically speaking, there is no universal protocol for treating uterine dysbiosis. However, the use of antibiotics has been widespread. In the context of infertility, researchers have studied the effects of a treatment plan of antibiotics in conjunction with prebiotics and probiotics to increase Lactobacillus colonization in the endometrium. It was found that, while a Lactobacillus-dominated endometrium correlated with increased pregnancy rates, the result was not statistically significant. Antibiotics have also been used to treat chronic endometritis and endometriosis.
A link between the oral microbiome and the uterine microbiome has also been uncovered. Fusobacterium nucleatum, a Gram-negative bacterium commensal to the oral microbiome, is associated with periodontal disease and has been linked to a wide variety of health outcomes, including unfavorable pregnancy outcomes.
Immune response
The immune response becomes more pronounced when bacteria are found that are not commensal.
History
Investigations into reproductive-associated microbiomes began around 1885 with Theodor Escherich, who wrote that meconium from newborns was free of bacteria. There was a general consensus at the time, persisting until recently, that the uterus was sterile; this was referred to as the sterile womb paradigm. Other investigations used sterile diapers for meconium collection, and no bacteria could be cultured from the samples. Later studies found that the amount of bacteria detected was directly proportional to the time between birth and the passage of meconium.
Research
Investigations into the role of the uterine microbiome in the development of the infant microbiome are ongoing. In recent years, the number of articles and review publications discussing the uterine microbiome has grown. Based on a Web of Science analysis, the highest number of documents published on the topic was in 2023, with a total of 23 papers.
The Daunert Lab, based at the University of Miami's Sylvester Comprehensive Cancer Center, focuses on the role of the microbiome in endometrial cancer and the role the uterine microbiome plays in the success of an IVF cycle. Similarly, Dr. Maria Walther-Antonio's lab at the Mayo Clinic focuses on the microbiome's role in endometrial cancer. Notably, Dr. Walther-Antonio has confirmed that Porphyromonas somerae is able to invade endometrial cells, indicating a possibility that this microbe contributes to the pathogenesis of endometrial cancer.
The Carlos Simon Foundation, based in Valencia, Spain, is a women's health research organization founded by reproductive endocrinologist Carlos Simon, MD PhD. A research team led by Dr. Inmaculada Moreno at the Carlos Simon Foundation studies the role of the endometrial microbiome in human reproduction. When research on the uterine microbiome was scarce, Dr. Moreno and her team analyzed the endometrial microbiota and discovered a correlation between certain endometrial microbiota compositions and the outcome of implantation success or failure. Six years later, they followed up with a paper revealing that specific pathogenic bacteria and depletion of Lactobacillus spp. in the endometrium correlated with impaired fertility.
See also
Human microbiome
Human Microbiome Project
Human virome
List of antimicrobial peptides in the female reproductive tract
List of bacterial vaginosis microbiota
Placental microbiome
Vaginal epithelium
Vaginal flora in pregnancy
References and notes
Bacteriology
Bacteria
Uterus
Microbiology
Gynaecology
Microbiomes
Reproduction
Fertility
Women's health | Uterine microbiome | [
"Chemistry",
"Biology",
"Environmental_science"
] | 2,330 | [
"Behavior",
"Reproduction",
"Biological interactions",
"Prokaryotes",
"Microbiology",
"Bacteria",
"Microscopy",
"Microbiomes",
"Environmental microbiology",
"Microorganisms"
] |
50,928,447 | https://en.wikipedia.org/wiki/Concentric%20reducer | A concentric reducer is used to join pipe sections or tube sections on the same axis. The concentric reducer is cone-shaped, and is used when there is a shift in diameter between pipes. For example, when a 1" pipe transitions into a 3/4" pipe and the top or bottom of the pipe doesn't need to remain level. This pipe reducer may be used when there is a single diameter change or multiple diameter changes.
Unlike eccentric reducers, concentric reducers have a common center line. Concentric reducers are useful when cavitation is present.
Eccentricity occurs when the centerline is offset.
See also
Piping and plumbing fitting
Reducer
References
Piping
Pumps
Plumbing | Concentric reducer | [
"Physics",
"Chemistry",
"Engineering"
] | 147 | [
"Pumps",
"Turbomachinery",
"Building engineering",
"Chemical engineering",
"Plumbing",
"Physical systems",
"Construction",
"Hydraulics",
"Mechanical engineering",
"Piping"
] |
60,084,611 | https://en.wikipedia.org/wiki/CITE-Seq | CITE-Seq (Cellular Indexing of Transcriptomes and Epitopes by Sequencing) is a method for performing RNA sequencing along with gaining quantitative and qualitative information on surface proteins with available antibodies on a single cell level. So far, the method has been demonstrated to work with only a few proteins per cell. As such, it provides an additional layer of information for the same cell by combining both proteomics and transcriptomics data. For phenotyping, this method has been shown to be as accurate as flow cytometry (a gold standard) by the groups that developed it. It is currently one of the main methods, along with REAP-Seq, to evaluate both gene expression and protein levels simultaneously in different species.
The method was established by the New York Genome Center in collaboration with the Satija lab, while a similar approach had been shown earlier by AbVitro Inc.
Applications
Concurrent measurement of both protein and transcript levels opens up opportunities to use CITE-Seq in various biological areas, some of which were touched upon by the developers. For instance, it may be used to characterize tumor heterogeneity in different cancers, a major research field. It also permits identifying rare subpopulations of cells as a high-throughput single-cell method and thus detect information otherwise lost with bulk methods. It also may aid in tumor classification - for example, identification of novel subtypes. All of the above are possible due to single-cell output of both protein and transcript data at the same time, also leading to novel information on protein-RNA correlation.
It also has potential in immunology. For example, it can be used for immune cell characterization; recent research on T cells has investigated their ability to maintain an effector state. Another study by one of the CITE-Seq coauthors suggested CITE-Seq as a method to study the mechanisms of host-pathogen interactions.
Workflow
CITE-seq, like any other sequencing technique, has a wet lab portion, where the actual antibodies are prepared, cells stained, cDNA synthesized and RNA libraries are prepared that are further sequenced, and a dry lab portion for analysis of the sequencing data obtained. The most crucial part in the wet lab experiments is designing the antibody-oligonucleotide conjugates and titrating the amount of each conjugate that needs to be present in the pool to achieve a desired read-out and quantification.
Wet lab workflow
The first step involves preparation of the antibody-oligo conjugates also known as Antibody-Derived Tags (ADTs). ADT preparation involves labeling an antibody directed against a cell surface protein of interest with oligonucleotides for barcoding the antibody.
Once the ADTs are prepared, the next step is to stain the cells with the desired ADT pool. The scRNA-seq libraries can then be prepared using Drop-seq, 10X Genomics or ddSeq methods. In brief, ADT-labelled cells are encapsulated within droplets as single cells together with DNA-barcoded microbeads.
Within a droplet, the cells are next lysed to release both bound ADTs as well as mRNA. These then are converted to cDNA. Each DNA sequence on a microbead has a unique barcode thus indexing cDNA with cell barcodes. cDNA is prepared from both ADTs and cellular mRNAs.
In the next step, based on the developer's guidelines, cDNA is PCR-amplified and ADT cDNA and mRNA cDNA are separated based on size (generally, ADT-derived cDNAs are < 180bp and mRNA-derived cDNAs are > 300bp). Each of the separated cDNA molecules is independently amplified and purified to prepare sequencing libraries. Finally, the independent libraries are pooled together and sequenced. Thus, proteomics and transcriptomics data can be obtained from a single sequencing run.
Dry lab workflow
Analysis of single-cell sequencing presents many challenges, such as determining the best way to normalize the data. Due to a new level of complications that arise from sequencing of both proteins and transcripts at a single-cell level, the developers of CITE-Seq and their collaborators are maintaining several tools to help with data analysis.
scRNA-Seq data analysis based on the developer's guidelines: The initial analysis steps are the same as in a standard scRNA-Seq experiment. Firstly, reads need to be aligned to a reference genome of a species of interest and cells with very low number of transcripts mapped to the reference are removed. Finally, a normalized count matrix with gene expression values is obtained.
ADT data analysis (based on the developer's guidelines): CITE-seq-Count is a Python package from the CITE-Seq developers that can be used to obtain raw counts. The Seurat package from the Satija lab further allows the protein and RNA counts to be combined and clustering to be performed on both measurements, as well as differential expression analysis between cell clusters of interest. ADT quantification needs to take into account the differences between the antibodies. Additionally, filtering may be required to reduce noise, similarly to scRNA-Seq analysis. In contrast to RNA data, however, there is less dropout, owing to the higher amounts of protein in a cell.
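As a minimal sketch of one common normalization choice for ADT counts, the centered log-ratio (CLR) transform used in several published CITE-seq workflows, the arithmetic can be made explicit with plain NumPy. The toy count matrix and the per-cell orientation of the transform below are illustrative assumptions; some pipelines instead center each antibody across cells.

```python
import numpy as np

def clr_normalize(adt_counts):
    # Centered log-ratio transform of an ADT count matrix of shape
    # (n_cells, n_antibodies). A pseudocount of 1 handles zero counts; each
    # antibody's value is expressed relative to the geometric mean of all
    # antibodies measured in the same cell.
    x = np.asarray(adt_counts, dtype=float) + 1.0
    log_x = np.log(x)
    return log_x - log_x.mean(axis=1, keepdims=True)

# Toy example: 3 cells x 4 antibodies of raw ADT counts.
counts = np.array([[120, 5, 0, 30],
                   [ 80, 2, 1, 10],
                   [  5, 0, 0,  2]])
print(clr_normalize(counts))
```

In Seurat-style analyses the same kind of transform is typically applied through the package's own normalization functions rather than by hand; the sketch above only makes the underlying arithmetic visible.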
The analyses may result in identification of novel cell clusters through such methods as PCA or tSNE, crucial genes responsible for a specific cell function and other new knowledge specific to a question of interest. In general, the results obtained with ADT counts substantially increase the amount of information obtained through single cell transcriptomics.
Adaptations of the technique
The applications of antibody-oligonucleotide conjugates have expanded beyond CITE-seq, and can be adapted for sample multiplexing as well as CRISPR screens.
Cell Hashing: New York Genome Center further adapted the use of their antibody-oligonucleotide conjugates to enable sample multiplexing for scRNA-seq. This technique called, Cell Hashing, uses oligonucleotide-labelled antibodies against ubiquitously expressed cell surface proteins from a particular tissue sample. In this case, an oligonucleotide sequence contains a unique barcode which would be specific to cells from distinct samples. This sample-specific cell tagging allows pooling of the sequencing libraries prepared from different samples on a sequencing platform. Sequencing the antibody tags along with the cellular transcriptome helps identify a sample of origin for each analyzed cell. A unique barcode sequence used on the cell hashing antibody can be designed to be different from an antibody barcode present on the ADTs used in CITE-seq. This makes it possible to couple cell hashing with CITE-seq on a single sequencing run. Cell hashing allows super-loading of the scRNA-seq platform, resulting in a lower cost of sequencing. It also enables detection of artifactual signals from multiplets, a major challenge in scRNA-seq. The cell hashing method has further been used by Gaublomme et al. to multiplex single-nucleus RNA-seq (snRNA-seq) by performing nucleus hashing.
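A simplified sketch of how hashtag counts can be used to assign each cell back to its sample of origin is shown below. The naive highest-count rule with a margin threshold is only an illustration, not the published HTODemux procedure; the count matrix, sample names and threshold are assumptions.

```python
import numpy as np

def assign_hashtags(hto_counts, sample_names, min_margin=1.0):
    # Naive demultiplexing: CLR-transform the hashtag counts per cell, assign
    # each cell to the hashtag with the highest value, and flag cells whose
    # top two hashtags are too close as possible multiplets.
    x = np.log1p(np.asarray(hto_counts, dtype=float))
    clr = x - x.mean(axis=1, keepdims=True)
    order = np.argsort(clr, axis=1)
    top, second = order[:, -1], order[:, -2]
    rows = np.arange(len(clr))
    margin = clr[rows, top] - clr[rows, second]
    return [sample_names[t] if m >= min_margin else "possible multiplet"
            for t, m in zip(top, margin)]

counts = np.array([[250,   3,   5],    # mostly hashtag A
                   [  4, 310,   2],    # mostly hashtag B
                   [180, 160,   6]])   # ambiguous: likely an A/B doublet
print(assign_hashtags(counts, ["sampleA", "sampleB", "sampleC"]))
```

Real workflows use model-based classifiers for this assignment; the sketch only conveys the idea that cells are labelled by the dominant hashtag signal.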
ECCITE-seq: Expanded CRISPR-compatible Cellular Indexing of Transcriptomes and Epitopes by sequencing or ECCITE-seq was developed to apply the use of CITE-seq to characterize multiple modalities from a single cell. By modifying the basic CITE-seq protocol to a 5' tag-based scRNA-seq assay, it can detect transcriptome, immune receptor clonotypes, surface markers, sample identity and single guide RNAs (sgRNAs) from each single cell. The ability of ECCITE-seq to detect sgRNA molecules and measure their effect on gene expression levels opens a prospect of applying this technique in CRISPR screens.
Advantages and Limitations of CITE-seq
Advantages: CITE-seq enables simultaneous analysis of the transcriptome as well as the proteome of single cells. Previous efforts to couple index-sorting measurements from single-cell sorts with scRNA-seq were limited to small sample sizes and were not compatible with multiplexing and massively parallel high-throughput sequencing. CITE-seq has been shown to be compatible with high-throughput microfluidic platforms like 10X Genomics and Drop-seq. It is also adaptable to micro/nano-well platforms. Coupling it with cell hashing enables the application of CITE-seq on bulk samples and sample multiplexing. These techniques work to reduce the overall cost of high-throughput sequencing on multiple samples. Lastly, CITE-seq can be adapted to detect small molecules, RNA interference, CRISPR, and other gene editing techniques.
Limitations: One of the limitations of CITE-Seq is a loss of location information. Due to the way the cells are treated, the spatial distribution of cells within a sample, as well as of proteins within a cell, is not known. In addition, this method shares the challenges of scRNA-Seq, such as a high amount of noise and possible difficulty in detecting lowly expressed genes. In terms of phenotyping, optimization of the assay and antibodies also presents a potential problem if proteins of interest are not included in the currently available panels. Moreover, CITE-Seq is currently not able to detect intracellular proteins. With the current protocol, many challenges would arise during the permeabilization step, thus limiting the technique to surface markers.
Alternative methods
REAP-seq: Peterson et al. from Merck developed a technique similar to CITE-seq called the RNA Expression and Protein Sequencing assay (REAP-seq). While REAP-seq, similarly to CITE-seq, measures levels of both transcripts and proteins in a single cell, the difference between the two techniques lies in how the antibody is conjugated to the oligonucleotides. CITE-seq typically links the oligonucleotide to the antibody non-covalently, via streptavidin conjugation to the antibody and biotin conjugation to the oligonucleotide. REAP-seq covalently links the antibody to an aminated DNA barcode.
PLAYR: PLAYR, or Proximal Ligation Assay for RNA, makes use of mass spectrometry to simultaneously analyse transcript and protein levels in single cells. In this technique both the proteins and the RNA transcripts are labelled with isotope-conjugated antibodies and isotope-labelled probes, respectively, enabling their detection on a mass spectrometer.
References
RNA sequencing | CITE-Seq | [
"Chemistry",
"Biology"
] | 2,206 | [
"Genetics techniques",
"RNA sequencing",
"Molecular biology techniques"
] |
60,084,964 | https://en.wikipedia.org/wiki/Human%E2%80%93robot%20collaboration | Human-Robot Collaboration is the study of collaborative processes in human and robot agents work together to achieve shared goals. Many new applications for robots require them to work alongside people as capable members of human-robot teams. These include robots for homes, hospitals, and offices, space exploration and manufacturing. Human-Robot Collaboration (HRC) is an interdisciplinary research area comprising classical robotics, human-computer interaction, artificial intelligence, process design, layout planning, ergonomics, cognitive sciences, and psychology.
Industrial applications of human-robot collaboration involve Collaborative Robots, or cobots, that physically interact with humans in a shared workspace to complete tasks such as collaborative manipulation or object handovers.
Collaborative Activity
Collaboration is defined as a special type of coordinated activity, one in which two or more agents work jointly with each other, together performing a task or carrying out the activities needed to satisfy a shared goal. The process typically involves shared plans, shared norms and mutually beneficial interactions. Although collaboration and cooperation are often used interchangeably, collaboration differs from cooperation in that it involves a shared goal and joint action in which the success of each party depends on the other.
For effective human-robot collaboration, it is imperative that the robot is capable of understanding and interpreting several communication mechanisms similar to the mechanisms involved in human-human interaction. The robot must also communicate its own set of intents and goals to establish and maintain a set of shared beliefs and to coordinate its actions to execute the shared plan. In addition, all team members demonstrate commitment to doing their own part, to the others doing theirs, and to the success of the overall task.
Theories Informing Human-Robot Collaboration
Human-human collaborative activities are studied in depth in order to identify the characteristics that enable humans to successfully work together. These activity models usually aim to understand how people work together in teams, how they form intentions and achieve a joint goal. Theories on collaboration inform human-robot collaboration research to develop efficient and fluent collaborative agents.
Belief Desire Intention Model
The belief-desire-intention (BDI) model is a model of human practical reasoning that was originally developed by Michael Bratman. The approach is used in intelligent agents research to describe and model intelligent agents. The BDI model is characterized by the implementation of an agent's beliefs (the knowledge of the world, state of the world), desires (the objective to accomplish, desired end state) and intentions (the course of actions currently under execution to achieve the desire of the agent) in order to deliberate their decision-making processes. BDI agents are able to deliberate about plans, select plans and execute plans.
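As a toy illustration of this deliberation cycle (the class, method and plan names below are hypothetical and not taken from any particular BDI framework or from the cited literature), a minimal agent might revise its beliefs, adopt intentions for unsatisfied desires that have known plans, and then execute its committed plans step by step:

```python
# Toy sketch of a BDI-style deliberation loop; names are illustrative only.

class BDIAgent:
    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = dict(beliefs)        # the agent's current model of the world
        self.desires = list(desires)        # goals the agent would like to achieve
        self.plan_library = plan_library    # goal -> ordered list of primitive actions
        self.intentions = []                # plans the agent has committed to executing

    def perceive(self, observations):
        # Belief revision: fold new observations into the current beliefs.
        self.beliefs.update(observations)

    def deliberate(self):
        # Adopt as intentions those desires that are not yet satisfied
        # and for which a plan is available.
        for goal in self.desires:
            if not self.beliefs.get(goal, False) and goal in self.plan_library:
                self.intentions.append(list(self.plan_library[goal]))

    def act(self):
        # Execute one step of the first committed plan; drop finished plans.
        if not self.intentions:
            return None
        plan = self.intentions[0]
        action = plan.pop(0)
        if not plan:
            self.intentions.pop(0)
        return action


agent = BDIAgent(
    beliefs={"table_moved": False},
    desires=["table_moved"],
    plan_library={"table_moved": ["grasp_table", "lift", "walk_to_door", "set_down"]},
)
agent.deliberate()
action = agent.act()
while action is not None:
    print(action)   # grasp_table, lift, walk_to_door, set_down
    action = agent.act()
```

Production BDI systems add plan selection among alternatives, intention reconsideration and failure handling, all of which this sketch omits.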
Shared Cooperative Activity
Shared Cooperative Activity defines certain prerequisites for an activity to be considered shared and cooperative: mutual responsiveness, commitment to the joint activity and commitment to mutual support. An example case to illustrate these concepts would be a collaborative activity where agents are moving a table out the door, mutual responsiveness ensures that movements of the agents are synchronized; a commitment to the joint activity reassures each team member that the other will not at some point drop his side; and a commitment to mutual support deals with possible breakdowns due to one team member’s inability to perform part of the plan.
Joint Intention Theory
Joint Intention Theory proposes that for joint action to emerge, team members must communicate to maintain a set of shared beliefs and to coordinate their actions towards the shared plan. In collaborative work, agents should be able to count on the commitment of other members, therefore each agent should inform the others when they reach the conclusion that a goal is achievable, impossible, or irrelevant.
Approaches to Human-Robot Collaboration
The approaches to human-robot collaboration include human emulation (HE) and human complementary (HC) approaches. Although these approaches have differences, there are research efforts to develop a unified approach stemming from potential convergences such as Collaborative Control.
Human Emulation
The human emulation approach aims to enable computers to act like humans or have human-like abilities in order to collaborate with humans. It focuses on developing formal models of human-human collaboration and applying these models to human-computer collaboration. In this approach, humans are viewed as rational agents who form and execute plans for achieving their goals and infer other people's plans. Agents are required to infer the goals and plans of other agents, and collaborative behavior consists of helping other agents to achieve their goals.
Human Complementary
The human complementary approach seeks to improve human-computer interaction by making the computer a more intelligent partner that complements and collaborates with humans. The premise is that the computer and humans have fundamentally asymmetric abilities. Therefore, researchers invent interaction paradigms that divide responsibility between human users and computer systems by assigning distinct roles that exploit the strengths and overcome the weaknesses of both partners.
Key Aspects
Specialization of Roles: Based on the level of autonomy and intervention, there are several human-robot relationships including master-slave, supervisor–subordinate, partner–partner, teacher–learner and fully autonomous robot. In addition to these roles, homotopy (a weighting function that allows a continuous change between leader and follower behaviors) was introduced as a flexible role distribution; a simple illustrative form of such a weighting is sketched after this list.
Establishing shared goal(s): Through direct discussion about goals or inference from statements and actions, agents must determine the shared goals they are trying to achieve.
Allocation of Responsibility and Coordination: Agents must decide how to achieve their goals, determine what actions will be done by each agent, and how to coordinate the actions of individual agents and integrate their results.
Shared context: Agents must be able to track progress toward their goals. They must keep track of what has been achieved and what remains to be done. They must evaluate the effects of actions and determine whether an acceptable solution has been achieved.
Communication: Any collaboration requires communication to define goals, negotiate over how to proceed and who will do what, and evaluate progress and results.
Adaptation and learning: Collaboration over time require partners to adapt themselves to each other and learn from one's partner both directly or indirectly.
Time and space: The time-space taxonomy divides human-robot interaction into four categories based on whether the humans and robots are using computing systems at the same time (synchronous) or different times (asynchronous) and while in the same place (collocated) or in different places (non-collocated).
Ergonomics: Human factors and ergonomics are one of the key aspects for a sustainable human-robot collaboration. The robot control system can use biomechanical models and sensors to optimize various ergonomic metrics, such as muscle fatigue.
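As a rough illustration of the role-weighting idea mentioned under "Specialization of Roles" (an assumed generic form for illustration, not necessarily the specific function used in the cited work), the commanded control input can be written as a convex blend of the human's and the robot's preferred inputs: u(t) = α(t) · u_human(t) + (1 − α(t)) · u_robot(t), with α(t) in [0, 1]. Here α(t) = 1 corresponds to the human acting as leader, α(t) = 0 to the robot leading, and intermediate values to a continuous sharing of control that can shift smoothly over time.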
See also
Industrial Robot
Collaborative Robot
Human-Robot Interaction
Computer Supported Collaboration
Collective Intentionality
References
External links
https://www.euronews.com/2018/01/29/the-future-of-work-human-robot-collaboration
https://www.kuka.com/en-us/technologies/human-robot-collaboration
Industrial robots
Robotics
Human–computer interaction | Human–robot collaboration | [
"Engineering"
] | 1,408 | [
"Automation",
"Industrial robots",
"Robotics",
"Human–machine interaction",
"Human–computer interaction"
] |
60,087,753 | https://en.wikipedia.org/wiki/Fr%C3%B6hlich%20effect | The Fröhlich effect is a visual illusion wherein the first position of a moving object entering a window is misperceived. When observers are asked to localize the onset position of the moving target, they typically make localization errors in the direction of movement ("ahead" of its true localization).
A proposed explanation for this effect is that the visual system is predictive, accounting for neural delays by extrapolating the trajectory of a moving stimulus into the future. In other words, when light from a moving object hits the retina, a certain amount of time is required before the object is perceived. In that time, the object has moved to a new location in the world. The motion extrapolation hypothesis asserts that the visual system will take care of such delays by extrapolating the position of moving objects forward in time. As such it is related to the flash lag illusion.
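A back-of-the-envelope calculation illustrates the size of the problem the visual system would face without such compensation (the numbers are illustrative assumptions, not values from the cited studies): if the stimulus moves at speed v and the processing latency is Δt, it travels Δx ≈ v · Δt during that latency. For example, v = 20 degrees of visual angle per second and Δt = 80 ms give Δx ≈ 1.6 degrees, a displacement that would be clearly noticeable if it were not compensated by extrapolation.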
See also
Tau effect
Kappa effect
Cutaneous rabbit illusion
Temporal illusions
Flash lag illusion
References
Optical illusions | Fröhlich effect | [
"Physics"
] | 204 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
60,087,947 | https://en.wikipedia.org/wiki/2018%20AG37 | is a distant trans-Neptunian object and centaur that was discovered from the Sun, farther than any other currently observable known object in the Solar System. Imaged in January 2018 during a search for the hypothetical Planet Nine, the confirmation of this object was announced in a press release in February 2021 by astronomers Scott Sheppard, David Tholen, and Chad Trujillo. The object was nicknamed "FarFarOut" to emphasize its distance from the Sun.
At the faint apparent magnitude of 25, only the largest telescopes in the world can observe it. Being so far from the Sun, moves slowly among the background stars and has been observed only nine times in the first two years. It requires an observation arc of several years to refine the uncertainties in the approximately 700-year orbital period and determine whether it is currently near or at aphelion (farthest distance from the Sun). JPL Horizons computes an aphelion around the year 2005 at about 133 AU, whereas Project Pluto computes aphelion around the year 1976 slightly further out at 134 AU. Its perihelion is a little less than that of Neptune.
Discovery
2018 AG37 was first imaged on 15 January 2018 by astronomers Scott Sheppard, David Tholen, and Chad Trujillo when they were surveying the sky using the large 8.2-meter Subaru Telescope at Mauna Kea Observatory, Hawaii, to find distant Solar System objects and the hypothetical Planet Nine, whose existence they proposed in 2014. However, it was not noticed until January 2019, when Sheppard reviewed the Subaru images taken in 2018 after an upcoming lecture of his was delayed by weather. In two of these images, taken one day apart in January, he identified a faint apparent magnitude 25.3 object that moved slowly relative to the background stars and galaxies. Based on the object's two positions in those images, Sheppard estimated its distance to be roughly 140 astronomical units (AU), farther than 2018 VG18, which had been discovered and announced by his team one month earlier in December 2018.
In his rescheduled talk on 21 February 2019, Sheppard remarked on his discovery of the object, which he jokingly nicknamed "FarFarOut" in succession to the nickname "Farout" used for the previous farthest known object, 2018 VG18. Following its discovery, Sheppard reobserved the object in March 2019 with the 6.5-meter Magellan-Baade telescope at Las Campanas Observatory, Chile. Additional observations were then made in May 2019 and January 2020 with the Subaru Telescope at Mauna Kea.
These observations over a two-year period established a tentative orbit solution for the object, permitting it to be confirmed and announced by the Minor Planet Center. The confirmation was formally announced in a press release by the Carnegie Institution for Science on 10 February 2021.
Name
The object was nicknamed "FarFarOut" for its distant location from the Sun, and particularly because it was even farther than the previous farthest known object, which was nicknamed "Farout". It is officially known by the provisional designation 2018 AG37, given by the Minor Planet Center when the discovery was announced. The provisional designation indicates the object's discovery date, with the first letter representing the first half of January and the succeeding letter and numbers indicating that it is the 932nd object discovered during that half-month.
The object has not yet been assigned an official minor planet number by the Minor Planet Center due to its short observation arc and orbital uncertainty. It will be given a minor planet number once its orbit is well secured by observations over multiple oppositions, and it will become eligible for naming by its discoverers after it has been numbered with a well-defined orbit.
Orbit
The object has been observed nine times over an observation arc of two years. Being so far from the Sun, it moves so slowly that two years of observations have not adequately determined its orbit. The nominal orbit is highly uncertain, with a condition code of 9. Several years of additional observations are necessary to refine the orbital uncertainties. It comes to opposition each January.
Only the object's distance and the orbital elements that define its position (inclination and longitude of the ascending node) have been adequately determined by its two-year observation arc. The orbital elements that define the shape and motion of its orbit (eccentricity, mean anomaly, etc.) are poorly determined because its observation arc does not provide sufficient coverage of its wide-ranging orbit, especially when it moves slowly due to its large distance. The nominal best-fit orbit solution provided by the Jet Propulsion Laboratory (JPL) Small-Body Database gives an orbital semi-major axis of and an eccentricity of , corresponding to a perihelion and aphelion distance of and , respectively. The orbital period of the object is poorly known, but it probably lies around 700 years.
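For reference, these quantities are tied together by the standard two-body relations: perihelion q = a(1 − e), aphelion Q = a(1 + e), and, for an orbit around the Sun, period P ≈ a^{3/2} years with the semi-major axis a expressed in astronomical units. As a rough consistency check using only the round figures quoted in this article, an aphelion near 134 AU and a perihelion a little inside Neptune's orbit (roughly 30 AU) give a ≈ (134 + 30)/2 = 82 AU, and hence P ≈ 82^{3/2} ≈ 740 years, in line with the roughly 700-year orbital period quoted above.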
Given the uncertainty of its nominal perihelion distance, it likely crosses Neptune's orbit (30.1 AU), with a nominal minimum orbit intersection distance (MOID) around . The object's small perihelion distance and elongated orbit imply that it has experienced strong gravitational interactions with Neptune in past close encounters. Other trans-Neptunian objects are known to have been scattered onto similarly distant and elongated orbits by Neptune; these are collectively known as scattered disc objects.
Distance
The object was initially estimated to be roughly 140 AU from the Sun, but this estimate was uncertain due to the very short initial observation arc. When it was announced in February 2021, it had an observation arc of two years, from which its distance from the Sun at the time of its discovery on 15 January 2018 could be determined. As of 2021, it is the farthest observed object in the Solar System.
However, over a hundred trans-Neptunian objects are known to have aphelion distances that bring them farther from the Sun than and many near-parabolic comets are currently much farther from the Sun. Comet Donati (C/1858 L1) is over , and Caesar's Comet (C/-43 K1) is calculated to be more than from the Sun. However, none of these more distant objects are currently observable even with the most powerful telescopes.
Physical characteristics
Based on the object's apparent brightness and projected distance, the Minor Planet Center calculates an absolute magnitude of 4.2. It is listed by the Minor Planet Center as the 12th intrinsically brightest known scattered disc object.
The object's size is unmeasured, but its diameter likely lies within a range set by an assumed geometric albedo of 0.10–0.25. Sheppard estimates that its diameter lies at the lower end of this range, as he concludes that it has a highly reflective and ice-rich surface. Johnston assumes a dark albedo of 0.057 and a larger diameter, and classifies the object as a centaur. If correct, that would make it the largest known centaur.
See also
2018 VG18, the next most distant known object, discovered in 2018 and nicknamed "Farout"
, the third most distant known object discovered by Sheppard's team in 2020
, the fourth most distant known object discovered by Sheppard's team in 2020
List of Solar System objects most distant from the Sun
Notes
References
External links
Solar System's Most Distant Known Member Confirmed, Carnegie Institution for Science, 10 February 2021
Astronomers Confirm Solar System's Most Distant Known Object Is Indeed Farfarout, NOIRLab, 10 February 2021
Record Breaking Distant Solar-System Object, Subaru Telescope/NAOJ, 10 February 2021
'Farfarout!' Solar system's most distant planetoid confirmed, University of Hawai'i News, 10 February 2021
"Beyond Pluto: The Hunt for a Massive Planet X", a talk by Sheppard announcing FarFarOut's discovery, Carnegie Institution for Science, 21 February 2019
Minor planet object articles (unnumbered)
20180115
SUS | 2018 AG37 | [
"Physics",
"Astronomy"
] | 1,565 | [
"Concepts in astronomy",
"Unsolved problems in astronomy",
"Possible dwarf planets"
] |
60,088,733 | https://en.wikipedia.org/wiki/Photomultiplier | A photomultiplier is a device that converts incident photons into an electrical signal.
Kinds of photomultiplier include:
Photomultiplier tube, a vacuum tube converting incident photons into an electric signal. Photomultiplier tubes (PMTs for short) are members of the class of vacuum tubes, and more specifically vacuum phototubes, which are extremely sensitive detectors of light in the ultraviolet, visible, and near-infrared ranges of the electromagnetic spectrum.
Magnetic photomultiplier, developed by the Soviets in the 1930s.
Electrostatic photomultiplier, a kind of photomultiplier tube demonstrated by Jan Rajchman of RCA Laboratories in Princeton, NJ in the late 1930s which became the standard for all future commercial photomultipliers. The first mass-produced photomultiplier, the Type 931, was of this design and is still commercially produced today.
Silicon photomultiplier, a solid-state device converting incident photons into an electric signal. Silicon photomultipliers, often called "SiPM" in the literature, are solid-state single-photon-sensitive devices based on Single-photon avalanche diode (SPAD) implemented on common silicon substrate.
References
Particle detectors | Photomultiplier | [
"Technology",
"Engineering"
] | 261 | [
"Particle detectors",
"Measuring instruments"
] |
60,089,520 | https://en.wikipedia.org/wiki/Spherical%20Bernstein%27s%20problem | The spherical Bernstein's problem is a possible generalization of the original Bernstein's problem in the field of global differential geometry, first proposed by Shiing-Shen Chern in 1969, and then later in 1970, during his plenary address at the International Congress of Mathematicians in Nice.
The problem
Are the equators in S^{n+1} the only smooth embedded minimal hypersurfaces which are topological n-dimensional spheres?
Additionally, the spherical Bernstein's problem, while itself a generalization of the original Bernstein's problem, can, too, be generalized further by replacing the ambient space by a simply-connected, compact symmetric space. Some results in this direction are due to the work of Wu-Chung Hsiang and Wu-Yi Hsiang.
Alternative formulations
Below are two alternative ways to express the problem:
The second formulation
Let the (n − 1)-sphere be embedded as a minimal hypersurface in the unit sphere S^n(1). Is it necessarily an equator?
By the Almgren–Calabi theorem, it's true when n = 3 (or n = 2 for the 1st formulation).
Wu-Chung Hsiang proved it for n ∈ {4, 5, 6, 7, 8, 10, 12, 14} (or n ∈ {3, 4, 5, 6, 7, 9, 11, 13}, respectively).
In 1987, Per Tomter proved it for all even n (or all odd n, respectively).
Thus, it remains unknown only for all odd n ≥ 9 (or all even n ≥ 8, respectively).
The third formulation
Is it true that an embedded, minimal hypersphere inside the Euclidean n-sphere is necessarily an equator?
Geometrically, the problem is analogous to the following problem:
Is the local topology at an isolated singular point of a minimal hypersurface necessarily different from that of a disc?
For example, the affirmative answer for the spherical Bernstein problem when n = 3 is equivalent to the fact that the local topology at an isolated singular point of any minimal hypersurface in an arbitrary Riemannian 4-manifold must be different from that of a disc.
Further reading
F.J. Almgren, Jr., Some interior regularity theorems for minimal surfaces and an extension of the Bernstein's theorem, Annals of Mathematics, volume 85, number 1 (1966), pp. 277–292
E. Calabi, Minimal immersions of surfaces in euclidean spaces, Journal of Differential Geometry, volume 1 (1967), pp. 111–125
P. Tomter, The spherical Bernstein problem in even dimensions and related problems, Acta Mathematica, volume 158 (1987), pp. 189–212
S.S. Chern, Brief survey of minimal submanifolds, Tagungsbericht (1969), Mathematisches Forschungsinstitut Oberwolfach
S.S. Chern, Differential geometry, its past and its future, Actes du Congrès international des mathématiciens (Nice, 1970), volume 1, pp. 41–53, Gauthier-Villars, (1971)
W.Y. Hsiang, W.T. Hsiang, P. Tomter, On the existence of minimal hyperspheres in compact symmetric spaces, Annales Scientifiques de l'École Normale Supérieure, volume 21 (1988), pp. 287–305
Mathematical problems
Unsolved problems in geometry
Differential geometry | Spherical Bernstein's problem | [
"Mathematics"
] | 704 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in geometry"
] |
60,090,599 | https://en.wikipedia.org/wiki/ANNOVAR | ANNOVAR (ANNOtate VARiation) is a bioinformatics software tool for the interpretation and prioritization of single nucleotide variants (SNVs), insertions, deletions, and copy number variants (CNVs) of a given genome.
It has the ability to annotate human genomes hg18, hg19, hg38, and model organism genomes such as mouse (Mus musculus), zebrafish (Danio rerio), fruit fly (Drosophila melanogaster), roundworm (Caenorhabditis elegans), yeast (Saccharomyces cerevisiae) and many others. The annotations can be used to determine the functional consequences of the mutations on the genes and organisms, infer cytogenetic bands, report functional importance scores, and/or find variants in conserved regions. ANNOVAR, along with SNP effect (SnpEFF) and Variant Effect Predictor (VEP), is one of the three most commonly used variant annotation tools.
Background
The cost of high throughput DNA sequencing has reduced drastically from around $100 million/human genome in 2001 to around $1000/human genome in 2017. Due to this increase in accessibility, high throughput DNA sequencing has become more widely used in research and clinical settings. Some common areas that utilize high throughput DNA sequencing extensively are: Whole Exome Sequencing, Whole Genome Sequencing (WGS), and genome wide association studies (GWAS).
There are a growing number of tools available that seek to comprehensively manage, analyze and interpret the enormous amount of data generated from high-throughput DNA sequencing. The tools are required to be efficient and robust enough to analyze a large number of variants (more than 3 million in a human genome) while being sensitive enough to identify rare and clinically relevant variants that are likely harmful/deleterious. ANNOVAR was developed by Kai Wang in 2010 at the Center for Applied Genomics at Children's Hospital of Philadelphia. It is a type of variant annotation tool that compiles deleterious genetic variant prediction scores from programs such as PolyPhen, ClinVar, and CADD and annotates the SNVs, insertions, deletions, and CNVs of the provided genome. ANNOVAR is one of the first efficient, configurable, extensible and cross-platform compatible variant annotation tools created.
In terms of the larger bioinformatics workflow, ANNOVAR fits in near the end, after DNA sequencing reads have been mapped and aligned and variants have been predicted from an alignment file (BAM), a process known as variant calling. This process produces a resultant VCF file, a tab-separated text file with a tabular structure containing genetic variants as rows. This file can then be used as input into the ANNOVAR software program for the variant annotation process, outputting interpretations of the variants identified from the upstream bioinformatics pipeline.
Types of functional annotation of genetic variants
Gene-based annotation
This approach identifies whether the input variants cause protein-coding changes and which amino acids are affected by the mutations. The input variants may lie in exons, introns, intergenic regions, splice acceptor/donor sites, and 5′/3′ untranslated regions. The focus is to explore the relationship between non-synonymous mutations (SNPs, indels, or CNVs) and their functional impact on known genes. In particular, gene-based annotation will highlight the exact amino acid change if the mutation is in an exonic region, and the predicted effect on the function of the known gene. This approach is useful for identifying variants in known genes from Whole Exome Sequencing data.
Region-based annotation
This approach identifies deleterious variants in specific genomic regions based on the genomic elements around the gene. Some questions that region-based annotation takes into account are:
Is the variant in a known conserved genomic region?: Mutations occur during mitosis and meiosis. If there were no selective pressure for specific nucleotide sequences, then all areas of a genome would be mutated at equal rates. The genomic regions that are highly conserved indicate genomic sequences that are essential to the organism's survival and/or reproductive success. Thus, if the variant disrupts a highly conserved region, the variant is likely highly deleterious.
Is the variant in a predicted transcription factor binding site?: DNA is transcribed into messenger RNA (mRNA) by RNA polymerase II. This process can be modulated by transcription factors, which can enhance or inhibit binding of RNA polymerase II. If the variant disrupts a transcription factor binding site, then transcription of the gene could be altered, causing changes in gene expression level and/or the amount of protein produced. These changes could cause phenotypic variations.
Is the variant in a predicted miRNA target site?: MicroRNA (miRNA) is a type of RNA that binds complementarily to a targeted mRNA sequence to suppress or silence translation of the mRNA. If the variant disrupts the miRNA target location, the miRNA could have an altered binding affinity to the corresponding gene transcript, thus changing the mRNA expression level of the transcript. This could further impact protein production levels, which could cause phenotypic variations.
Is the variant predicted to interrupt a stable RNA secondary structure?: RNA can function at the RNA level as non-coding RNA or be translated into proteins for downstream processes. RNA secondary structures are extremely important in determining the correct half-life and function of those RNA. Two RNA species with tightly regulated secondary structures are ribosomal RNA (rRNA) and transfer RNA (tRNA) which are essential in translation of mRNA to protein. If the variant disrupts the stability of the RNA secondary structure, the half-life of the RNA could be shortened thus lowering the concentration of RNA in the cell.
Non-coding regions encompass 99% of the human genome, and region-based annotation is extremely useful in identifying variants in those regions. This approach can be used on WGS data.
Filter-based annotation
This approach identifies variants that are documented in specific databases. The variants could be obtained from dbSNP, the 1000 Genomes Project, or a user-supplied list. Additional information could be obtained from the frequency of the variants in the above databases or from the predicted deleterious scores created by PolyPhen, CADD, ClinVar or many others. The less frequently a variant appears in the public databases, the more deleterious it is likely to be. Results from different deleterious score prediction tools can be combined by the researcher to make a more accurate call on the variant.
Taken together, these approaches complement one another to filter through over 4 million variants in a human genome. Common, low-deleterious score variants are eliminated to reveal the rare, high-deleterious score variants which could be causal for congenital diseases.
Technical information
ANNOVAR is a command-line tool written in the Perl programming language and can be run on any operating system that has a Perl interpreter installed. If used for non-commercial purposes, it is available free as an open-source package that is downloadable through the ANNOVAR website. ANNOVAR can process most next-generation sequencing data which has been run through a variant calling software.
File formats
The ANNOVAR software accepts text-based input files, including VCF (Variant Call Format), the gold standard for describing genetic loci.
The program's main annotation script, annotate_variation.pl, requires a custom input file format, the ANNOVAR input format (.avinput). Common file types can be converted to the ANNOVAR input format for annotation using a provided script (see below). It is a simple text file where each line corresponds to a variant, and within each line are tab-delimited columns representing the basic genomic coordinate fields (chromosome, start position, end position, reference nucleotides, and observed nucleotides), followed by optional columns.
The ANNOVAR input file contains the following basic fields (a minimal writing sketch follows the list):
Chr
Start
End
Ref
Alt
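A minimal sketch of producing a file in this layout is shown below; the variant values and the file name are hypothetical and serve only to illustrate the tab-delimited five-column structure described above.

```python
# Minimal sketch: write two hypothetical variants to an ANNOVAR input file.
# Columns are Chr, Start, End, Ref, Alt, tab-delimited; optional columns may follow.
variants = [
    ("1", 948921, 948921, "T", "C"),        # hypothetical single-nucleotide variant
    ("16", 50745926, 50745927, "AG", "-"),  # hypothetical 2-bp deletion
]

with open("example.avinput", "w") as handle:
    for chrom, start, end, ref, alt in variants:
        handle.write(f"{chrom}\t{start}\t{end}\t{ref}\t{alt}\n")
```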
For basic "out-of-the-box" usage:
A popular function of the ANNOVAR tool is the table_annovar.pl script, which simplifies the workflow into a single command-line call, given that the data sources for annotation have already been downloaded. File conversion from a VCF file is handled within the call, followed by annotation and output to an Excel-compatible file. The script takes a number of parameters for annotation and, for VCF input, outputs a VCF file with the annotations as key-value pairs inside the INFO column for each genetic variant, e.g. "genomic_function=exonic".
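As an illustration, the single-call workflow could be driven from Python roughly as follows. The file paths, genome build, and protocol/operation choices are assumptions made for the sketch, and the exact flags should be checked against the documentation of the installed ANNOVAR release.

```python
# Sketch of invoking table_annovar.pl on a VCF file (paths, build and protocols
# are illustrative assumptions, not values taken from the text above).
import subprocess

cmd = [
    "perl", "table_annovar.pl",
    "input.vcf",                              # VCF input; conversion is handled internally
    "humandb/",                               # directory of downloaded annotation databases
    "-buildver", "hg19",                      # genome build
    "-out", "myanno",                         # output file prefix
    "-protocol", "refGene,cytoBand,exac03",   # gene-, region- and filter-based sources
    "-operation", "g,r,f",                    # one operation code per protocol entry
    "-nastring", ".",                         # placeholder for missing annotations
    "-vcfinput",                              # tells the script the input is a VCF
]
subprocess.run(cmd, check=True)
```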
Conversion to the ANNOVAR input file format
File conversion to the ANNOVAR input format is possible using the provided file format conversion script convert2annovar.pl. The program accepts common file formats output by upstream variant calling tools, and the subsequent functional annotation scripts, such as annotate_variation.pl, use the resulting ANNOVAR input file. File formats accepted by convert2annovar.pl include the following (an example invocation follows the list):
Variant Call Format
Samtools genotype-calling pileup format
Illumina export format from GenomeStudio
SOLiD GFF genotype-calling format
Complete Genomics variant format
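A hedged example of the conversion step for the first of these formats follows; the file names are placeholders, and the -format vcf4 switch is the commonly documented option for VCF input.

```python
# Sketch: convert a VCF file to the ANNOVAR input format using convert2annovar.pl.
import subprocess

with open("example.avinput", "w") as out:
    subprocess.run(
        ["perl", "convert2annovar.pl", "-format", "vcf4", "input.vcf"],
        stdout=out,   # the converted, tab-delimited records are written to stdout
        check=True,
    )
```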
Generating input files based on specific variants, transcripts, or genomic regions:
When investigating candidate loci that are linked to diseases, using the above variant calling file formats as input to ANNOVAR is a standard workflow for functional annotation of genetic variants output from an upstream bioinformatics pipeline. ANNOVAR can also be used in other scenarios, such as interrogating a set of genetic variants of interest based on a list of dbSNP identifiers, as well as variants within specific genomic or exomic regions.
In the case of dbSNP identifiers, if the convert2annovar.pl script is provided with a list of identifiers (e.g. rs41534544, rs4308095, rs12345678) in a text file, along with the reference genome of interest as a parameter, ANNOVAR will output an ANNOVAR input file with the genomic coordinate fields for those variants, which can then be used for functional annotation.
In the case of genomic regions, one can provide a genomic range of interest (e.g. chr1:2000001-2000003) along with the reference genome of interest and ANNOVAR will generate an ANNOVAR input file of all the genetic loci spanning that range. In addition, insertion and deletion size could also be specified in which the script will select all the genetic loci where a specific size of interest insertion or deletion is found.
Lastly, if looking at variants within specific exonic regions, users can generate ANNOVAR input files for all possible variants in exons (including splicing variants) when the convert2annovar.pl script is provided an RNA transcript identifier (e.g. NM_022162) based on the standard HGVS (Human Genome Variation Society) nomenclature.
Output file
The possible output files are an annotated .avinput file, CSV, TSV, or VCF. Depending on the annotation strategy taken, the input and output files will differ. It is possible to configure the output file type for a given input file by providing the program with the appropriate parameter.
For example, for the table_annovar.pl program, if the input file is VCF, then the output will also be a VCF file. If the input file is of the ANNOVAR input format type, then the output will be a TSV by default, with the option to output to CSV if the -csvout parameter is specified. By choosing CSV or TSV as the output file type, a user could open the files to view the annotations in Excel or a different spreadsheet software application. This is a popular feature among users.
The output file will contain all the data from the original input file with additional columns for the desired annotations. For example, when annotating variants with characteristics such as (1) genomic function and (2) the functional role of the coding variant, the output file will contain all the columns from the input file, followed by additional columns "genomic_function" (e.g. with values "exonic" or "intronic") and "coding_variant_function" (e.g. with values "synonymous SNV" or "non-synonymous SNV").
System efficiency
Benchmarked on a modern desktop computer (3 GHz Intel Xeon CPU, 8GB memory), for 4.7 million variants, ANNOVAR requires ~4 minutes to perform gene-based functional annotation, or ~15 minutes to perform stepwise "variants reduction". It is said to be practical for performing variant annotation and variant prioritization on hundreds of human genomes in a day.
ANNOVAR could be sped up by using the -thread argument which enables multi-threading so that input files could be processed in parallel.
Data resources
To use ANNOVAR for functional annotation of variants, annotation datasets can be downloaded using the annotate_variation.pl script, which saves them to local disk. Different annotation data sources are used for the three major types of annotation (gene-based, region-based, and filter-based).
These are some of the data sources for each annotation type:
Gene-based annotation
UCSC/Ensembl genes
hg38
GENCODE/CCDS
Region-based annotation
ENCODE
Custom-made databases conforming to GFF3 (Generic Feature Format version 3)
Filter-based annotation
Given the large number of data sources for filter-based annotation, here are examples of which subsets of the datasets to use for a few of the most common use cases.
For frequency of variants in whole-exome data:
ExAC: with allele frequencies for all ethnic groups
NHLBI-ESP: from 6500 exomes, use three population groupings
gnomAD allele frequency: with allele frequencies for multiple populations
For disease-specific variants:
ClinVar: with individual columns for each ClinVar field for each variant
COSMIC: somatic mutations from cancer and the frequency of occurrence in each subtype of cancer
ICGC: mutations from the International Cancer Genome Consortium
NCI-60: human tumor cell panel exome sequencing allele frequency data
Example application
Using ANNOVAR for prioritization of genetic variants to identify mutations in a rare genetic disease
ANNOVAR is one of the common annotation tools for identifying candidate and causal mutations and genes for rare genetic diseases.
Using a combination of gene-based and filter-based annotation followed by variant reduction based on the annotation values of the variants, the causal gene in a rare recessive Mendelian disease called Miller syndrome can be identified.
This will involve synthesizing a genome-wide data set of ~4.2 million single nucleotide variants (SNVs) and ~0.5 million insertions and deletions (indels). Two known causal mutations for Miller syndrome (G152R and G202A in the DHODH gene) are also included
Steps in identifying the causal variants for the disease using ANNOVAR (a schematic filtering sketch follows the list):
Gene-based annotation to identify exonic/splicing variants of the combination of SNVs and indels (~4.7 million variants) where a total of 24,617 exonic variants are identified.
Since Miller syndrome is a rare Mendelian disease, only exonic protein-changing variants are of interest, which number 11,166. From these, 4,860 variants are identified that fall in highly conserved genomic regions.
As public databases such as dbSNP and 1000 Genomes Project archive previously reported variants which are often common, it is less likely that they will contain the Miller syndrome causal variants which are rare. Hence, variants found in those data sources are filtered out and 413 variants remain.
Then, genes are assessed for whether multiple variants exist in the same gene as compound heterozygotes and 23 genes are left.
Finally, 'dispensable' genes are removed: those with high-frequency nonsense mutations (in greater than 1% of subjects in the 1000 Genomes Project), which are susceptible to sequencing and alignment errors on short-read sequencing platforms and are considered less likely to be causal of a rare Mendelian disease. Three genes are filtered out as a result, leaving 20 candidate genes, including the causal gene DHODH.
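The sketch below restates this reduction as successive filters over a table of annotated variants. The column names and thresholds are invented for illustration and do not correspond to actual ANNOVAR output headers.

```python
# Schematic variant-reduction sketch using pandas (column names are invented).
import pandas as pd

variants = pd.read_csv("annotated_variants.tsv", sep="\t")

# 1. Keep exonic/splicing variants.
step1 = variants[variants["genomic_function"].isin(["exonic", "splicing"])]

# 2. Keep protein-changing variants that fall in conserved regions.
step2 = step1[(step1["coding_variant_function"] != "synonymous SNV")
              & (step1["conserved_region"])]

# 3. Drop variants already catalogued in public databases such as dbSNP/1000 Genomes.
step3 = step2[step2["dbsnp_id"].isna() & step2["thousand_genomes_af"].isna()]

# 4. Keep genes carrying two or more remaining variants (candidate compound heterozygotes).
counts = step3.groupby("gene").size()
candidates = step3[step3["gene"].isin(counts[counts >= 2].index)]
print(sorted(candidates["gene"].unique()))
```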
Limitations of ANNOVAR
Two limitations of ANNOVAR relate to detection of common diseases and larger structural variant annotations. These problems are present in all current variant annotation tools.
Most common diseases, such as diabetes and Alzheimer's, have multiple variants throughout the genome which are common in the population. These variants are expected to have low individual deleterious scores and to cause disease through the accumulation of multiple variants. However, ANNOVAR has default "variant-reduction" schemes that provide a small list of rare and highly predicted deleterious variants. These default settings could be adjusted so the output displays additional variants with decreasing predicted deleterious scores. ANNOVAR is primarily used for identifying variants involved in rare diseases, where the causal mutation is expected to be rare and highly deleterious.
Larger structural variants (SVs) such as chromosomal inversions, translocations, and complex SVs have been shown to cause diseases such as haemophilia A and Alzheimer's. However, SVs are often difficult to annotate because it is difficult to assign specific deleterious scores to large mutated genomic regions. Currently, ANNOVAR can only annotate genes contained within deletions or duplications, or small indels of <50 bp. ANNOVAR cannot infer complex SVs and translocations.
Alternate variant annotation tools
There are also two other types of SNP annotation tools that are similar to ANNOVAR: SNP effect (SnpEFF) and Variant Effect Predictor (VEP). Many of the features of ANNOVAR, SnpEFF, and VEP are the same, including the input and output file formats, regulatory region annotations, and known variant annotations. However, the main differences are that ANNOVAR cannot annotate loss-of-function predictions, whereas both SnpEFF and VEP can. Also, ANNOVAR cannot annotate microRNA structural binding locations, whereas VEP can. MicroRNA structural binding location predictions can be informative in revealing the role of post-transcriptional mutations in disease pathogenesis. Loss-of-function mutations are changes in the genome that result in the total dysfunction of the gene product. Thus, these predictions could be extremely informative with regard to disease diagnosis, especially in rare monogenic diseases.
*Table adapted from McLaren et al. (2016).
References
Bioinformatics software
Genetics software
Genomics techniques | ANNOVAR | [
"Chemistry",
"Biology"
] | 4,038 | [
"Genetics techniques",
"Genomics techniques",
"Bioinformatics software",
"Bioinformatics",
"Molecular biology techniques"
] |
60,093,357 | https://en.wikipedia.org/wiki/Cross-coupling%20partner | In cross-coupling reactions, the component reagents are called cross-coupling partners or simply coupling partners. These reagents can be further classified according to their nucleophilic vs electrophilic character:
R-X + R'-Y → R-R' + XY
Typically the electrophilic coupling partner (R-X) is an aryl halide, but triflates are also used. Nucleophilic coupling partners (R'-Y) are more diverse. In the Suzuki reaction, boronic esters and boronic acids serve as nucleophilic coupling partners. Expanding the scope of coupling partners is a focus of methods development in organic synthesis.
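For instance, the generic scheme above specializes in the Suzuki case (with Ar and Ar' standing for arbitrary aryl groups, and the palladium catalyst and base omitted for brevity) to:
Ar-X + Ar'-B(OH)2 → Ar-Ar'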
References
Carbon-carbon bond forming reactions | Cross-coupling partner | [
"Chemistry"
] | 153 | [
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
60,094,428 | https://en.wikipedia.org/wiki/Chern%27s%20conjecture%20%28affine%20geometry%29 | Chern's conjecture for affinely flat manifolds was proposed by Shiing-Shen Chern in 1955 in the field of affine geometry. As of 2018, it remains an unsolved mathematical problem.
Chern's conjecture states that the Euler characteristic of a compact affine manifold vanishes.
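Stated symbolically, this is the following implication (a routine formalization of the sentence above, with notation chosen here rather than taken from the source):

```latex
% M: a closed manifold whose tangent bundle carries a flat, torsion-free
% (affine) connection; chi(M) denotes its Euler characteristic.
M \ \text{compact affine} \;\Longrightarrow\; \chi(M) = 0.
```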
Details
In case the connection ∇ is the Levi-Civita connection of a Riemannian metric, the Chern–Gauss–Bonnet formula
$\chi(M) = \frac{1}{(2\pi)^n}\int_M \mathrm{Pf}(\Omega)$ (for a closed manifold $M$ of dimension $2n$ with curvature form $\Omega$)
implies that the Euler characteristic is zero, since the curvature of a flat connection vanishes. However, not all flat torsion-free connections on the tangent bundle admit a compatible metric, and therefore Chern–Weil theory cannot be used in general to write down the Euler class in terms of the curvature.
History
The conjecture is known to hold in several special cases:
when a compact affine manifold is 2-dimensional (as shown by Jean-Paul Benzécri in 1955, and later by John Milnor in 1957)
when a compact affine manifold is complete, i.e., affinely diffeomorphic to a quotient space of the affine space under a proper action of a discrete group of affine transformations (the result was shown by Bertram Kostant and Dennis Sullivan in 1975 and would also follow immediately from the Auslander conjecture; Kostant and Sullivan showed that a closed manifold with nonzero Euler characteristic cannot admit a complete affine structure)
when a compact affine manifold is a higher-rank irreducible locally symmetric manifold (as shown by William Goldman and Morris Hirsch in 1984; they showed that a higher-rank irreducible locally symmetric manifold can never admit an affine structure)
when a compact affine manifold is locally a product of hyperbolic planes (as shown by Michelle Bucher and Tsachik Gelander in 2011)
when a compact affine manifold admits a parallel volume form (i.e., with linear holonomy in SL(n, ℝ); this was shown by Bruno Klingler in 2015; this weaker case was known as Chern's conjecture for special affine manifolds; a conjecture of Markus predicts that it is equivalent to completeness)
when a compact affine manifold is a complex hyperbolic surface (as shown by Hester Pieters in 2016)
Additionally obtained related results:
In 1958, Milnor proved inequalities which completely characterise those oriented rank two bundles over a surface that admit a flat connection
In 1977, Smillie proved that the condition that the connection is torsion-free matters. For each even dimension greater than 2, Smillie constructed closed manifolds with non-zero Euler characteristic that admit a flat connection on their tangent bundle
For flat pseudo-Riemannian manifolds or complex affine manifolds, this follows from the Chern–Gauss–Bonnet theorem.
Also, as proven by M.W. Hirsch and William Thurston in 1975 for incomplete affine manifolds, the conjecture holds if the holonomy group is a finite extension, a free product of amenable groups (however, their result applies to any flat bundles over manifolds).
In 1977, John Smillie produced a manifold with the tangent bundle with nonzero-torsion flat connection and nonzero Euler characteristic, thus he disproved the strong version of the conjecture asking whether the Euler characteristic of a closed flat manifold vanishes.
Later, Hyuk Kim and Hyunkoo Lee proved the conjecture for affine manifolds, and more generally for projective manifolds developing into an affine space with amenable holonomy, by a different technique using a nonstandard polyhedral Gauss–Bonnet theorem developed by Ethan Bloch, Kim, and Lee.
In 2002, Suhyoung Choi slightly generalized the result of Hirsch and Thurston that if the holonomy of a closed affine manifold is isomorphic to amenable groups amalgamated or HNN-extended along finite groups, then the Euler characteristic of the manifold is 0. He showed that if an even-dimensional manifold is obtained from a connected sum operation from K(π, 1)s with amenable fundamental groups, then the manifold does not admit an affine structure (generalizing a result of Smillie).
In 2008, after Smillie's simple examples of closed manifolds with flat tangent bundles (these would have affine connections with zero curvature, but possibly nonzero torsion), Bucher and Gelander obtained further results in this direction.
In 2015, Mihail Cocos proposed a possible way to solve the conjecture and proved that the Euler characteristic of a closed even-dimensional affine manifold vanishes.
In 2016, Huitao Feng and Weiping Zhang, both of Nankai University, claimed to prove the conjecture in the general case, but a serious flaw was found, and the claim was thereafter retracted. After the correction, their current result is a formula that counts the Euler number of a flat vector bundle in terms of vertices of transversal open coverings.
Notably, the intrinsic Chern–Gauss–Bonnet theorem proved by Chern (which gives the vanishing of the Euler characteristic for a closed affine manifold) applies only to orthogonal connections, not linear ones, which is why the conjecture remains open in this generality (affine manifolds are considerably more complicated than Riemannian manifolds, where metric completeness is equivalent to geodesic completeness).
There also exists a related conjecture by Mikhail Leonidovich Gromov on the vanishing of bounded cohomology of affine manifolds.
Related conjectures
The conjecture of Chern can be considered a particular case of the following conjecture:
A closed aspherical manifold with nonzero Euler characteristic doesn't admit a flat structure
This conjecture was originally stated for general closed manifolds, not just for aspherical ones (but due to Smillie, there's a counterexample), and it itself can, in turn, also be considered a special case of even more general conjecture:
A closed aspherical manifold with nonzero simplicial volume doesn't admit a flat structure
While generalizing the Chern's conjecture on affine manifolds in these ways, it's known as the generalized Chern conjecture for manifolds that are locally a product of surfaces.
References
Further reading
J.P. Benzécri, Variétés localement plates, Princeton University Ph.D. thesis (1955)
J.P. Benzécri, Sur les variétés localement affines et projectives, Bulletin de la Société Mathématique de France, volume 88 (1960), pp. 229–332
W. Goldman and M. Hirsch, The radiance obstruction and parallel forms on affine manifolds, Transactions of the American Mathematical Society, volume 286, number 2 (1984), pp. 629–649
M. Bucher and T. Gelander, Milnor-Wood inequalities for manifolds which are locally a product of surfaces, Advances in Mathematics, volume 228 (2011), pp. 1503–1542
H. Pieters, Hyperbolic spaces and bounded cohomology, University of Geneva Ph.D. thesis (2016)
B. Kostant and D. Sullivan, The Euler characteristic of an affine space form is zero, Bulletin of the American Mathematical Society, volume 81, number 5 (1975), pp. 937–938
J. Milnor, On the existence of a connection with curvature zero, Commentarii Mathematici Helvetici, volume 32 (1957), pp. 215–223
B. Klingler, Chern's Conjecture for special affine manifolds, pre-print 2015
B. Klingler, Chern’s conjecture for special affine manifolds, Annals of Mathematics, volume 186 (2017), pp. 1–27
M. Hirsch and W. Thurston, Foliated bundles, invariant measures and flat manifolds, Annals of Mathematics, volume 101 (1975), pp. 369–390
J. Smillie, Flat manifolds with non-zero Euler characteristic, Commentarii Mathematici Helvetici, volume 52 (1977), pp. 453–456
H. Kim and H. Lee, The Euler characteristic of a certain class of projectively flat manifolds, Topology and its Applications, volume 40 (1991), pp. 195–201
H. Kim and H. Lee, The Euler characteristic of projectively flat manifolds with amenable fundamental groups, Proceedings of the American Mathematical Society, volume 118 (1993), pp. 311–315
E. Bloch, The angle defect for arbitrary polyhedra, Beiträge zur Algebra und Geometrie, volume 39 (1998), pp. 379–393
H. Kim, A polyhedral Gauss-Bonnet formula and projectively flat manifolds, GARC preprint, Seoul National University
S. Choi, The Chern Conjecture for Affinely Flat Manifolds Using Combinatorial Methods, Geometriae Dedicata, volume 97 (2003), pp. 81–92
M. Bucher and T. Gelander, Milnor-Wood inequalities for manifolds locally isometric to a product of hyperbolic planes, Comptes Rendus Mathematique, volume 346, numbers 11–12 (2008), pp. 661–666
M. Gromov, Asymptotic invariants of infinite groups. Geometric group theory. Volume 2 (1993), 8.A
Affine geometry
Differential geometry
Conjectures
Unsolved problems in geometry | Chern's conjecture (affine geometry) | [
"Mathematics"
] | 1,989 | [
"Geometry problems",
"Unsolved problems in mathematics",
"Unsolved problems in geometry",
"Conjectures",
"Mathematical problems"
] |
60,094,874 | https://en.wikipedia.org/wiki/Isopropylmagnesium%20chloride | Isopropylmagnesium chloride is an organometallic compound with the general formula (CH3)2HCMgCl. This highly flammable, colorless, and moisture sensitive material is the Grignard reagent derived from isopropyl chloride. It is commercially available, usually as a solution in tetrahydrofuran.
Synthesis and reactivity
Solutions of isopropylmagnesium chloride are prepared by treating isopropyl chloride with magnesium metal in refluxing ether:
(CH3)2CHCl + Mg → (CH3)2CHMgCl
This reagent is used to prepare other Grignard reagents by transmetalation. An illustrative reaction involves the generation of the Grignard reagent derived from bromo-3,5-bis(trifluoromethyl)benzene:
(CH3)2HCMgCl + (CF3)2C6H3Br → (CH3)2HCCl + (CF3)2C6H3MgBr
Addition of one equivalent of LiCl to isopropylmagnesium chloride gives "Turbo Grignard" solutions, named so due to the increased rate and efficiency for transmetalation reactions.
Isopropylmagnesium chloride is also used to prepare isopropyl compounds, such as chlorodiisopropylphosphine:
PCl3 + 2 (CH3)2CHMgCl → [(CH3)2CH]2PCl + 2 MgCl2
This reaction exploits the bulky nature of the isopropyl substituent.
Turbo-Grignard reagents
As initially reported by Knochel et al., lithium chloride enhances the ability of isopropylmagnesium chloride toward transmetalation reactions. The more reactive species, a LiCl-iPrMgCl complex, is called a Turbo-Grignard reagent. These species are related to Turbo-Hauser bases, a family of magnesium amido compounds that also contain LiCl. "Turbo-Grignards", as they are often called, are aggregates with the formula [i-PrMgCl·LiCl]2. These species promote formation of aryl and heteroaryl Grignard reagents by halogen-magnesium exchange:
fast, homogeneous: ArBr + (CH3)2CHMgCl·LiCl → ArMgCl·LiCl + (CH3)2CHBr
The traditional method for generating the aryl Grignard reagent proceeds less predictably:
slow, heterogeneous: ArBr + Mg → ArMgBr
Furthermore, traditional routes to Grignard reagents have limited functional group compatibility, whereas the Turbo-Grignard method tolerates other halides, some ester groups, and nitriles.
References
Organomagnesium compounds
Isopropyl compounds | Isopropylmagnesium chloride | [
"Chemistry"
] | 563 | [
"Organomagnesium compounds",
"Reagents for organic chemistry"
] |
60,095,433 | https://en.wikipedia.org/wiki/Hicks%20equation | In fluid dynamics, Hicks equation, sometimes also referred as Bragg–Hawthorne equation or Squire–Long equation, is a partial differential equation that describes the distribution of stream function for axisymmetric inviscid fluid, named after William Mitchinson Hicks, who derived it first in 1898. The equation was also re-derived by Stephen Bragg and William Hawthorne in 1950 and by Robert R. Long in 1953 and by Herbert Squire in 1956. The Hicks equation without swirl was first introduced by George Gabriel Stokes in 1842. The Grad–Shafranov equation appearing in plasma physics also takes the same form as the Hicks equation.
Representing $(r, \theta, z)$ as coordinates in the sense of a cylindrical coordinate system with corresponding flow velocity components denoted by $(v_r, v_\theta, v_z)$, the stream function $\psi$ that defines the meridional motion can be defined as

$rv_r = -\frac{\partial \psi}{\partial z}, \qquad rv_z = \frac{\partial \psi}{\partial r},$

which satisfies the continuity equation for axisymmetric flows automatically. The Hicks equation is then given by

$\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} = r^2 \frac{\mathrm{d}H}{\mathrm{d}\psi} - \Gamma\frac{\mathrm{d}\Gamma}{\mathrm{d}\psi},$

where

$H(\psi) = \frac{p}{\rho} + \frac{1}{2}\left(v_r^2 + v_\theta^2 + v_z^2\right), \qquad \Gamma(\psi) = r v_\theta,$

where $H(\psi)$ is the total head, cf. Bernoulli's principle, and $\Gamma(\psi)$ is the circulation, both of them being conserved along streamlines. Here, $p$ is the pressure and $\rho$ is the fluid density. The functions $H(\psi)$ and $\Gamma(\psi)$ are known functions, usually prescribed at one of the boundaries; see the example below. If there are closed streamlines in the interior of the fluid domain, say, a recirculation region, then the functions $H(\psi)$ and $\Gamma(\psi)$ are typically unknown and therefore in those regions the Hicks equation is not useful; the Prandtl–Batchelor theorem provides details about the closed streamline regions.
Derivation
Consider the axisymmetric flow in cylindrical coordinate system with velocity components and vorticity components . Since in axisymmetric flows, the vorticity components are
.
Continuity equation allows to define a stream function such that
(Note that the vorticity components and are related to in exactly the same way that and are related to ). Therefore the azimuthal component of vorticity becomes
The inviscid momentum equations , where is the Bernoulli constant, is the fluid pressure and is the fluid density, when written for the axisymmetric flow field, becomes
in which the second equation may also be written as , where is the material derivative. This implies that the circulation round a material curve in the form of a circle centered on -axis is constant.
If the fluid motion is steady, the fluid particle moves along a streamline, in other words, it moves on the surface given by constant. It follows then that and , where . Therefore the radial and the azimuthal component of vorticity are
.
The components of and are locally parallel. The above expressions can be substituted into either the radial or axial momentum equations (after removing the time derivative term) to solve for . For instance, substituting the above expression for into the axial momentum equation leads to
But can be expressed in terms of as shown at the beginning of this derivation. When is expressed in terms of , we get
This completes the required derivation.
Example: Fluid with uniform axial velocity and rigid body rotation in far upstream
Consider the problem where the fluid in the far stream exhibit uniform axial velocity and rotates with angular velocity . This upstream motion corresponds to
From these, we obtain
indicating that in this case, and are simple linear functions of . The Hicks equation itself becomes
which upon introducing becomes
where .
Yih equation
For an incompressible flow , but with variable density, Chia-Shun Yih derived the necessary equation. The velocity field is first transformed using Yih transformation
where is some reference density, with corresponding Stokes streamfunction defined such that
Let us include the gravitational force acting in the negative direction. The Yih equation is then given by
where
References
Fluid dynamics
Differential equations | Hicks equation | [
"Chemistry",
"Mathematics",
"Engineering"
] | 740 | [
"Chemical engineering",
"Mathematical objects",
"Differential equations",
"Equations",
"Piping",
"Fluid dynamics"
] |
40,831,187 | https://en.wikipedia.org/wiki/Dynamic%20balance | Dynamic balance is the branch of mechanics that is concerned with the effects of forces on the motion of a body or system of bodies, especially of forces that do not originate within the system itself, which is also called kinetics.
Dynamic balance is the ability of an object to balance while in motion or switching between positions.
References
Mechanics | Dynamic balance | [
"Physics",
"Engineering"
] | 67 | [
"Classical mechanics stubs",
"Mechanics",
"Classical mechanics",
"Mechanical engineering"
] |
40,831,958 | https://en.wikipedia.org/wiki/Spiling | Spiling is a traditional technique used in temperate regions of the world for the prevention of erosion to river and stream banks.
Willow spiling is currently used in the United Kingdom; live willow rods are woven between live willow uprights and the area behind is filled with soil for the willow to root into.
Kipling's poem The Land mentions it: "They spiled along the water-course with trunks of willow-trees, And planks of elms behind 'em and immortal oaken knees."
The species of willow used are riparian (associated with rivers); the posts, in diameter, are usually Salix alba or S. fragilis, and S. viminalis varieties are used for the interwoven rods. The living willow posts are driven into the bank, to a depth of or more, at intervals, and the thinner rods are woven in between; the rods are best woven at an angle slightly above horizontal to ensure good survival rates. A row of stones, gabions or wooden planks held by posts can be added to the bottom of each "spile" to prevent undercutting while the willow is establishing itself. All works should be done during the dormant period, winter in temperate zones. A layer of seeded coir matting can be pegged onto the soil on top of the spiles to prevent the soil being washed out during flood events. This method is an example of soft engineering, techniques which tend to be less expensive and more sustainable than others.
See also
Fascine
References
Environmental engineering | Spiling | [
"Chemistry",
"Engineering"
] | 314 | [
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
40,832,477 | https://en.wikipedia.org/wiki/Canadian%20Aeronautics%20and%20Space%20Journal | Canadian Aeronautics and Space Journal (CASJ, French Journal aéronautique et spatial du Canada) is a triannual peer-reviewed scientific journal covering research on space and aerospace. It is the official journal of the Canadian Aerospace and Space Institute and is published by Scholastica in English. The journal was established in 1954 and the editor-in-chief is Philip Ferguson (University of Manitoba).
Abstracting and indexing
The journal is indexed and abstracted in the following databases:
See also
Canadian Journal of Remote Sensing
External links
CASJ @ Canadian Aerospace and Space Institute
Academic journals established in 1954
Multilingual journals
Canadian Science Publishing academic journals
Triannual journals
Aerospace engineering journals | Canadian Aeronautics and Space Journal | [
"Engineering"
] | 136 | [
"Aerospace engineering journals",
"Aerospace engineering"
] |
40,836,275 | https://en.wikipedia.org/wiki/Cosmic%20age%20problem | The cosmic age problem was a historical problem in astronomy concerning the age of the universe. The problem was that at various times in the 20th century, the universe was estimated to be younger than the oldest observed stars. Estimates of the universe's age came from measurements of the current expansion rate of the universe, the Hubble constant , as well as cosmological models relating to the universe's matter and energy contents (see the Friedmann equations). Issues with measuring as well as not knowing about the existence of dark energy led to spurious estimates of the age. Additionally, objects such as galaxies, stars, and planets could not have existed in the extreme temperatures and densities shortly after the Big Bang.
Since around 1997–2003, the problem is believed to have been solved by most cosmologists: modern cosmological measurements lead to a precise estimate of the age of the universe (i.e. time since the Big Bang) of 13.8 billion years, and recent age estimates for the oldest objects are either younger than this, or consistent allowing for measurement uncertainties.
Historical development
Early years
Following theoretical developments of the Friedmann equations by Alexander Friedmann and Georges Lemaître in the 1920s, and the discovery of the expanding universe by Edwin Hubble in 1929, it was immediately clear that tracing this expansion backwards in time predicts that the universe had almost zero size at a finite time in the past. This concept, initially known as the "Primeval Atom" by Lemaitre, was later elaborated into the modern Big Bang theory. If the universe had expanded at a constant rate in the past, the age of the universe now (i.e. the time since the Big Bang) is simply proportional to the inverse of the Hubble constant, often known as the Hubble time. For Big Bang models with zero cosmological constant and positive matter density, the actual age must be somewhat younger than this Hubble time; typically the age would be between 66% and 90% of the Hubble time, depending on the density of matter.
Hubble's early estimate of his constant was 550 (km/s)/Mpc, and the inverse of that is 1.8 billion years. It was believed by many geologists in the 1920s that the Earth was probably around 2 billion years old, but with large uncertainty. The possible discrepancy between the ages of the Earth and the universe was probably one motivation for the development of the Steady State theory in 1948 as an alternative to the Big Bang; in the (now obsolete) steady state theory, the universe is infinitely old and on average unchanging with time. The steady state theory postulated spontaneous creation of matter to keep the average density constant as the universe expands, and therefore most galaxies still have an age less than 1/H0. However, if H0 had been 550 (km/s)/Mpc, our Milky Way galaxy would be exceptionally large compared to most other galaxies, so it could well be much older than an average galaxy, therefore eliminating the age problem.
1950–1970
In the 1950s, two substantial errors were discovered in Hubble's extragalactic distance scale: first in 1952, Walter Baade discovered there were two classes of Cepheid variable star. Hubble's sample comprised different classes nearby and in other galaxies, and correcting this error made all other galaxies twice as distant as Hubble's values, thus doubling the Hubble time. A second error was discovered by Allan Sandage and coworkers: for galaxies beyond the Local Group, Cepheids were too faint to observe with Hubble's instruments, so Hubble used the brightest stars as distance indicators. Many of Hubble's "brightest stars" were actually HII regions or clusters containing many stars, which caused another underestimation of distances for these more distant galaxies. Thus, in 1958 Sandage published the first reasonably accurate measurement of the Hubble constant, at 75 (km/s)/Mpc, which is close to modern estimates of 68–74 (km/s)/Mpc.
The age of the Earth (actually the Solar System) was first accurately measured around 1955 by Clair Patterson at 4.55 billion years, essentially identical to the modern value. For H0 ~ 75 (km/s)/Mpc, the inverse of H0 is 13.0 billion years; so after 1958 the Big Bang model age was comfortably older than the Earth.
However, in the 1960s and onwards, new developments in the theory of stellar evolution enabled age estimates for large star clusters called globular clusters: these generally gave age estimates of around 15 billion years, with substantial scatter. Further revisions of the Hubble constant by Sandage and Gustav Tammann in the 1970s gave values around 50–60 (km/s)/Mpc, and an inverse of 16-20 billion years, consistent with globular cluster ages.
1975–1990
However, in the late 1970s to early 1990s, the age problem re-appeared: new estimates of the Hubble constant gave higher values, with Gerard de Vaucouleurs estimating values 90–100 (km/s)/Mpc, while Marc Aaronson and co-workers gave values around 80-90 (km/s)/Mpc. Sandage and Tammann continued to argue for values 50–60, leading to a period of controversy sometimes called the "Hubble wars". The higher values for H0 appeared to predict a universe younger than the globular cluster ages, and gave rise to some speculations during the 1980s that the Big Bang model was seriously incorrect.
Late 1990s: probable solution
The age problem was eventually thought to be resolved by several developments between 1995 and 2003: firstly, a large program with the Hubble Space Telescope measured the Hubble constant at 72 (km/s)/Mpc with 10 percent uncertainty. Secondly, measurements of parallax by the Hipparcos spacecraft in 1995 revised globular cluster distances upwards by 5-10 percent; this made their stars brighter than previously estimated and therefore younger, shifting their age estimates down to around 12-13 billion years. Finally, from 1998 to 2003 a number of new cosmological observations including supernovae, cosmic microwave background observations and large galaxy redshift surveys led to the acceptance of dark energy and the establishment of the Lambda-CDM model as the standard model of cosmology. The presence of dark energy implies that the universe was expanding more slowly at around half its present age than today, which makes the universe older for a given value of the Hubble constant. The combination of the three results above essentially removed the discrepancy between estimated globular cluster ages and the age of the universe.
More recent measurements from WMAP and the Planck spacecraft lead to an estimate of the age of the universe of 13.80 billion years with only 0.3 percent uncertainty (based on the standard Lambda-CDM model), and modern age measurements for globular clusters and other objects are currently smaller than this value (within the measurement uncertainties). A substantial majority of cosmologists therefore believe the age problem is now resolved.
New research from teams, including one led by Nobel laureate Adam Riess of the Space Telescope Science Institute in Baltimore, has found the universe to be between 12.5 and 13 billion years old, disagreeing with the Planck findings. Whether this stems merely from errors in data gathering, or is related to the as yet unexplained aspects of physics, such as Dark Energy or Dark Matter, has yet to be confirmed.
Dynamical modeling of the universe
In this section, we wish to explore the effect of the dynamical modeling of the universe on the estimate of the universe's age. We will assume the modern observed Hubble value km/s/Mpc so that the discussion below focuses on the effect of the dynamical modeling and less on the effect of the historical accuracy of the Hubble constant.
The 1932 Einstein-de Sitter model of the universe assumes that the universe is filled with only matter and has vanishing curvature. This model received some popularity in the 1980s and offers an explicit solution for the scale factor (see, e.g., D. Baumann 2022)

$a(t) = \left(\frac{t}{t_0}\right)^{2/3},$

where $t_0$ is the universe's current age. This then implies that the age of the universe is directly related to the Hubble constant:

$t_0 = \frac{2}{3H_0}.$
Substituting in the Hubble constant, the universe has an age of only two-thirds of the Hubble time, roughly 9–10 billion years, in disagreement with, e.g., the age of the oldest stars.
If one then allows for dark energy in the form of a cosmological constant in addition to matter, this two-component, spatially flat model predicts the following relationship between age and the Hubble constant:

$t_0 = \frac{2}{3H_0\sqrt{\Omega_\Lambda}}\,\operatorname{arcsinh}\!\left(\sqrt{\frac{\Omega_\Lambda}{\Omega_m}}\right).$
Plugging in observed values of the density parameters results in an age of the universe of about 13.8 billion years, now consistent with stellar age observations.
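A short numerical sketch of the two age estimates discussed in this section is given below. The Hubble constant and density parameters used here are assumed illustrative values (roughly the Planck ones), not numbers taken from the text.

```python
# Illustrative computation of the ages discussed above (assumed parameters).
import math

H0_km_s_Mpc = 67.4                  # assumed Hubble constant in km/s/Mpc
km_per_Mpc = 3.0857e19              # kilometres in one megaparsec
s_per_Gyr = 3.1557e16               # seconds in one gigayear

H0 = H0_km_s_Mpc / km_per_Mpc       # Hubble constant in 1/s
hubble_time = 1.0 / H0 / s_per_Gyr  # 1/H0 in Gyr (about 14.5 Gyr)

age_EdS = (2.0 / 3.0) * hubble_time  # Einstein-de Sitter: t0 = 2/(3 H0)

Om, OL = 0.315, 0.685                # assumed flat matter + Lambda densities
age_LCDM = (2.0 / (3.0 * math.sqrt(OL))) * math.asinh(math.sqrt(OL / Om)) * hubble_time

print(f"Hubble time 1/H0       : {hubble_time:.1f} Gyr")
print(f"Einstein-de Sitter age : {age_EdS:.1f} Gyr")   # about 9.7 Gyr
print(f"Flat Lambda-CDM age    : {age_LCDM:.1f} Gyr")  # about 13.8 Gyr
```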
References
External links
http://map.gsfc.nasa.gov/universe/uni_age.html
Obsolete scientific theories
Physical cosmology | Cosmic age problem | [
"Physics",
"Astronomy"
] | 1,844 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
34,032,918 | https://en.wikipedia.org/wiki/Reduced%20viscosity | In fluid dynamics, the reduced viscosity of a polymer is the ratio of the relative viscosity increment () to the mass concentration of the species of interest (c). It has units of volume per unit mass.
The reduced viscosity is given by:
where is the relative viscosity increment given by (Where is the viscosity of the solvent.)
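A worked example with made-up numbers: for a polymer solution of viscosity 1.20 mPa·s in a solvent of viscosity 1.00 mPa·s at a mass concentration of 0.010 g/mL,

```latex
\eta_i = \frac{1.20 - 1.00}{1.00} = 0.20, \qquad
\eta_{\mathrm{red}} = \frac{\eta_i}{c} = \frac{0.20}{0.010\ \mathrm{g/mL}} = 20\ \mathrm{mL/g}.
```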
See also
Relative viscosity
Viscosity
Intrinsic viscosity
Huggins equation
References
Viscosity | Reduced viscosity | [
"Physics",
"Chemistry"
] | 103 | [
"Physical phenomena",
"Physical quantities",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
34,036,115 | https://en.wikipedia.org/wiki/ANDi | ANDi is the first genetically modified rhesus monkey, who was born at Oregon Health Sciences University (OHSU) on October 1, 2001. OHSU named the monkey ANDi because it stands for iDNA spelled backward.
Birth circumstances
ANDi was born with an extra gene encoding the fluorescent marker green fluorescent protein (GFP). This GFP gene, which occurs naturally in jellyfish, was taken from a jellyfish and genetically added to ANDi's DNA sequence through his chromosomes. OHSU used rhesus monkeys because they share 95% of the same genes as humans.
Genetic modification method
During the method in which ANDi was created, 224 eggs were injected with the protein and only 166 or 75% were successfully fertilized. 126 or 76% of these fertilized eggs developed to the four-cell-stage embryos. 40 of the fertilized embryos were implanted in 20 surrogate rhesus mothers, each carrying two embryos. 5 of the surrogates became pregnant. From these five surrogates, three live births proceeded. In these three monkey births, only one infant, ANDi, carried the transgene. Research team leader Gerald Schatten said the technique that created ANDi would become a vital tool for scientists investigating therapies for human diseases.
Implications
The ability to genetically modify a monkey represented a technological breakthrough. ANDi was created in the hope of finding cures for complicated human diseases such as cancer. Since ANDi was born, scientists have wanted to introduce more significant DNA changes, such as modifications that would make primates closely mimic human diseases like breast cancer or HIV. Scientists also want to study and cure other diseases, such as Alzheimer's, AIDS, and diabetes, through transgenics.
Although ANDi carries the gene, it does not appear to be functional; ANDi does not actually glow.
See also
List of individual monkeys
Tetra (monkey)
References
Individual monkeys
Genetically modified organisms
2001 animal births
Individual primates in the United States | ANDi | [
"Engineering",
"Biology"
] | 419 | [
"Genetic engineering",
"Genetically modified organisms"
] |
34,040,543 | https://en.wikipedia.org/wiki/Shunt%20impedance | In accelerator physics, shunt impedance is a measure of the strength with which an eigenmode of a resonant radio frequency structure (e.g., in a microwave cavity) interacts with charged particles on a given straight line, typically along the axis of rotational symmetry. If not specified further, the term is likely to refer to longitudinal effective shunt impedance.
Longitudinal shunt impedance
To produce longitudinal Coulomb forces which add up to the (longitudinal) acceleration voltage , an eigenmode of the resonator has to be excited, leading to power dissipation . The definition of the longitudinal effective shunt impedance, , then reads:
with the longitudinal effective acceleration voltage .
The time-independent shunt impedance, , with the time-independent acceleration voltage is defined:
One can use the quality factor to substitute with an equivalent expression:
where W is the maximum energy stored. Since the quality factor is the only quantity in the right equation term that depends on wall properties, the quantity is often used to design cavities, omitting material properties at first (see also cavity geometry factor).
Transverse shunt impedance
When a particle is deflected in transverse direction, the definition of the shunt impedance can be used with substitution of the (longitudinal) acceleration voltage by the transverse effective acceleration voltage, taking into account transversal Coulomb and Lorentz forces.
This does not necessarily imply a change in particle energy since a particle can also be deflected by magnetic fields (see Panofsky-Wenzel theorem).
Polarization angle
Because the transverse deflection can be described with polar coordinates, one may define a deflection or polarization angle using the transverse acceleration voltage components. Polar coordinates are used because it is possible to add up voltage components like vectors, but not shunt impedances.
References
Accelerator physics | Shunt impedance | [
"Physics"
] | 385 | [
"Applied and interdisciplinary physics",
"Accelerator physics",
"Experimental physics"
] |
45,544,166 | https://en.wikipedia.org/wiki/Protide | The ProTide technology is a prodrug approach used in molecular biology and drug design. It is designed to deliver nucleotide analogues (as monophosphate) into the cell (ProTide: PROdrug + nucleoTIDE). This technology was invented by Professor Chris McGuigan from the School of Pharmacy and Pharmaceutical Sciences at Cardiff University in the early 1990s. ProTides form a critical part of antiviral drugs such as sofosbuvir, tenofovir alafenamide, and remdesivir.
Development
The first demonstration of the ProTide approach was made in 1992, when the efficiency of aryloxy phosphates and phosphoramidates was noted. In particular, diaryl phosphates were prepared from zidovudine (AZT) using simple phosphorochloridate chemistry. For the first time, the anti-HIV activity of these phosphate derivatives of AZT exceeded that of the parent nucleoside in some cases. Moreover, while AZT was almost inactive (EC50 100μM) in the JM cell line, the substituted diaryl phosphate was 10 times more active (EC50 10μM). At the time, JM was considered AZT-insensitive due to poor phosphorylation. It later emerged that an AZT-efflux pump was the source of this poor AZT sensitivity. However, the conclusion remains valid that the diaryl phosphate was more able to retain activity in the JM cell line and that this may imply a (small) degree of intracellular phosphate delivery. The electron-withdrawing power of the p-nitro groups and putative enhancements in aryl leaving group ability were suggested as the major driving force of this SAR.
Subsequently, a series of aryloxy phosphoramidates of AZT were prepared with various p-aryl substituents and several amino acids. These compounds were studied exclusively in the AZT-resistant JM cell line to explore potential (implied) AZT-monophosphate release, with the alanine phosphoramidate proving to be exceptionally effective. In HIV-1-infected JM cultures, AZT inhibited the virus only at a concentration of 100 μM, while the phenyl methoxy alaninyl phosphoramidate was active at 0.8 μM. This was taken as the first evidence of a successful nucleotide delivery. It was also noted that in other series, there was a marked preference for alanine over leucine (10-fold) and glycine (>100-fold). Furthermore, although electron-withdrawing aryl substitution had proven highly effective in diaryl systems, it was detrimental in this context. Para-fluoro substitution had a slight adventitious effect, but not significantly so, while para-nitro substitution led to a 100-fold loss of activity. In a subsequent study, the range of aryl substituents was expanded, and compounds were tested in both TK+ (thymidine kinase competent) and TK- (thymidine kinase deficient) cell lines. None of the phosphoramidates retained the high (2–4 nM) potency of AZT in TK-competent cell lines (CEM and MT-4) against either HIV-1 or HIV-2. However, while AZT lost all of its activity in the TK-deficient cell line CEM/TK-, most of the phosphoramidates retained antiviral activity, thus being ca >10–35-fold more active than AZT in this assay. Again, alanine emerged as an important component, with the glycine analogue being inactive in all HIV-infected CEM/TK- cultures. In this assay, leucine and phenylalanine were as effective as alanine, although they were less so in CEM/TK+ assays. Thus, the parent phenyl methoxy alanyl phosphoramidate emerged as an important lead compound.
Stavudine (d4T) was an early application of the ProTide approach. This was a rational choice based on the known kinetics of phosphorylation of d4T. Thus, while the second phosphorylation (AZT-monophosphate to AZT-diphosphate) but not the first phosphorylation (AZT to AZT-monophosphate) is regarded as rate limiting for AZT activation to the triphosphate, the first step (d4T to d4T monophosphate) is thought in general to be the slow step for d4T. Thus, an intracellular (mono)nucleotide delivery should have a maximal impact for d4T and similar nucleosides. In the first instance (halo)alkyloxy phosphoramidates of d4T were prepared and found to retain activity in d4T-resistant JM cells. The activity was dependent on the haloalkyl group; the parent propyl system was poorly active. Subsequent studies in HIV-infected CEM/TK- cell cultures revealed the aryloxy phosphoramidates of d4T to be highly effective and, notably, to retain their full activity in CEM/TK- cells. In this study the benzyl ester emerged as slightly more potent than the parent methyl compound, being almost 10-times more active than d4T in CEM/TK+ assays and thus ca 300-500 fold more active than d4T, in CEM/TK- assays.
Current applications
The ProTide pro-drugs are useful for delivering phosphonate-containing drugs to cell types with high expression of CTSA and CES1, such as immune cells. Tenofovir alafenamide is a successful example of this iteration. ProTides are also useful for nucleoside analogues that are not phosphorylated efficiently by endogenous nucleoside kinases. For the nucleoside GS-334750, the parent of sofosbuvir, phosphorylation by nucleoside kinases is effectively negligible, and the only way to deliver the active nucleotide is through the ProTide approach. A major limitation of ProTides is that they require expression of esterases such as CTSA and CES1; this expression is very high in some cell types, such as hepatocytes, which works to the advantage of sofosbuvir in the treatment of hepatitis C.
Extensive studies followed on these promising d4T derivatives and the ProTide technology was successfully applied to a wide range of nucleoside analogues. In particular, the ProTide approach has been used on several clinically evaluated anti-HCV nucleoside analogues, including the 2013 FDA approved compound sofosbuvir, and the 2016 FDA approved compound, Tenofovir alafenamide. Remdesivir also uses the ProTide prodrug technology (self-immolation is the key principle of ProTide nucleotide prodrugs). Because GS-441524 nucleoside can be phosphorylated and activated, some researchers have argued that the Protide application is an unnecessary complication in Remdesivir's design and that the parent nucleoside would be a cheaper and more effective COVID-19 drug.
ProTides have been tested to deliver key phosphorylated metabolites in inborn errors of metabolism, such as phosphopantothenate for PANK2 deficiency; however, these attempts were a clinical failure.
References
Drug discovery
Prodrugs | Protide | [
"Chemistry",
"Biology"
] | 1,634 | [
"Life sciences industry",
"Drug discovery",
"Prodrugs",
"Medicinal chemistry",
"Chemicals in medicine"
] |
39,497,996 | https://en.wikipedia.org/wiki/Ionic%20polymerization | In polymer chemistry, ionic polymerization is a chain-growth polymerization in which active centers are ions or ion pairs.
It can be considered as an alternative to radical polymerization, and may refer to anionic polymerization or cationic polymerization.
As with radical polymerization, reactions are initiated by a reactive compound. For cationic polymerization, titanium-, boron-, aluminum-, and tin-halide complexes with water, alcohols, or oxonium salts are useful as initiators, as well as strong acids and salts. Meanwhile, group 1 metals such as lithium, sodium, and potassium, and their organic compounds (e.g. sodium naphthalene) serve as effective anionic initiators. In both anionic and cationic polymerization, each charged chain end (negative and positive, respectively) is matched by a counterion of opposite charge that originates from the initiator. Because of the charge stability necessary in ionic polymerization, monomers which may be polymerized by this method are few compared to those available for free radical polymerization. Stable polymerizing cations are only possible using monomers with electron-releasing groups, and stable anions with monomers with electron-withdrawing groups as substituents.
While radical polymerization rate is governed nearly exclusively by monomer chemistry and radical stability, successful ionic polymerization is as strongly related to reaction conditions. Poor monomer purity quickly leads to early termination, and solvent polarity has a great effect on reaction rate. Loosely-coordinated and solvated ion pairs promote more reactive, fast-polymerizing chains, unencumbered by their counterions. Unfortunately, molecules that are polar enough to support these solvated ion pairs often interrupt the polymerization in other ways, such as by destroying propagating species or coordinating with initiator ions, and so they are seldom utilized. Typical solvents for ionic polymerization include non-polar molecules such as pentane, or moderately polar molecules such as chloroform.
History
The potential utility of ionic polymerization was first recorded by Michael Szwarc after a conversation with Samuel Weissman. He and a team, composed of Moshe Levy and Ralph Milkovich, attempted to recreate an experiment performed by Weissman to study the electron affinity of styrene. By adding styrene monomer to a solution of sodium naphthalenide in tetrahydrofuran, the "olive-green" solution turned "cherry-red" and appeared to continue reacting with new additions of styrene even minutes after the last. This observation, coupled with the determination that the product was polystyrene, indicated that a living, anionic polymerization had been initiated by the addition of electrons.
Applications
Because of the polarity of the active group on each polymerizing chain end, termination by chain combination is not seen in ionic polymerization. Furthermore, because charge propagation can only occur by covalent bond formation with the compatible monomer species, termination by chain transfer or disproportionation is impossible. This means that all polymerizing ions, unlike in radical polymerization, grow and maintain their chain lengths throughout the reaction duration (so-called "living" polymer chains), until termination by the addition of a terminating molecule such as water. This leads to virtually monodisperse polymer products, which have many applications in material analysis and product design. Furthermore, because the ions do not self-terminate, block copolymers may be formed by the addition of a new monomer species.
A few important uses of anionic polymerization include the following:
Calibration standards for gel permeation chromatography
Microphase separating block copolymers
Thermoplastic elastomeric materials
References
Polymerization reactions | Ionic polymerization | [
"Chemistry",
"Materials_science"
] | 784 | [
"Polymerization reactions",
"Polymer chemistry"
] |
39,498,136 | https://en.wikipedia.org/wiki/Trifluoromethylsulfur%20pentafluoride | Trifluoromethylsulfur pentafluoride, CF3SF5, is a rarely used industrial greenhouse gas. It was first identified in the atmosphere in 2000. Trifluoromethylsulfur pentafluoride is considered to be one of the several "super-greenhouse gases".
Properties
The chemistry of this compound is similar to that of sulfur hexafluoride (SF6).
As a greenhouse gas
On a per molecule basis, it is considered to be the most potent greenhouse gas present in Earth's atmosphere, having a global warming potential of about 18,000 times that of carbon dioxide. The chemical is predicted to have a lifetime of 800 years in the atmosphere. However, the current concentration of trifluoromethylsulfur pentafluoride remains at a level that is unlikely to measurably contribute to global warming. The presence of the gas in the atmosphere is attributed to anthropogenic sources: it may be a by-product of the manufacture of fluorochemicals, it may originate from reactions of SF6 with fluoropolymers used in electronic devices and microchips, or it may form in high-voltage equipment, where SF6 breakdown products react with CF3 to give the CF3SF5 molecule.
References
Greenhouse gases
Trifluoromethyl compounds
Sulfur fluorides
Hypervalent molecules | Trifluoromethylsulfur pentafluoride | [
"Physics",
"Chemistry",
"Environmental_science"
] | 288 | [
"Molecules",
"Environmental chemistry",
"Hypervalent molecules",
"Greenhouse gases",
"Matter"
] |
39,499,535 | https://en.wikipedia.org/wiki/Derivations%20of%20the%20Lorentz%20transformations | There are many ways to derive the Lorentz transformations using a variety of physical principles, ranging from Maxwell's equations to Einstein's postulates of special relativity, and mathematical tools, spanning from elementary algebra and hyperbolic functions, to linear algebra and group theory.
This article provides a few of the easier ones to follow in the context of special relativity, for the simplest case of a Lorentz boost in standard configuration, i.e. two inertial frames moving relative to each other at constant (uniform) relative velocity less than the speed of light, and using Cartesian coordinates so that the x and x′ axes are collinear.
Lorentz transformation
In the fundamental branches of modern physics, namely general relativity and its widely applicable subset special relativity, as well as relativistic quantum mechanics and relativistic quantum field theory, the Lorentz transformation is the transformation rule under which all four-vectors and tensors containing physical quantities transform from one frame of reference to another.
The prime examples of such four-vectors are the four-position and four-momentum of a particle, and for fields the electromagnetic tensor and stress–energy tensor. The fact that these objects transform according to the Lorentz transformation is what mathematically defines them as vectors and tensors; see tensor for a definition.
Given the components of the four-vectors or tensors in some frame, the "transformation rule" allows one to determine the altered components of the same four-vectors or tensors in another frame, which could be boosted or accelerated, relative to the original frame. A "boost" should not be conflated with spatial translation; rather, it is characterized by the relative velocity between frames. The transformation rule itself depends on the relative motion of the frames. In the simplest case of two inertial frames, the relative velocity between them enters the transformation rule. For rotating reference frames or general non-inertial reference frames, more parameters are needed, including the relative velocity (magnitude and direction), the rotation axis and angle turned through.
Historical background
The usual treatment (e.g., Albert Einstein's original work) is based on the invariance of the speed of light. However, this is not necessarily the starting point: indeed (as is described, for example, in the second volume of the Course of Theoretical Physics by Landau and Lifshitz), what is really at stake is the locality of interactions: one supposes that the influence that one particle, say, exerts on another can not be transmitted instantaneously. Hence, there exists a theoretical maximal speed of information transmission which must be invariant, and it turns out that this speed coincides with the speed of light in vacuum. Newton had himself called the idea of action at a distance philosophically "absurd", and held that gravity had to be transmitted by some agent according to certain laws.
Michelson and Morley in 1887 designed an experiment, employing an interferometer and a half-silvered mirror, that was accurate enough to detect aether flow. The mirror system reflected the light back into the interferometer. If there were an aether drift, it would produce a phase shift and a change in the interference that would be detected. However, no phase shift was ever found. The negative outcome of the Michelson–Morley experiment left the concept of aether (or its drift) undermined. There was consequent perplexity as to why light evidently behaves like a wave, without any detectable medium through which wave activity might propagate.
In a 1964 paper, Erik Christopher Zeeman showed that the causality-preserving property, a condition that is weaker in a mathematical sense than the invariance of the speed of light, is enough to assure that the coordinate transformations are the Lorentz transformations. Norman Goldstein's paper shows a similar result using inertiality (the preservation of time-like lines) rather than causality.
Physical principles
Einstein based his theory of special relativity on two fundamental postulates. First, all physical laws are the same for all inertial frames of reference, regardless of their relative state of motion; and second, the speed of light in free space is the same in all inertial frames of reference, again, regardless of the relative velocity of each reference frame. The Lorentz transformation is fundamentally a direct consequence of this second postulate.
The second postulate
Assume the second postulate of special relativity stating the constancy of the speed of light, independent of reference frame, and consider a collection of reference systems moving with respect to each other with constant velocity, i.e. inertial systems, each endowed with its own set of Cartesian coordinates labeling the points, i.e. events of spacetime. To express the invariance of the speed of light in mathematical form, fix two events in spacetime, to be recorded in each reference frame. Let the first event be the emission of a light signal, and the second event be it being absorbed.
Pick any reference frame in the collection. In its coordinates, the first event will be assigned coordinates (t1, x1, y1, z1), and the second (t2, x2, y2, z2). The spatial distance between emission and absorption is √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²), but this is also the distance c(t2 − t1) traveled by the signal. One may therefore set up the equation
c²(t2 − t1)² − (x2 − x1)² − (y2 − y1)² − (z2 − z1)² = 0.
Every other coordinate system will record, in its own coordinates, the same equation. This is the immediate mathematical consequence of the invariance of the speed of light. The quantity on the left is called the spacetime interval. The interval is, for events separated by light signals, the same (zero) in all reference frames, and is therefore called invariant.
Invariance of interval
For the Lorentz transformation to have the physical significance realized by nature, it is crucial that the interval is an invariant measure for any two events, not just for those separated by light signals. To establish this, one considers an infinitesimal interval,
as recorded in a system . Let be another system assigning the interval to the same two infinitesimally separated events. Since if , then the interval will also be zero in any other system (second postulate), and since and are infinitesimals of the same order, they must be proportional to each other,
On what may depend? It may not depend on the positions of the two events in spacetime, because that would violate the postulated homogeneity of spacetime. It might depend on the relative velocity between and , but only on the speed, not on the direction, because the latter would violate the isotropy of space.
Now bring in systems and ,
From these it follows,
Now, one observes that the right-hand side depends on both relative speeds, as well as on the angle between the corresponding velocity vectors. However, the left-hand side does not depend on this angle. Thus, the only way for the equation to hold true is if the function is a constant. Further, by the same equation this constant is unity. Thus,
for all systems . Since this holds for all infinitesimal intervals, it holds for all intervals.
Most, if not all, derivations of the Lorentz transformations take this for granted. In those derivations, they use the constancy of the speed of light (invariance of light-like separated events) only. This result ensures that the Lorentz transformation is the correct transformation.
Rigorous Statement and Proof of Proportionality of ds2 and ds′2
Theorem:
Let be integers, and a vector space over of dimension . Let be an indefinite-inner product on with signature type . Suppose is a symmetric bilinear form on such that the null set of the associated quadratic form of is contained in that of (i.e. suppose that for every , if then ). Then, there exists a constant such that . Furthermore, if we assume and that also has signature type , then we have .
Remarks.
In the section above, the term "infinitesimal" in relation to is actually referring (pointwise) to a quadratic form over a four-dimensional real vector space (namely the tangent space at a point of the spacetime manifold). The argument above is copied almost verbatim from Landau and Lifshitz, where the proportionality of and is merely stated as an 'obvious' fact even though the statement is not formulated in a mathematically precise fashion nor proven. This is a non-obvious mathematical fact which needs to be justified; fortunately the proof is relatively simple and it amounts to basic algebraic observations and manipulations.
The above assumptions on means the following: is a bilinear form which is symmetric and non-degenerate, such that there exists an ordered basis of for which An equivalent way of saying this is that has the matrix representation relative to the ordered basis .
If we consider the special case where then we're dealing with the situation of Lorentzian signature in 4-dimensions, which is what relativity is based on (or one could adopt the opposite convention with an overall minus sign; but this clearly doesn't affect the truth of the theorem). Also, in this case, if we assume and both have quadratics forms with the same null-set (in physics terminology, we say that and give rise to the same light cone) then the theorem tells us that there is a constant such that . Modulo some differences in notation, this is precisely what was used in the section above.
Proof of Theorem.
Fix a basis of relative to which has the matrix representation . The point is that the vector space can be decomposed
into subspaces (the span of the first basis vectors) and (the span of the other basis vectors) such that each vector in can be written uniquely as for and ; moreover , and . So (by bilinearity)
Since the first summand on the right is non-positive and the second is non-negative, for any
and , we can find a scalar such that .
From now on, always consider
and . By bilinearity
If , then also and the same is true for (since the null-set of is contained in that of ). In that case, subtracting the two expression above (and dividing by 4) yields
As above, for each
and , there is a scalar such that , so , which by bilinearity means .
Now consider nonzero such that . We can find such that . By the expressions above,
Analogically, for , one can show that if , then also
. So it holds for all vectors in .
For , if , for some , we can (scaling one of the if necessary) assume , which by the above means that . So .
Finally, if we assume that both have signature types and then (we can't have because that would mean , which is impossible since having signature type means it is a non-zero bilinear form. Also, if , then it means has positive diagonal entries and negative diagonal entries; i.e. it is of signature , since we assumed , so this is also not possible. This leaves us with as the only option). This completes the proof of the theorem.
Standard configuration
The invariant interval can be seen as a non-positive definite distance function on spacetime. The set of transformations sought must leave this distance invariant. Due to the reference frame's coordinate system's cartesian nature, one concludes that, as in the Euclidean case, the possible transformations are made up of translations and rotations, where a slightly broader meaning should be allowed for the term rotation.
The interval is quite trivially invariant under translation. For rotations, there are four coordinates. Hence there are six planes of rotation. Three of those are rotations in spatial planes. The interval is invariant under ordinary rotations too.
It remains to find a "rotation" in the three remaining coordinate planes that leaves the interval invariant. Equivalently, to find a way to assign coordinates so that they coincide with the coordinates corresponding to a moving frame.
The general problem is to find a transformation such that
To solve the general problem, one may use the knowledge about invariance of the interval of translations and ordinary rotations to assume, without loss of generality, that the frames and are aligned in such a way that their coordinate axes all meet at and that the and axes are permanently aligned and system has speed along the positive . Call this the standard configuration. It reduces the general problem to finding a transformation such that
The standard configuration is used in most examples below. A linear solution of the simpler problem
solves the more general problem since coordinate differences then transform the same way. Linearity is often assumed or argued somehow in the literature when this simpler problem is considered. If the solution to the simpler problem is not linear, then it doesn't solve the original problem because of the cross terms appearing when expanding the squares.
The solutions
As mentioned, the general problem is solved by translations in spacetime. These do not appear as a solution to the simpler problem posed, while the boosts do (and sometimes rotations depending on angle of attack). Even more solutions exist if one only insist on invariance of the interval for lightlike separated events. These are nonlinear conformal ("angle preserving") transformations. One has
Some equations of physics are conformal invariant, e.g. the Maxwell's equations in source-free space, but not all. The relevance of the conformal transformations in spacetime is not known at present, but the conformal group in two dimensions is highly relevant in conformal field theory and statistical mechanics. It is thus the Poincaré group that is singled out by the postulates of special relativity. It is the presence of Lorentz boosts (for which velocity addition is different from mere vector addition that would allow for speeds greater than the speed of light) as opposed to ordinary boosts that separates it from the Galilean group of Galilean relativity. Spatial rotations, spatial and temporal inversions and translations are present in both groups and have the same consequences in both theories (conservation laws of momentum, energy, and angular momentum). Not all accepted theories respect symmetry under the inversions.
Using the geometry of spacetime
Landau & Lifshitz solution
These three hyperbolic function formulae (H1–H3) are referenced below:
The problem posed in standard configuration for a boost in the , where the primed coordinates refer to the moving system is solved by finding a linear solution to the simpler problem
The most general solution is, as can be verified by direct substitution using (H1),
To find the role of in the physical setting, record the progression of the origin of , i.e. . The equations become (using first ),
Now divide:
where was used in the first step, (H2) and (H3) in the second, which, when plugged back in (), gives
or, with the usual abbreviations,
This calculation is repeated with more detail in section hyperbolic rotation.
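For concreteness, the boost along the x direction can also be written with the rapidity φ as follows (a worked form consistent with the hyperbolic identities referenced above; the sign convention, with the primed frame moving at +v along x as seen from the unprimed one, is an assumption here):

\begin{aligned}
ct' &= ct\,\cosh\varphi - x\,\sinh\varphi,\\
x'  &= x\,\cosh\varphi - ct\,\sinh\varphi,\\
y'  &= y,\qquad z' = z,\qquad \tanh\varphi = \frac{v}{c},
\end{aligned}

so that cosh φ = γ and sinh φ = γv/c.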
Hyperbolic rotation
The Lorentz transformations can also be derived by simple application of the special relativity postulates and using hyperbolic identities.
Relativity postulates
Start from the equations of the spherical wave front of a light pulse, centred at the origin:
which take the same form in both frames because of the special relativity postulates. Next, consider relative motion along the x-axes of each frame, in standard configuration above, so that y = y′, z = z′, which simplifies to
Linearity
Now assume that the transformations take the linear form:
where A, B, C, D are to be found. If they were non-linear, they would not take the same form for all observers, since fictitious forces (hence accelerations) would occur in one frame even if the velocity was constant in another, which is inconsistent with inertial frame transformations.
Substituting into the previous result:
and comparing coefficients of , , :
Hyperbolic rotation
The equations suggest the hyperbolic identity
Introducing the rapidity parameter as a hyperbolic angle allows the consistent identifications
where the signs after the square roots are chosen so that and increase if and increase, respectively. The hyperbolic transformations have been solved for:
If the signs were chosen differently the position and time coordinates would need to be replaced by and/or so that and increase not decrease.
To find how relates to the relative velocity, from the standard configuration the origin of the primed frame is measured in the unprimed frame to be (or the equivalent and opposite way round; the origin of the unprimed frame is and in the primed frame it is at ):
and hyperbolic identities leads to the relations between , , and ,
From physical principles
The problem is usually restricted to two dimensions by using a velocity along the x axis such that the y and z coordinates do not intervene, as described in standard configuration above.
Time dilation and length contraction
The transformation equations can be derived from time dilation and length contraction, which in turn can be derived from first principles. With and representing the spatial origins of the frames and , and some event , the relation between the position vectors (which here reduce to oriented segments , and ) in both frames is given by:
Using coordinates in and in for event M, in frame the segments are , and (since is as measured in ):
Likewise, in frame , the segments are (since is as measured in ), and :
By rearranging the first equation, we get
which is the space part of the Lorentz transformation. The second relation gives
which is the inverse of the space part. Eliminating between the two space part equations gives
that, if , simplifies to:
which is the time part of the transformation, the inverse of which is found by a similar elimination of :
Spherical wavefronts of light
The following is similar to that of Einstein.
As in the Galilean transformation, the Lorentz transformation is linear since the relative velocity of the reference frames is constant as a vector; otherwise, inertial forces would appear. They are called inertial or Galilean reference frames. According to relativity no Galilean reference frame is privileged. Another condition is that the speed of light must be independent of the reference frame, in practice of the velocity of the light source.
Consider two inertial frames of reference O and O′, assuming O to be at rest while O′ is moving with a velocity v with respect to O in the positive x-direction. The origins of O and O′ initially coincide with each other. A light signal is emitted from the common origin and travels as a spherical wave front. Consider a point P on a spherical wavefront at a distance r and r′ from the origins of O and O′ respectively. According to the second postulate of the special theory of relativity the speed of light is the same in both frames, so for the point P:
The equation of a sphere in frame O is given by
For the spherical wavefront that becomes
Similarly, the equation of a sphere in frame O′ is given by
so the spherical wavefront satisfies
The origin O′ is moving along x-axis. Therefore,
must vary linearly with and . Therefore, the transformation has the form
For the origin of O′ and are given by
so, for all ,
and thus
This simplifies the transformation to
where is to be determined. At this point is not necessarily a constant, but is required to reduce to 1 for .
The inverse transformation is the same except that the sign of is reversed:
The above two equations give the relation between and as:
or
Replacing , , and in the spherical wavefront equation in the O′ frame,
with their expressions in terms of x, y, z and t produces:
and therefore,
which implies,
or
Comparing the coefficient of in the above equation with the coefficient of in the spherical wavefront equation for frame O produces:
Equivalent expressions for γ can be obtained by matching the x2 coefficients or setting the coefficient to zero. Rearranging:
or, choosing the positive root to ensure that the x and x' axes and the time axes point in the same direction,
which is called the Lorentz factor. This produces the Lorentz transformation from the above expression. It is given by
The Lorentz transformation is not the only transformation leaving invariant the shape of spherical waves, as there is a wider set of spherical wave transformations in the context of conformal geometry, leaving invariant the expression . However, scale changing conformal transformations cannot be used to symmetrically describe all laws of nature including mechanics, whereas the Lorentz transformations (the only one implying ) represent a symmetry of all laws of nature and reduce to Galilean transformations at .
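As a quick numerical cross-check (the event coordinates and speed below are illustrative assumptions, not part of the derivation), the transformation with this Lorentz factor leaves the quantity c²t² − x² − y² − z² of a light-like event equal to zero in both frames:

import math

c = 299_792_458.0
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# An event on the light cone: x**2 + y**2 + z**2 == (c*t)**2.
t, x, y, z = 1.0, 0.8 * c, 0.6 * c, 0.0
t_p = gamma * (t - v * x / c**2)
x_p = gamma * (x - v * t)
y_p, z_p = y, z

s2 = (c * t) ** 2 - x**2 - y**2 - z**2
s2_p = (c * t_p) ** 2 - x_p**2 - y_p**2 - z_p**2
print(s2, s2_p)   # both vanish up to floating-point rounding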
Galilean and Einstein's relativity
Galilean reference frames
In classical kinematics, the total displacement x in the R frame is the sum of the relative displacement x′ in frame R′ and of the distance between the two origins x − x′. If v is the relative velocity of R′ relative to R, the transformation is: , or . This relationship is linear for a constant , that is when R and R′ are Galilean frames of reference.
In Einstein's relativity, the main difference from Galilean relativity is that space and time coordinates are intertwined, and in different inertial frames t ≠ t′.
Since space is assumed to be homogeneous, the transformation must be linear. The most general linear relationship is obtained with four constant coefficients, A, B, γ, and b:
The linear transformation becomes the Galilean transformation when γ = B = 1, b = −v and A = 0.
An object at rest in the R′ frame at position x′ = 0 moves with constant velocity v in the R frame. Hence the transformation must yield x′ = 0 if x = vt. Therefore, b = −γv and the first equation is written as
Using the principle of relativity
According to the principle of relativity, there is no privileged Galilean frame of reference: therefore the inverse transformation for the position from frame R′ to frame R should have the same form as the original but with the velocity in the opposite direction, in other words replacing v with −v:
and thus
Determining the constants of the first equation
Since the speed of light is the same in all frames of reference, for the case of a light signal, the transformation must guarantee that t = x/c when t′ = x′/c.
Substituting for t and t′ in the preceding equations gives:
Multiplying these two equations together gives,
At any time after t = t′ = 0, xx′ is not zero, so dividing both sides of the equation by xx′ results in
which is called the "Lorentz factor".
When the transformation equations are required to satisfy the light signal equations in the form and x′ = ct′, by substituting the x and x'-values, the same technique produces the same expression for the Lorentz factor.
Determining the constants of the second equation
The transformation equation for time can be easily obtained by considering the special case of a light signal, again satisfying and , by substituting term by term into the earlier obtained equation for the spatial coordinate
giving
so that
which, when identified with
determines the transformation coefficients A and B as
So A and B are the unique constant coefficients necessary to preserve the constancy of the speed of light in the primed system of coordinates.
Einstein's popular derivation
In his popular book Einstein derived the Lorentz transformation by arguing that there must be two non-zero coupling constants and such that
that correspond to light traveling along the positive and negative x-axis, respectively.
For light if and only if . Adding and subtracting the two equations and defining
gives
Substituting corresponding to and noting that the relative velocity is , this gives
The constant can be evaluated by demanding as per standard configuration.
Using group theory
From group postulates
Following is a classical derivation (see, e.g., and references therein) based on group postulates and isotropy of the space.
Coordinate transformations as a group
The coordinate transformations between inertial frames form a group (called the proper Lorentz group) with the group operation being the composition of transformations (performing one transformation after another). Indeed, the four group axioms are satisfied:
Closure: the composition of two transformations is a transformation: consider a composition of transformations from the inertial frame K to inertial frame K′, (denoted as K → K′), and then from K′ to inertial frame K′′, [K′ → K′′], there exists a transformation, [K → K′] [K′ → K′′], directly from an inertial frame K to inertial frame K′′.
Associativity: the transformations ( [K → K′] [K′ → K′′] ) [K′′ → K′′′] and [K → K′] ( [K′ → K′′] [K′′ → K′′′] ) are identical.
Identity element: there is an identity element, a transformation K → K.
Inverse element: for any transformation K → K′ there exists an inverse transformation K′ → K.
Transformation matrices consistent with group axioms
Consider two inertial frames, K and K′, the latter moving with velocity with respect to the former. By rotations and shifts we can choose the x and x′ axes along the relative velocity vector and also that the events and coincide. Since the velocity boost is along the (and ) axes nothing happens to the perpendicular coordinates and we can just omit them for brevity. Now since the transformation we are looking after connects two inertial frames, it has to transform a linear motion in (t, x) into a linear motion in coordinates. Therefore, it must be a linear transformation. The general form of a linear transformation is
where , , and are some yet unknown functions of the relative velocity .
Let us now consider the motion of the origin of the frame K′. In the K′ frame it has coordinates , while in the K frame it has coordinates . These two points are connected by the transformation
from which we get
Analogously, considering the motion of the origin of the frame K, we get
from which we get
Combining these two gives and the transformation matrix has simplified,
Now consider the group postulate inverse element. There are two ways we can go from the K′ coordinate system to the K coordinate system. The first is to apply the inverse of the transform matrix to the K′ coordinates:
The second is, considering that the K′ coordinate system is moving at a velocity v relative to the K coordinate system, the K coordinate system must be moving at a velocity −v relative to the K′ coordinate system. Replacing v with −v in the transformation matrix gives:
Now the function can not depend upon the direction of because it is apparently the factor which defines the relativistic contraction and time dilation. These two (in an isotropic world of ours) cannot depend upon the direction of . Thus, and comparing the two matrices, we get
According to the closure group postulate a composition of two coordinate transformations is also a coordinate transformation, thus the product of two of our matrices should also be a matrix of the same form. Transforming K to K′ and from K′ to K′′ gives the following transformation matrix to go from K to K′′:
In the original transform matrix, the main diagonal elements are both equal to , hence, for the combined transform matrix above to be of the same form as the original transform matrix, the main diagonal elements must also be equal. Equating these elements and rearranging gives:
The denominator will be nonzero for nonzero , because is always nonzero;
If we have the identity matrix which coincides with putting in the matrix we get at the end of this derivation for the other values of , making the final matrix valid for all nonnegative .
For the nonzero , this combination of function must be a universal constant, one and the same for all inertial frames. Define this constant as , where has the dimension of . Solving
we finally get
and thus the transformation matrix, consistent with the group axioms, is given by
If , then there would be transformations (with ) which transform time into a spatial coordinate and vice versa. We exclude this on physical grounds, because time can only run in the positive direction. Thus two types of transformation matrices are consistent with group postulates:
Galilean transformations
If then we get the Galilean-Newtonian kinematics with the Galilean transformation,
where time is absolute, , and the relative velocity of two inertial frames is not limited.
Lorentz transformations
If , then we set which becomes the invariant speed, the speed of light in vacuum. This yields and thus we get special relativity with Lorentz transformation
where the speed of light is a finite universal constant determining the highest possible relative velocity between inertial frames.
If the Galilean transformation is a good approximation to the Lorentz transformation.
Only experiment can answer the question which of the two possibilities, or , is realized in our world. The experiments measuring the speed of light, first performed by a Danish physicist Ole Rømer, show that it is finite, and the Michelson–Morley experiment showed that it is an absolute speed, and thus that .
Boost from generators
Using rapidity to parametrize the Lorentz transformation, the boost in the direction is
likewise for a boost in the -direction
and the -direction
where are the Cartesian basis vectors, a set of mutually perpendicular unit vectors along their indicated directions. If one frame is boosted with velocity relative to another, it is convenient to introduce a unit vector in the direction of relative motion. The general boost is
Notice the matrix depends on the direction of the relative motion as well as the rapidity, in all three numbers (two for direction, one for rapidity).
We can cast each of the boost matrices in another form as follows. First consider the boost in the direction. The Taylor expansion of the boost matrix about is
where the derivatives of the matrix with respect to are given by differentiating each entry of the matrix separately, and the notation indicates is set to zero after the derivatives are evaluated. Expanding to first order gives the infinitesimal transformation
which is valid if is small (hence and higher powers are negligible), and can be interpreted as no boost (the first term is the 4×4 identity matrix), followed by a small boost. The matrix
is the generator of the boost in the direction, so the infinitesimal boost is
Now, is small, so dividing by a positive integer gives an even smaller increment of rapidity , and of these infinitesimal boosts will give the original infinitesimal boost with rapidity ,
In the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation
which is the limit definition of the exponential due to Leonhard Euler, and is now true for any .
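A small numerical check of this exponential limit (the explicit matrix below, and its sign convention in (ct, x, y, z) coordinates, are assumptions for illustration):

import numpy as np
from scipy.linalg import expm

phi = 0.5                                   # rapidity
K_x = np.array([[0.0, 1.0, 0.0, 0.0],
                [1.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0]])      # assumed x-boost generator
B = expm(phi * K_x)                         # finite boost from the generator
print(np.allclose(B[:2, :2], [[np.cosh(phi), np.sinh(phi)],
                              [np.sinh(phi), np.cosh(phi)]]))   # True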
Repeating the process for the boosts in the and directions obtains the other generators
and the boosts are
For any direction, the infinitesimal transformation is (small and expansion to first order)
where
is the generator of the boost in direction . It is the full boost generator, a vector of matrices , projected into the direction of the boost . The infinitesimal boost is
Then in the limit of an infinite number of infinitely small steps, we obtain the finite boost transformation
which is now true for any . Expanding the matrix exponential of in its power series
we now need the powers of the generator. The square is
but the cube returns to , and as always the zeroth power is the 4×4 identity, . In general the odd powers are
while the even powers are
therefore the explicit form of the boost matrix depends only the generator and its square. Splitting the power series into an odd power series and an even power series, using the odd and even powers of the generator, and the Taylor series of and about obtains a more compact but detailed form of the boost matrix
where is introduced for the even power series to complete the Taylor series for . The boost is similar to Rodrigues' rotation formula,
Negating the rapidity in the exponential gives the inverse transformation matrix,
In quantum mechanics, relativistic quantum mechanics, and quantum field theory, a different convention is used for the boost generators; all of the boost generators are multiplied by a factor of the imaginary unit .
From experiments
Howard Percy Robertson and others showed that the Lorentz transformation can also be derived empirically. In order to achieve this, it's necessary to write down coordinate transformations that include experimentally testable parameters. For instance, let there be given a single "preferred" inertial frame in which the speed of light is constant, isotropic, and independent of the velocity of the source. It is also assumed that Einstein synchronization and synchronization by slow clock transport are equivalent in this frame. Then assume another frame in relative motion, in which clocks and rods have the same internal constitution as in the preferred frame. The following relations, however, are left undefined:
differences in time measurements,
differences in measured longitudinal lengths,
differences in measured transverse lengths,
depends on the clock synchronization procedure in the moving frame,
then the transformation formulas (assumed to be linear) between those frames are given by:
depends on the synchronization convention and is not determined experimentally, it obtains the value by using Einstein synchronization in both frames. The ratio between and is determined by the Michelson–Morley experiment, the ratio between and is determined by the Kennedy–Thorndike experiment, and alone is determined by the Ives–Stilwell experiment. In this way, they have been determined with great precision to and , which converts the above transformation into the Lorentz transformation.
See also
Lorentz group
Noether's theorem
Poincaré group
Proper time
Relativistic metric
Spinor
Notes
References
General relativity
Special relativity | Derivations of the Lorentz transformations | [
"Physics"
] | 6,944 | [
"Special relativity",
"General relativity",
"Theory of relativity"
] |
39,503,180 | https://en.wikipedia.org/wiki/Plantazolicin | Plantazolicin (PZN) is a natural antibiotic produced by the gram-positive soil bacterium Bacillus velezensis FZB42 (previously Bacillus amyloliquefaciens FZB42). PZN has specifically been identified as a selective bactericidal agent active against Bacillus anthracis, the causative agent of anthrax. This natural product is a ribosomally synthesized and post-translationally modified peptide (RiPP); it can be classified further as a thiazole/oxazole-modified microcin (TOMM) or a linear azole-containing peptide (LAP).
The significance of PZN stems from its narrow-spectrum antibiotic activity. Most antibiotics in clinical use are broad-spectrum, acting against a wide variety of bacteria, and antibiotic resistance to these drugs is common. In contrast, PZN is antibacterial against only a small number of species, including Bacillus anthracis.
History
The genes for the biosynthesis of PZN were first reported in 2008. The natural product was then isolated in 2011 from Bacillus amyloliquefaciens. The structure of PZN was solved later that year by two independent research groups, primarily through high-resolution mass spectrometry and NMR spectroscopy. In 2013, various biomimetic chemical synthesis studies of PZN were reported, including a total synthesis.
Biosynthesis
In bacteria, plantazolicin (PZN) is synthesized first as an unmodified peptide via translation at the ribosome. A series of enzymes then chemically alter the peptide to install its post-translational modifications, including several azole heterocycles and an N-terminal amine dimethylation.
Specifically, during the biosynthesis of PZN in B. velezensis, a ribosomally-synthesized precursor peptide undergoes extensive post-translational modification, including cyclodehydrations and dehydrogenations, catalyzed by a trimeric enzyme complex. This process converts cysteine and serine/threonine residues into thiazole and (methyl)oxazole heterocycles (as seen to the right).
The exact mechanism of the association of the trimeric enzyme complex with the N-terminal leader peptide region is not yet understood; however, it is thought that the leader peptide is cleaved from the core peptide putatively by the peptidase contained in the biosynthetic gene cluster. Following leader peptide removal, the newly formed N-terminus undergoes methylation to yield an Nα,Nα-dimethylarginine. This final modification results in mature PZN.
Other organisms such as Bacillus pumilus, Clavibacter michiganensis subsp. sepedonicus, Corynebacterium urealyticum, and Brevibacterium linens have been identified with similar gene clusters that have the potential to produce PZN-like molecules.
References
Bactericides
Antibiotics
Peptides
Oxazolines
Oxazoles
Thiazoles | Plantazolicin | [
"Chemistry",
"Biology"
] | 667 | [
"Biomolecules by chemical classification",
"Biotechnology products",
"Bactericides",
"Antibiotics",
"Molecular biology",
"Biocides",
"Peptides"
] |
39,504,221 | https://en.wikipedia.org/wiki/Drucker%20stability | Drucker stability (also called the Drucker stability postulates) refers to a set of mathematical criteria that restrict the possible nonlinear stress-strain relations that can be satisfied by a solid material. The postulates are named after Daniel C. Drucker. A material that does not satisfy these criteria is often found to be unstable in the sense that application of a load to a material point can lead to arbitrary deformations at that material point unless an additional length or time scale is specified in the constitutive relations.
The Drucker stability postulates are often invoked in nonlinear finite element analysis. Materials that satisfy these criteria are generally well-suited for numerical analysis, while materials that fail to satisfy this criterion are likely to present difficulties (i.e. non-uniqueness or singularity) during the solution process.
Drucker's first stability criterion
Drucker's first stability criterion (first proposed by Rodney Hill and also called Hill's stability criterion) is a strong condition on the incremental internal energy of a material which states that the incremental internal energy can only increase. The criterion may be written as follows:
dσ : dε ≥ 0 ,
where dσ is the stress increment tensor associated with the strain increment tensor dε through the constitutive relation.
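As a toy illustration (one-dimensional, with invented stress–strain values), the criterion can be checked increment by increment; the softening branch, where stress drops while strain keeps growing, violates it:

strain = [0.000, 0.001, 0.002, 0.003, 0.004]
stress = [0.0, 200.0, 350.0, 330.0, 300.0]   # MPa; the curve softens after the peak

d_sigma = [s2 - s1 for s1, s2 in zip(stress, stress[1:])]
d_eps = [e2 - e1 for e1, e2 in zip(strain, strain[1:])]
print([ds * de >= 0 for ds, de in zip(d_sigma, d_eps)])   # [True, True, False, False]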
Drucker's stability postulate
Drucker's postulate is applicable to elastic-plastic materials and states that in a cycle of plastic deformation the second degree plastic work is always positive. This postulate can be expressed in incremental form as
dσ : dεp ≥ 0 ,
where dεp is the incremental plastic strain tensor.
References
External links
Chapter 3, Constitutive Models -relations between stress and strain, Applied Mechanics of Solids, Allen Bower
Continuum mechanics
Mechanics | Drucker stability | [
"Physics",
"Engineering"
] | 361 | [
"Mechanics",
"Classical mechanics",
"Mechanical engineering",
"Continuum mechanics"
] |
39,507,630 | https://en.wikipedia.org/wiki/Sample%20entropy | Sample entropy (SampEn; more appropriately K_2 entropy or Takens-Grassberger-Procaccia correlation entropy ) is a modification of approximate entropy (ApEn; more appropriately "Procaccia-Cohen entropy"), used for assessing the complexity of physiological and other time-series signals, diagnosing e.g. diseased states. SampEn has two advantages over ApEn: data length independence and a relatively trouble-free implementation. Also, there is a small computational difference: In ApEn, the comparison between the template vector (see below) and the rest of the vectors also includes comparison with itself. This guarantees that probabilities are never zero. Consequently, it is always possible to take a logarithm of probabilities. Because template comparisons with itself lower ApEn values, the signals are interpreted to be more regular than they actually are. These self-matches are not included in SampEn. However, since SampEn makes direct use of the correlation integrals, it is not a real measure of information but an approximation. The foundations and differences with ApEn, as well as a step-by-step tutorial for its application is available at.
SampEn is indeed identical to the "correlation entropy" K_2 of Grassberger & Procaccia , except that it is suggested in the latter that certain limits should be taken in order to achieve a result invariant under changes of variables. No such limits and no invariance properties are considered in SampEn.
There is a multiscale version of SampEn as well, suggested by Costa and others. SampEn can be used in biomedical and biomechanical research, for example to evaluate postural control.
Definition
Like approximate entropy (ApEn), Sample entropy (SampEn) is a measure of complexity. But it does not include self-similar patterns as ApEn does. For a given embedding dimension m, tolerance r and number of data points N, SampEn is the negative natural logarithm of the probability that if two sets of simultaneous data points of length m have distance less than r then two sets of simultaneous data points of length m + 1 also have distance less than r. And we represent it by SampEn(m, r, N) (or by SampEn(m, r, τ, N) when the sampling time τ is included).
Now assume we have a time-series data set of length N, {x1, x2, ..., xN}, with a constant time interval τ. We define a template vector of length m, such that X_m(i) = {x_i, x_(i+1), ..., x_(i+m−1)}, and the distance function d[X_m(i), X_m(j)] (i≠j) is to be the Chebyshev distance (but it could be any distance function, including Euclidean distance). We define the sample entropy to be
SampEn = −ln(A/B)
Where
A = number of template vector pairs of length m + 1 having d[X_(m+1)(i), X_(m+1)(j)] < r
B = number of template vector pairs of length m having d[X_m(i), X_m(j)] < r
It is clear from the definition that A will always have a value smaller than or equal to B. Therefore, SampEn(m, r, N) will always be either zero or a positive value. A smaller value of SampEn also indicates more self-similarity in the data set or less noise.
Generally we take the value of m to be 2 and the value of r to be 0.2 × std.
where std stands for the standard deviation, which should be taken over a very large dataset. For instance, an r value of 6 ms is appropriate for sample entropy calculations of heart rate intervals, since this corresponds to 0.2 × std for a very large population.
Multiscale SampEn
The definition mentioned above is a special case of multiscale SampEn with δ = 1, where δ is called the skipping parameter. In multiscale SampEn, template vectors are defined with a certain interval between their elements, specified by the value of δ. And the modified template vector is defined as X_(m,δ)(i) = {x_i, x_(i+δ), x_(i+2δ), ..., x_(i+(m−1)δ)},
and sampEn can be written as
And we calculate A and B like before.
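For concreteness, a small sketch of how the template construction generalizes (the helper name and slicing convention below are assumptions made for illustration, not part of the original definition):

def construct_templates_delayed(timeseries_data: list, m: int, delta: int):
    # Templates whose m elements are taken every delta-th sample; delta = 1
    # reproduces the ordinary (consecutive) template vectors.
    span = (m - 1) * delta
    num_windows = len(timeseries_data) - span
    return [
        timeseries_data[start : start + span + 1 : delta]
        for start in range(0, num_windows)
    ]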
Implementation
Sample entropy can be implemented easily in many different programming languages. Below lies an example written in Python.
from itertools import combinations
from math import log
def construct_templates(timeseries_data: list, m: int = 2):
    # All overlapping windows (template vectors) of length m.
    num_windows = len(timeseries_data) - m + 1
    return [timeseries_data[x : x + m] for x in range(0, num_windows)]


def get_matches(templates: list, r: float):
    # Count template pairs whose Chebyshev distance is below the tolerance r.
    return len(
        list(filter(lambda x: is_match(x[0], x[1], r), combinations(templates, 2)))
    )


def is_match(template_1: list, template_2: list, r: float):
    return all([abs(x - y) < r for (x, y) in zip(template_1, template_2)])


def sample_entropy(timeseries_data: list, window_size: int, r: float):
    B = get_matches(construct_templates(timeseries_data, window_size), r)
    A = get_matches(construct_templates(timeseries_data, window_size + 1), r)
    return -log(A / B)
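A brief usage sketch (the data values are an illustrative assumption, not from the original text), reusing the functions defined above with m = 2 and r chosen as 0.2 times the standard deviation; the strongly self-similar series gives a small SampEn:

from statistics import pstdev

data = [1, 2, 3, 2] * 3 + [1]              # a strongly self-similar (periodic) series
r = 0.2 * pstdev(data)                     # common tolerance choice: 0.2 * std
print(sample_entropy(data, window_size=2, r=r))   # about 0.18, i.e. -ln(10/12)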
An equivalent example in numerical Python.
import numpy
def construct_templates(timeseries_data, m):
    num_windows = len(timeseries_data) - m + 1
    return numpy.array([timeseries_data[x : x + m] for x in range(0, num_windows)])


def get_matches(templates, r):
    return len(
        list(filter(lambda x: is_match(x[0], x[1], r), combinations(templates)))
    )


def combinations(x):
    # All index pairs (i, j) with i < j, gathered into an array of template pairs.
    idx = numpy.stack(numpy.triu_indices(len(x), k=1), axis=-1)
    return x[idx]


def is_match(template_1, template_2, r):
    return numpy.all([abs(x - y) < r for (x, y) in zip(template_1, template_2)])


def sample_entropy(timeseries_data, window_size, r):
    B = get_matches(construct_templates(timeseries_data, window_size), r)
    A = get_matches(construct_templates(timeseries_data, window_size + 1), r)
    return -numpy.log(A / B)
Examples written in other languages can be found for:
Matlab
R
Rust
See also
Kolmogorov complexity
Approximate entropy
References
Statistical signal processing
Entropy
Articles with example Python (programming language) code | Sample entropy | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,322 | [
"Thermodynamic properties",
"Statistical signal processing",
"Physical quantities",
"Quantity",
"Entropy",
"Engineering statistics",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Dynamical systems"
] |
58,579,610 | https://en.wikipedia.org/wiki/Phosphenium | Phosphenium ions, not to be confused with phosphonium or phosphirenium, are divalent cations of phosphorus of the form [PR2]+. Phosphenium ions have long been proposed as reaction intermediates.
Synthesis
Legacy methods
The first cyclic phosphenium compounds were reported in 1972 by Suzanne Fleming and coworkers. Acyclic phosphenium compounds were synthesized by Fleming's thesis advisor Robert Parry in 1976.
Methods
Several methods exist for the preparation of two-coordinate phosphorus ions. A common method involves halide abstraction from halophosphines:
R2PCl + AlCl3 → [R2P+][]
Protonolysis of tris(dimethylamino)phosphine affords the phosphenium salt:
P(NMe2)3 + 2 HOTf → [P(NMe2)2]OTf + [HNMe2]OTf
Weakly coordinating anions are desirable. Triflic acid is often used.
N-heterocyclic phosphenium (NHP) have also been reported. Reaction of PI3 with the α-diimine yields the NHP cation by reduction of the diimine and oxidation of iodine.
Structure and bonding
According to X-ray crystallography, [(i-Pr2N)2P]+ is nearly planar consistent with sp2-hybridized phosphorus center. The planarity of the nitrogen center is consistent with the resonance of the lone pair of the nitrogen atom as a pi bond to the empty phosphorus 3p orbital perpendicular to the N−P−N plane. An idealized sp2 phosphorus center would expect an N−P−N angle of 120°. The tighter N−P−N angle observed in the crystal structure can be interpreted as the result of repulsion between the phosphorus lone pair with the bulky i-Pr2N ligands, as the and molecules have bond angles closer to 110° and 90°, respectively.
Calculations also show that the analogy to carbenes is lessened by strongly π-donating substituents. With NH2 substituents, the phosphenium cation assumes allyl character. Generalized Valence Bond (GVB) calculations describe the phosphenium ions as having a singlet ground state; the singlet-triplet separation increases with increasing electronegativity of the ligands. The singlet-triplet separation for and were calculated to be 20.38 and 84.00 kcal/mol, respectively. Additionally, the triplet state of the phosphenium ion displays a greater bond angle at the phosphorus. For example, the calculated bond angle of the singlet state of is approximately 94° compared to 121.5° in the triplet state. Calculated bond lengths between the two states are not significantly impacted.
Reactivity
Phosphenium is isoelectronic with singlet (Fisher) carbenes and are therefore expected to be Lewis acidic. Adducts are produced by combining [P(NMe2)2]+ and P(NMe2)3:
[P(NMe2)2]+ + P(NMe2)3 → [(Me2N)3P−P(NMe2)2]+
Being electrophilic, they undergo C−H insertion reactions.
Reactions with dienes
Phosphenium intermediates are invoked as intermediates in the McCormack reaction, a method for the synthesis of organophosphorus heterocycles. An illustrative reaction involves phenyldichlorophosphine and isoprene:
Isolated phosphenium salts undergo this reaction readily.
There are few examples of reactions catalyzed by phosphenium. In 2018, Rei Kinjo and coworkers reported the hydroboration of pyridines by the NHP salt, 1,3,2-diazaphosphenium triflate. The NHP is proposed to act as a hydride transfer reagent in this reaction.
Coordination chemistry
Phosphenium ions serve as ligands in coordination chemistry. [(R2N)2PFe(CO)4]+ was prepared by two methods: the first being the abstraction of a fluoride ion from (R2N)2(F)PFe(CO)4 by PF5. The second method is the direct substitution reaction of Fe(CO)5 by the phosphenium ion [P(NR2)]+. Related complexes exist of the type Fe(CO)4L, where L = [(Me2N)2P]+, [(Et2N)2P]+, [(Me2N)(Cl)P]+, and [(en)P]+ (en = C2H4(NH2)2).
N-heterocyclic phosphenium-transition metal complexes are anticipated due to their isoelectronicity to N-heterocyclic carbenes. In 2004, Martin Nieger and coworkers synthesized two Cobalt-NHP complexes. Experimental and computation analysis of the complexes confirmed the expected L→M σ donation and the M→L π backbonding, though the phosphenium was observed to have reduced σ donor ability. It was suggested that this is due to the greater s orbital-character of the phosphorus lone pair compared to the lone pair of the analogous carbene. Additional studies of NHP ligands by Christine Thomas and coworkers in 2012, likened the phosphenium to nitrosyl. Nitrosyl is well known for its redox non-innocence, coordinating in either a bent or linear geometry that possess different L–M bonding modes. It was observed that NHPs in complex with a transition metal may have either a planar or pyramidal geometry about the phosphorus, reminiscent of the linear versus bent geometries of nitrosyl. Highly electron-rich metal complexes were observed to have pyramidal phosphorus, while less electron-rich metals showed greater phosphenium character at the phosphorus. Pyramidal phosphorus indicates significant lone pair character at phosphorus, suggesting that the L→M σ donation and the M→L π backbonding interactions have been replaced with M→L σ donation, formally oxidizing the metal center by two electrons.
Additional reading
Cycloadditions
Adducts
Electrophilic reactions
Coordination complexes
References
Cations
Phosphorus compounds | Phosphenium | [
"Physics",
"Chemistry"
] | 1,362 | [
"Matter",
"Functional groups",
"Octet-deficient functional groups",
"Cations",
"Ions"
] |
58,580,000 | https://en.wikipedia.org/wiki/Bernoulli%20polynomials%20of%20the%20second%20kind | The Bernoulli polynomials of the second kind , also known as the Fontana–Bessel polynomials, are the polynomials defined by the following generating function:
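In one common notation, writing ψ_n(x) for the nth polynomial, the generating function is usually stated as

$$\frac{z\,(1+z)^{x}}{\ln(1+z)} \;=\; \sum_{n=0}^{\infty} z^{n}\,\psi_n(x), \qquad |z|<1,$$

so that, for example, ψ_0(x) = 1 and ψ_1(x) = x + 1/2.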
The first five polynomials are:
Some authors define these polynomials slightly differently
so that
and may also use a different notation for them (the most used alternative notation is ). Under this convention, the polynomials form a Sheffer sequence.
The Bernoulli polynomials of the second kind were largely studied by the Hungarian mathematician Charles Jordan, but their history may also be traced back to much earlier works.
Integral representations
The Bernoulli polynomials of the second kind may be represented via these integrals
as well as
These polynomials are, therefore, up to a constant, the antiderivative of the binomial coefficient and also that of the falling factorial.
Explicit formula
For an arbitrary , these polynomials may be computed explicitly via the following summation formula
where are the signed Stirling numbers of the first kind and are the Gregory coefficients.
The expansion of the Bernoulli polynomials of the second kind into a Newton series reads
It can be shown using the second integral representation and Vandermonde's identity.
Recurrence formula
The Bernoulli polynomials of the second kind satisfy the recurrence relation
or equivalently
The repeated difference produces
Symmetry property
The main property of the symmetry reads
Some further properties and particular values
Some properties and particular values of these polynomials include
where are the Cauchy numbers of the second kind and are the central difference coefficients.
Some series involving the Bernoulli polynomials of the second kind
The digamma function may be expanded into a series with the Bernoulli polynomials of the second kind
in the following way
and hence
and
where is Euler's constant. Furthermore, we also have
where is the gamma function. The Hurwitz and Riemann zeta functions may be expanded into these polynomials as follows
and
and also
The Bernoulli polynomials of the second kind are also involved in the following relationship
between the zeta functions, as well as in various formulas for the Stieltjes constants, e.g.
and
which are both valid for and .
See also
Bernoulli polynomials
Stirling polynomials
Gregory coefficients
Bernoulli numbers
Difference polynomials
Poly-Bernoulli number
Mittag-Leffler polynomials
References
Mathematics
Polynomials
Number theory | Bernoulli polynomials of the second kind | [
"Mathematics"
] | 461 | [
"Algebra",
"Discrete mathematics",
"Number theory",
"Polynomials"
] |
58,587,709 | https://en.wikipedia.org/wiki/OR-Tools | Google OR-Tools is a free and open-source software suite developed by Google for solving linear programming (LP), mixed integer programming (MIP), constraint programming (CP), vehicle routing (VRP), and related optimization problems.
OR-Tools is a set of components written in C++ but provides wrappers for Java, .NET and Python.
It is distributed under the Apache License 2.0.
History
OR-Tools was created by Laurent Perron in 2011.
In 2014, Google's open source linear programming solver, GLOP, was released as part of OR-Tools.
The CP-SAT solver bundled with OR-Tools has been consistently winning gold medals in the MiniZinc Challenge, an international constraint programming competition.
Features
The OR-Tools supports a variety of programming languages, including:
Object-oriented interfaces for C++
A Java wrapper package
A .NET and .NET Framework wrapper package
A Python wrapper package
OR-Tools supports a wide range of problem types, among them:
Assignment problem
Linear programming
Mixed-integer programming
Constraint programming
Vehicle routing problem
Network flow algorithms
It supports the FlatZinc modeling language.
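As a minimal sketch of the Python wrapper in use (the variable bounds, constraint coefficients and objective below are arbitrary illustrative choices, not an example from the OR-Tools documentation), a small linear program can be solved with the GLOP solver mentioned above roughly as follows:

```python
# Solve a tiny linear program with Google's GLOP solver via the OR-Tools
# Python wrapper. All numbers here are arbitrary illustrative values.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")

# Two continuous decision variables with simple bounds.
x = solver.NumVar(0, 10, "x")
y = solver.NumVar(0, 10, "y")

# Linear constraints.
solver.Add(x + 2 * y <= 14)
solver.Add(3 * x - y >= 0)

# Linear objective.
solver.Maximize(3 * x + 4 * y)

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    print("objective =", solver.Objective().Value())
    print("x =", x.solution_value(), ", y =", y.solution_value())
```

Integer and constraint-programming models are built in a similar declarative style through the other wrappers listed above.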
See also
COIN-OR
CPLEX
GLPK
SCIP (optimization software)
FICO Xpress
MOSEK
References
Bibliography
External links
Source code
Video introduction to OR-Tools
Mathematical optimization software
Numerical programming languages
Numerical software
Optimization algorithms and methods
Software using the Apache license | OR-Tools | [
"Mathematics"
] | 291 | [
"Numerical software",
"Mathematical software"
] |
36,630,453 | https://en.wikipedia.org/wiki/N%20%3D%208%20supergravity | In four spacetime dimensions, N = 8 supergravity, speculated by Stephen Hawking, is the most symmetric quantum field theory which involves gravity and a finite number of fields. It can be found from a dimensional reduction of eleven-dimensional supergravity by making the size of seven of the dimensions go to zero. It has eight supersymmetries, which is the most any gravitational theory can have, since there are eight half-steps between spin 2 and spin −2. (The spin 2 graviton is the particle with the highest spin in this theory.) More supersymmetries would mean the particles would have superpartners with spins higher than 2. The only theories with spins higher than 2 which are consistent involve an infinite number of particles (such as String Theory and Higher-Spin Theories). Stephen Hawking in his Brief History of Time speculated that this theory could be the Theory of Everything. However, in later years this was abandoned in favour of string theory. There has been renewed interest in the 21st century, with the possibility that this theory may be finite.
Calculations
Recent work expanding N = 8 supergravity in terms of Feynman diagrams has shown that N = 8 supergravity is in some ways a product of two N = 4 super Yang–Mills theories. This is written schematically as:
N = 8 supergravity = (N = 4 super Yang–Mills) × (N = 4 super Yang–Mills)
This is not surprising, as N = 8 supergravity contains six independent representations of N = 4 super Yang–Mills.
Particle content
The theory contains 1 graviton (spin 2), 8 gravitinos (spin 3/2), 28 vector bosons (spin 1), 56 fermions (spin 1/2), 70 scalar fields (spin 0) where we don't distinguish particles with negative spin. These numbers are simple combinatorial numbers that come from Pascal's Triangle and also the number of ways of writing n as a sum of 8 nonnegative cubes A173681.
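The combinatorial origin of these multiplicities can be checked directly: they are the binomial coefficients C(8, k) read off one row of Pascal's triangle. A short illustrative Python check (not part of the original discussion):

```python
# Verify that the quoted state counts 1, 8, 28, 56, 70 are C(8, k) for k = 0..4,
# matching the spins 2, 3/2, 1, 1/2, 0 of the N = 8 supergravity multiplet.
from math import comb

spins = [2, 3/2, 1, 1/2, 0]
for spin, k in zip(spins, range(5)):
    print(f"spin {spin}: {comb(8, k)} states")
# Output: 1 graviton, 8 gravitinos, 28 vector bosons, 56 fermions, 70 scalars.
```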
One reason why the theory was abandoned is that the 28 vector bosons form an O(8) gauge group, which is too small to contain the standard model U(1) x SU(2) x SU(3) gauge group; that group can only fit within the orthogonal group O(10).
For model building, it has been assumed that almost all the supersymmetries would be broken in nature, leaving just one supersymmetry (N = 1), although, because of the lack of evidence for N = 1 supersymmetry, higher supersymmetries such as N = 2 are now also being considered.
Connection with superstring theory
N = 8 supergravity can be viewed as the low-energy approximation of the type IIA or type IIB superstring with 6 of its dimensions compactified on a 6-torus. Equivalently, it may also be viewed as 11D M-theory with seven of its dimensions compactified on a 7-torus or 7-sphere.
See also
Pure 4D N = 1 supergravity
Double copy theory
References
Supersymmetric quantum field theory
Theories of gravity | N = 8 supergravity | [
"Physics"
] | 666 | [
"Supersymmetric quantum field theory",
"Theoretical physics",
"Theories of gravity",
"Supersymmetry",
"Symmetry"
] |
36,631,161 | https://en.wikipedia.org/wiki/Forecasting%20complexity | Forecasting complexity is a measure of complexity put forward (under the original name of) by the physicist Peter Grassberger.
It was later renamed "statistical complexity" by James P. Crutchfield and Karl Young.
References
Measures of complexity | Forecasting complexity | [
"Mathematics"
] | 49 | [
"Applied mathematics",
"Applied mathematics stubs"
] |
36,633,800 | https://en.wikipedia.org/wiki/Robbins%27%20theorem | In graph theory, Robbins' theorem, named after Herbert Robbins, states that the graphs that have strong orientations are exactly the 2-edge-connected graphs. That is, it is possible to choose a direction for each edge of an undirected graph, turning it into a directed graph that has a path from every vertex to every other vertex, if and only if the graph is connected and has no bridge.
Orientable graphs
Robbins' characterization of the graphs with strong orientations may be proven using ear decomposition, a tool introduced by Robbins for this task.
If a graph has a bridge, then it cannot be strongly orientable, for no matter which orientation is chosen for the bridge there will be no path from one of the two endpoints of the bridge to the other.
In the other direction, it is necessary to show that every connected bridgeless graph can be strongly oriented. As Robbins proved, every such graph has a partition into a sequence of subgraphs called "ears", in which the first subgraph in the sequence is a cycle and each subsequent subgraph is a path, with the two path endpoints both belonging to earlier ears in the sequence. (The two path endpoints may be equal, in which case the subgraph is a cycle.) Orienting the edges within each ear so that it forms a directed cycle or a directed path leads to a strongly connected orientation of the overall graph.
Related results
An extension of Robbins' theorem to mixed graphs shows that, if a graph in which some edges may be directed and others undirected contains a path respecting the edge orientations from every vertex to every other vertex, then any undirected edge that is not a bridge may be made directed without changing the connectivity of the graph. In particular, a bridgeless undirected graph may be made into a strongly connected directed graph by a greedy algorithm that directs edges one at a time while preserving the existence of paths between every pair of vertices; it is impossible for such an algorithm to get stuck in a situation in which no additional orientation decisions can be made.
Algorithms and complexity
A strong orientation of a given bridgeless undirected graph may be found in linear time by performing a depth-first search of the graph, orienting all edges in the depth-first search tree away from the tree root, and orienting all the remaining edges (which must necessarily connect an ancestor and a descendant in the depth-first search tree) from the descendant to the ancestor. Although this algorithm is not suitable for parallel computers, due to the difficulty of performing depth-first search on them, alternative algorithms are available that solve the problem efficiently in the parallel model. Parallel algorithms are also known for finding strongly connected orientations of mixed graphs.
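The construction just described can be sketched in a few lines of Python; the code below assumes a connected, bridgeless, undirected graph given as an adjacency list with hashable, orderable vertex labels, and is meant only as an illustration of the idea, not a production implementation.

```python
# Strong orientation of a bridgeless undirected graph via depth-first search:
# tree edges are oriented away from the root, all other edges from the
# descendant endpoint to the ancestor endpoint.
def strong_orientation(adj):
    """adj: dict mapping each vertex to an iterable of its neighbours."""
    order = {}           # discovery time of each vertex
    oriented = []        # resulting directed edges (u, v)
    seen = set()         # undirected edges already given a direction

    def edge_key(u, v):
        return (u, v) if u <= v else (v, u)

    def dfs(u):
        order[u] = len(order)
        for v in adj[u]:
            if edge_key(u, v) in seen:
                continue
            seen.add(edge_key(u, v))
            if v not in order:            # tree edge: points away from the root
                oriented.append((u, v))
                dfs(v)
            elif order[v] < order[u]:     # back edge: descendant -> ancestor
                oriented.append((u, v))

    dfs(next(iter(adj)))
    return oriented

# Example: a 4-cycle (2-edge-connected), which admits a strong orientation.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(strong_orientation(cycle))   # [(0, 1), (1, 2), (2, 3), (3, 0)]
```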
Applications
Robbins originally motivated his work by an application to the design of one-way streets in cities. Another application arises in structural rigidity, in the theory of grid bracing. This theory concerns the problem of making a square grid, constructed from rigid rods attached at flexible joints, rigid by adding more rods or wires as cross bracing on the diagonals of the grid. A set of added rods makes the grid rigid if an associated undirected graph is connected, and is doubly braced (remaining rigid if any edge is removed) if in addition it is bridgeless. Analogously, a set of added wires (which can bend to reduce the distance between the points they connect, but cannot expand) makes the grid rigid if an associated directed graph is strongly connected. Therefore, reinterpreting Robbins' theorem for this application, the doubly braced structures are exactly the structures whose rods can be replaced by wires while remaining rigid.
Notes
References
Graph connectivity
Theorems in graph theory | Robbins' theorem | [
"Mathematics"
] | 758 | [
"Graph connectivity",
"Graph theory",
"Theorems in discrete mathematics",
"Mathematical relations",
"Theorems in graph theory"
] |
52,375,044 | https://en.wikipedia.org/wiki/VolturnUS | The VolturnUS is a floating concrete structure that supports a wind turbine, designed by University of Maine Advanced Structures and Composites Center and deployed by DeepCwind Consortium in 2013. The VolturnUS can support wind turbines in water depths of or more.
The DeepCwind Consortium and its partners deployed a 1:8 scale VolturnUS in 2013. Efforts are now underway by Maine Aqua Ventus 1, GP, LLC, to deploy two full-scale VolturnUS structures off the coast of Monhegan Island, Maine, in the UMaine Deepwater Offshore Wind Test Site. This demonstration project, known as New England Aqua Ventus I, is planned to deploy two 6 MW wind turbines by 2020.
The University of Maine announced in September 2017 that its VolturnUS design became the first floating offshore wind turbine to meet American Bureau of Shipping requirements for floating offshore wind turbines, demonstrating the feasibility of the VolturnUS concept.
The design review was conducted against the American Bureau of Shipping (ABS) Guide for Building and Classing Floating Offshore Wind Turbine Installations.
History
North America’s first floating grid-connected wind turbine was lowered into the Penobscot River in Maine on 31 May 2013 by the University of Maine Advanced Structures and Composites Center and its partners.
The VolturnUS 1:8 was towed down the Penobscot River where it was deployed for 18 months in Castine, ME, along with a UMaine-developed floating LiDAR.
The prototype employs a 20 kW Renewegy VP-20 wind turbine with a rotor.
It is tall - that is 1:8 the scale of a 6-megawatt (MW), rotor diameter design. The VolturnUS design utilizes a concrete semi-submersible floating hull and a composite materials tower designed to reduce both capital and operation & maintenance costs, and to allow local manufacturing throughout the US and the world. The VolturnUS technology is the culmination of collaborative research and development conducted by the University of Maine-led DeepCwind Consortium.
During its deployment, it experienced numerous storm events representative of design environmental conditions prescribed by the American Bureau of Shipping Guide for Building and Classing Floating Offshore Wind Turbines, 2013.
It was taken out of the water in November 2014.
VolturnUS' floating concrete hull technology can support wind turbines in water depths of or more, and has the potential to significantly reduce the cost of offshore wind.
With 12 independent cost estimates from around the U.S. and the world, it has been found to significantly reduce costs compared to existing floating systems. The design has also received a complete third-party engineering review.
Scaling up
In June 2016, the UMaine-led New England Aqua Ventus I project won top tier status from the US Department of Energy (DOE) Advanced Technology Demonstration Program for Offshore Wind. This means that the New England Aqua Ventus project is now automatically eligible for an additional $39.9 million in construction funding from the DOE, as long as the project continues to meet its milestones. The developer asserts that the New England Aqua Ventus I project will likely become the first commercial scale floating wind project in the Americas.
U.S. Senators Susan Collins and Angus King announced in June 2016 that Maine’s New England Aqua Ventus I floating offshore wind demonstration project was selected by the U.S. Department of Energy to participate in the Offshore Wind Advanced Technology Demonstration program. The project is opposed by Senator Dow with Bill LR1613.
New England Aqua Ventus I is one of two leading projects that are each eligible for up to $39.9 million in additional funding over three years for the construction phase of the demonstration program.
In 2020, UMaine expected costs to be $74/MWh by 2027 and $57/MWh by 2032. In 2021, Maine applied for an offshore test area.
See also
Floating wind turbine
DeepCwind Consortium
UMaine Deepwater Offshore Wind Test Site
Wind Power in Maine
Offshore wind power in the United States
List of offshore wind farms in the United States
References
External links
Aqua Ventus I, wind farm website
Advance Structures and Composites Center - VoluturnUS
Wind power in Maine
Offshore engineering
Renewable energy policy in the United States
University of Maine
Floating wind turbines | VolturnUS | [
"Engineering"
] | 858 | [
"Construction",
"Floating wind turbines",
"Offshore engineering"
] |
53,758,047 | https://en.wikipedia.org/wiki/Radiochromic%20film | Radiochromic film is a type of self-developing film typically used in the testing and characterisation of radiographic equipment such as CT scanners and radiotherapy linacs. The film contains a dye which changes colour when exposed to ionising radiation, allowing the level of exposure and beam profile to be characterised. Unlike x-ray film no developing process is required and results can be obtained almost instantly, while it is insensitive to visible light (making handling easier).
Mechanism
For medical dosimetry "gafchromic dosimetry film (...) is arguably the most widely used commercial product". Several types of gafchromic film are marketed with differing properties. One type, MD-55, is made up of layers of polyester substrate with active emulsion layers adhered (approximately 16μm thick). The active layer consists of polycrystalline, substituted-diacetylene and the colour change occurs due to "progressive 1,4-trans additions as polyconjugations along the ladder-like polymer chains".
Usage
Radiochromic films have been in general use since the late 1960s, although the general principle has been known about since the 19th century.
Profiling
Radiochromic film can provide high spatial resolution information about the distribution of radiation. Depending on the scanning technique, sub-millimetre resolution can be achieved.
Dosimetry
Unlike many other types of radiation detector, radiochromic film can be used for absolute dosimetry where information about absorbed dose is obtained directly. It is typically scanned, for example using a standard flat bed scanner, to provide accurate quantification of the optical density and therefore degree of exposure. Gafchromic film has been shown to provide measurements accurate to 2% over doses of 0.2–100 Gray (Gy).
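In practice the conversion from the scanned image to dose goes through a calibration curve fitted to films exposed to known doses. The sketch below is only illustrative: the power-law form of the curve and all numerical coefficients are assumptions for the example, not values from this article or from a specific film product.

```python
# Illustrative conversion of scanner pixel values to dose via net optical
# density and an assumed, pre-fitted calibration curve.
import math

def net_optical_density(pv_exposed, pv_unexposed):
    """Net OD from transmission-scan pixel values of exposed vs. blank film."""
    return math.log10(pv_unexposed / pv_exposed)

def dose_from_net_od(net_od, b=10.0, c=35.0, n=2.5):
    """Assumed calibration curve: dose in Gy = b*netOD + c*netOD**n."""
    return b * net_od + c * net_od ** n

od = net_optical_density(pv_exposed=24000, pv_unexposed=42000)
print(f"net OD = {od:.3f}, estimated dose = {dose_from_net_od(od):.2f} Gy")
```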
References
Further reading
Medical physics
Ionising radiation detectors
X-ray instrumentation | Radiochromic film | [
"Physics",
"Technology",
"Engineering"
] | 389 | [
"Applied and interdisciplinary physics",
"Radioactive contamination",
"Measuring instruments",
"X-ray instrumentation",
"Ionising radiation detectors",
"Medical physics"
] |
53,759,905 | https://en.wikipedia.org/wiki/Williams%20spray%20equation | In combustion, the Williams spray equation, also known as the Williams–Boltzmann equation, describes the statistical evolution of sprays contained in another fluid, analogous to the Boltzmann equation for the molecules, named after Forman A. Williams, who derived the equation in 1958.
Mathematical description
The sprays are assumed to be spherical with radius , even though the assumption is valid for solid particles (liquid droplets) when their shape has no consequence on the combustion. For liquid droplets to be nearly spherical, the spray has to be dilute (total volume occupied by the sprays is much less than the volume of the gas) and the Weber number , where is the gas density, is the spray droplet velocity, is the gas velocity and is the surface tension of the liquid spray, should be .
The equation is described by a number density function , which represents the probable number of spray particles (droplets) of chemical species (of total species), that one can find with radii between and , located in the spatial range between and , traveling with a velocity in between and , having the temperature in between and at time . Then the spray equation for the evolution of this density function is given by
where
is the force per unit mass acting on the species spray (acceleration applied to the sprays),
is the rate of change of the size of the species spray,
is the rate of change of the temperature of the species spray due to heat transfer,
is the rate of change of number density function of species spray due to nucleation, liquid breakup etc.,
is the rate of change of number density function of species spray due to collision with other spray particles.
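Collecting the terms defined above, the spray equation is commonly written in a form along the following lines (the precise notation varies between authors, so this should be read as a representative statement rather than the unique one):

$$\frac{\partial f_j}{\partial t}
+ \nabla_x \cdot \left( \mathbf{v}\, f_j \right)
+ \nabla_v \cdot \left( \mathbf{F}_j\, f_j \right)
+ \frac{\partial}{\partial r} \left( R_j\, f_j \right)
+ \frac{\partial}{\partial T} \left( \dot{E}_j\, f_j \right)
= Q_j + \Gamma_j , \qquad j = 1, \dots, M .$$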
A simplified model for liquid propellant rocket
This model for the rocket motor was developed by Probert, Williams and Tanasawa. It is reasonable to neglect , for distances not very close to the spray atomizer, where major portion of combustion occurs. Consider a one-dimensional liquid-propellent rocket motor situated at , where fuel is sprayed. Neglecting (density function is defined without the temperature so accordingly dimensions of changes) and due to the fact that the mean flow is parallel to axis, the steady spray equation reduces to
where is the velocity in direction. Integrating with respect to the velocity results
The contribution from the last term (spray acceleration term) becomes zero (using Divergence theorem) since when is very large, which is typically the case in rocket motors. The drop size rate is well modeled using vaporization mechanisms as
where is independent of , but can depend on the surrounding gas. Defining the number of droplets per unit volume per unit radius and average quantities averaged over velocities,
the equation becomes
If further assumed that is independent of , and with a transformed coordinate
If the combustion chamber has varying cross-section area , a known function for and with area at the spraying location, then the solution is given by
.
where are the number distribution and mean velocity at respectively.
See also
Boltzmann equation
Spray (liquid drop)
Liquid-propellant rocket
Smoluchowski coagulation equation
References
Combustion
Eponymous equations of physics
Fluid dynamics | Williams spray equation | [
"Physics",
"Chemistry",
"Engineering"
] | 626 | [
"Equations of physics",
"Chemical engineering",
"Eponymous equations of physics",
"Combustion",
"Piping",
"Fluid dynamics"
] |
53,763,419 | https://en.wikipedia.org/wiki/Amable%20Li%C3%B1%C3%A1n | Amable Liñán Martínez (born 1934 in Noceda de Cabrera, Castrillo de Cabrera, León, Spain) is a Spanish aeronautical engineer working in the field of combustion.
Biography
He holds a PhD in Aeronautical Engineering from the Technical University of Madrid, advised by Gregorio Millán Barbany, and the degree of Aeronautical Engineer from Caltech, advised by Frank E. Marble.
He is currently Professor of Fluid Mechanics and professor emeritus at the Higher Technical School of Aeronautical Engineers of the Polytechnic University of Madrid (attached to the school's Department of Motopropulsion and Thermofluid Dynamics). He has taught at universities in California and Michigan, at Princeton University in the United States, and in Marseille, France, among others. Since 1997 he has been an adjunct professor at Yale University.
Research
He has focused his research on basic problems of combustion, including both reactor dynamics and planetary probe dynamics, in the latter case working directly for NASA and the European Space Agency.
The diffusion flame structure in counterflow was analyzed by him in 1974 through activation-energy asymptotics.
Publications
He is the author of several books and numerous scientific research papers.
Honors
In 1989 he was elected member of the Royal Academy of Exact, Physical and Natural Sciences. He is also a member of the Royal Academy of Engineering of Spain, France and Mexico. He is also a member of the scientific board of the IMDEA Energy Institute. He is also an elected foreign member of National Academy of Engineering for discoveries using asymptotic analyses in combustion and for contributions to advance engineering science. In 2007 he received the "Miguel Catalán" Research Award from the Community of Madrid and was awarded in 1993 with the Prince of Asturias Award for Scientific and Technical Research. A workshop in honor of Liñán's work was conducted in 2004 and the workshop papers are published in a book titled Simplicity, rigor and relevance in fluid mechanics : a volume in honor of Amable Liñán, CIMNE, (2004).
See also
References
External links
Spanish engineers
Fluid dynamicists
1934 births
Living people
California Institute of Technology alumni
Members of the United States National Academy of Engineering
Fellows of the Combustion Institute | Amable Liñán | [
"Chemistry"
] | 435 | [
"Fellows of the Combustion Institute",
"Combustion",
"Fluid dynamicists",
"Fluid dynamics"
] |
53,764,217 | https://en.wikipedia.org/wiki/Three-dimensional%20X-ray%20diffraction | Three-dimensional X-ray diffraction (3DXRD) is a microscopy technique using hard X-rays (with energy in the 30-100 keV range) to investigate the internal structure of polycrystalline materials in three dimensions. For a given sample, 3DXRD returns the shape, juxtaposition, and orientation of the crystallites ("grains") it is made of. 3DXRD allows investigating micrometer- to millimetre-sized samples with resolution ranging from hundreds of nanometers to micrometers. Other techniques employing X-rays to investigate the internal structure of polycrystalline materials include X-ray diffraction contrast tomography (DCT) and high energy X-ray diffraction (HEDM).
Compared with destructive techniques, e.g. three-dimensional electron backscatter diffraction (3D EBSD), with which the sample is serially sectioned and imaged, 3DXRD and similar X-ray nondestructive techniques have the following advantages:
They require less sample preparation, thus limiting the introduction of new structures in the sample.
They can be used to investigate larger samples and to employ more complicated sample environments.
They enable to study how 3D grain structures evolve with time.
Since measurements do not alter the sample, different types of analysis can be made in sequence.
Experimental setup
3DXRD measurements are performed using various experimental geometries. The classical 3DXRD setup is similar to the conventional tomography setting used at synchrotrons: the sample, mounted on a rotation stage, is illuminated using quasi-parallel monochromatic X-ray beam. Each time a certain grain within the sample satisfies the Bragg condition, a diffracted beam is generated. This signal is transmitted through the sample and collected by two-dimensional detectors. Since different grains satisfy the Bragg condition at different angles, the sample is rotated to probe the complete sample structure. Crucial for 3DXRD is the idea to mimic a three-dimensional detector by positioning a number of two-dimensional detectors at different distances from the centre of rotation of the sample, and exposing these either simultaneously (many detectors are semi-transparent to hard X-rays) or at different times.
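One practical consequence of working at these photon energies is that Bragg angles are only a few degrees, which is why the diffraction spots can be collected on far-field area detectors placed behind the sample. The short numerical illustration below (the lattice spacing is an arbitrary example value) makes this concrete:

```python
# Bragg angle for hard X-rays: E [keV] * wavelength [angstrom] ~ 12.398.
import math

def two_theta_deg(energy_keV, d_spacing_angstrom):
    wavelength = 12.398 / energy_keV
    return 2 * math.degrees(math.asin(wavelength / (2 * d_spacing_angstrom)))

for energy in (30, 50, 100):   # the 30-100 keV range quoted above
    print(f"{energy} keV: 2-theta = {two_theta_deg(energy, 2.0):.2f} deg for d = 2.0 A")
```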
A 3DXRD microscope is installed at the Materials Science beamline of the ESRF.
Software
To determine the crystallographic orientation of the grains in the sample under consideration, the following software packages are in use: Fable and GrainSpotter. Reconstructing the 3D shape of the grains is nontrivial, and several approaches are available to do so, based respectively on simple back-projection, forward projection, the algebraic reconstruction technique and Monte Carlo method-based reconstruction.
Applications
With 3DXRD, it is possible to study in situ the time evolution of materials under different conditions. Among others, the technique has been used to map the elastic strains and stresses in a pre-strained nickel-titanium wire.
Related techniques
The scientists involved in developing 3DXRD contributed to the development of three other three-dimensional non-destructive techniques for the material sciences, respectively using electrons and neutrons as a probe: three-dimensional orientation mapping in the transmission electron microscope (3D-OMiTEM), time-of-flight 3D neutron diffraction for multigrain crystallography (ToF 3DND) and laue 3D neutron diffraction (Laue3DND).
Using a system of lenses, the synchrotron technique dark-field X-ray microscopy (DFXRM) extends the capabilities of 3DXRD, allowing a deeply embedded single grain to be brought into focus and its 3D structure and crystalline properties to be reconstructed. DFXRM is under development at the European Synchrotron Radiation Facility (ESRF), beamline ID06.
In a laboratory setting, 3D grain maps using X-rays as a probe can be obtained using laboratory diffraction contrast tomography (LabDCT), a technique derived from 3DXRD.
See also
Diffraction
X-ray diffraction computed tomography (XRD-CT)
Synchrotron
References
Microscopy
X-ray instrumentation | Three-dimensional X-ray diffraction | [
"Chemistry",
"Technology",
"Engineering"
] | 847 | [
"X-ray instrumentation",
"Measuring instruments",
"Microscopy"
] |
53,767,303 | https://en.wikipedia.org/wiki/Pseudo-response%20regulator | Pseudo-response regulator (PRR) refers to a group of genes that regulate the circadian oscillator in plants. There are four primary PRR proteins (PRR9, PRR7, PRR5 and TOC1/PRR1) that perform the majority of interactions with other proteins within the circadian oscillator, and another (PRR3) that has limited function. These genes are all paralogs of each other, and all repress the transcription of Circadian Clock Associated 1 (CCA1) and Late Elongated Hypocotyl (LHY) at various times throughout the day. The expression of PRR9, PRR7, PRR5 and TOC1/PRR1 peak around morning, mid-day, afternoon and evening, respectively. As a group, these genes are one part of the three-part repressilator system that governs the biological clock in plants.
Discovery
Multiple labs identified the PRR genes as parts of the circadian clock in the 1990s. In 2000, Akinori Matsushika, Seiya Makino, Masaya Kojima, and Takeshi Mizuno were the first to understand PRR genes as pseudo-response regulator genes rather than as response regulator (ARR) genes. The factor that distinguishes PRR from ARR genes is the lack of a phospho-accepting aspartate site that characterizes ARR proteins. Though their research that discovered PRR genes was primarily hailed during the early 2000s as informing the scientific community about the function of TOC1 (named APRR1 by the Mizuno lab), an additional pseudo-response regulator in the Arabidopsis thaliana biological clock, the information about PRR genes that Matsushika and his team found deepened scientific understanding of circadian clocks in plants and led other researchers to hypothesize about the purpose of the PRR genes. Though current research has identified TOC1, PRR3, PRR5, PRR7, and PRR9 as of importance to the A. thaliana circadian clock mechanism, Matsushika et al. first categorized PRR genes into two subgroups (APRR1 and APRR2, the A stands for Arabidopsis) due to two differing amino acid structures. The negative feedback loops including PRR genes, proposed by Mizuno, were incorporated into a complex repressilator circuit by Andrew Millar’s lab in 2012. The conception of the plant biological clock as made up of interacting negative feedback loops is unique in comparison to mammal and fungal circadian clocks, which contain autoregulatory negative feedback loops with positive and negative elements (see "Transcriptional and non-transcriptional control" on the Circadian clock page).
Function and Interactions
PRR3, PRR5, PRR7 and PRR9 participate in the repressilator of a negative autoregulatory feedback loop that synchronizes to environmental inputs. The repressilator has a morning, evening, and night loop that are regulated in part by the pseudo-response regulator proteins' interactions with CCA1 and LHY. CCA1 and LHY exhibit peak binding to PRR9, PRR7, and PRR5 in the morning, evening, and night, respectively.
PRR3 and PRR5
When phosphorylated by an unknown kinase, PRR5 and PRR3 proteins demonstrate increased binding to TIMING OF CAB2 EXPRESSION 1 (TOC1). This interaction stabilizes both TOC1 and PRR5 and prevents their degradation by the F-box protein ZEITLUPE (ZTL). Through this mechanism, PRR5 is indirectly activated by light, as ZTL is inhibited by light. Additionally, PRR5 contributes to the transcriptional repression of the genes encoding the single MYB transcription factors CCA1 and LHY.
PRR7 and PRR9
Two single MYB transcription factors, CCA1 and LHY, activate expression of PRR7 and PRR9. In turn, PRR7 and PRR9 repress CCA1 and LHY through the binding of their promoters. This interaction forms the morning loop of the repressilator of the biological clock in A. thaliana. Chromatin immunoprecipitation demonstrates that LUX binds to the PRR9 promoter to repress it. Additionally, ELF3 has been shown to activate PRR9 and repress CCA1 and LHY. PRR9 is also activated by alternative RNA splicing. When PRMT5 (a methylation factor) is prevented from methylating intron 2 of PRR9, a frameshift resulting in premature truncation occurs.
PRR7 and PRR9 also play a role in the entrainment of A. thaliana to a temperature cycle. Double-mutant plants with inactivated PRR7 and PRR9 exhibit extreme period lengthening at high temperatures but show no change in period at low temperatures. However, the inactivation of CCA1 and LHY in the PRR7/PRR9 loss-of-function mutants shows no change in period at high temperatures—this suggests that PRR7 and PRR9 are acting by overcompensation.
Interactions Within Arabidopsis
In A. thaliana, the main feedback loop is proposed to involve transcriptional regulation between several proteins. The three main components of this loop are TOC1 (also known as PRR1), CCA1 and LHY. Each individual component peaks in transcription at a different time of day. PRR 9, 7 and 5 each significantly reduce the transcription levels of CCA1 and LHY. In the opposite manner, PRR 9 and 7 slightly increase the transcription levels of TOC1. CONSTANS (CO) is also indirectly regulated by the PRR proteins, which set up the molecular mechanism that dictates the photosensitive period in the afternoon. PRRs are also known to stabilize CO at certain times of day to mediate its accumulation. This results in the regulation of early flowering in shorter photoperiods, making light sensitivity and control of flowering time important functions of the PRR class.
Homologs
Paralogs
PRR3, PRR5, PRR7, and PRR9 are all paralogs of each other. They have similar structure, and all repress the transcription of CCA1 and LHY. Additionally, they are all characterized by their lack of a phospho-accepting aspartate site. These genes are also paralogs to TOC1, which is alternatively called PRR1.
Orthologs
Several pseudo-response regulators have been found in Selaginella, but their function has not yet been explored.
Mutants
As PRR is a family of genes, several rounds of mutant screening have been performed to identify each possible phenotype.
Rhythmicity Phenotype
In regards to rhythmicity of the clock in a free running setting PRR9 and PRR5 are associated with longer and shorter periods respectively. For each gene, the double mutant with PRR7 exacerbates observed trends in rhythmicity. The triple mutant renders the plant arrhythmic.
Flowering Time Phenotype
In terms of flowering time under long-day conditions, all mutants delayed flowering, with the PRR7 mutant flowering significantly later than the others. All double mutants involving PRR7 flowered much later than the PRR5/PRR9 double mutant.
Light Sensitivity Phenotype
With regard to light sensitivity, particularly to red light, which is associated with hypocotyl lengthening, all PRR mutants were observed to be hyposensitive, with the PRR9 mutant less sensitive than the others. All the double mutants were as hyposensitive as the PRR5 or PRR7 single mutants; the triple mutant is extremely hyposensitive.
Future research
Recent research has showed that expression of clock genes show tissue-specificity. Learning about how, when, and why specific tissues show certain peaks in clock genes like PRR can reveal more about the subtle nuances of each gene within the repressilator.
Few investigations into the circadian oscillator mechanisms in species other than A. thaliana have taken place; learning which genes are responsible for clock functions in other species will give more insight into the similarities and differences in clocks across plant species.
The mechanistic details of each step in the plant biological clock repressilator system have yet to be fully understood. An understanding of these will give knowledge of clock function and, across species, increase understanding of the ecological and evolutionary functions of circadian oscillators.
Additionally, identifying direct targets of PRR5, PRR7 and PRR9 that are not CCA1 and LHY will provide information about the molecular links from the PRRs to output genes like the flowering pathway and metabolism in mitochondria, which are CCA1-independent.
See also
TOC1 (gene)
CCA1
Circadian Clock
References
Circadian rhythm
Genes
Molecular biology
Plant physiology | Pseudo-response regulator | [
"Chemistry",
"Biology"
] | 1,879 | [
"Plant physiology",
"Behavior",
"Plants",
"Circadian rhythm",
"Molecular biology",
"Biochemistry",
"Sleep"
] |
53,767,947 | https://en.wikipedia.org/wiki/2%2C2%27%2C2%27%27-Nitrilotriacetonitrile | Nitrilotriacetonitrile (NTAN) is a precursor for nitrilotriacetic acid (NTA, a biodegradable complexing agent and building block for detergents), for tris(2-aminoethyl)amine (a tripodal tetradentate chelating agent known under the abbreviation tren) and for the epoxy resin crosslinker aminoethylpiperazine.
Production
The synthesis of nitrilotriacetonitrile is based on the basic building blocks ammonia, formaldehyde and hydrogen cyanide, which are reacted (via the triple cyanomethylation of the ammonia) in acidic aqueous medium in discontinuous or continuous processes.
Ammonia is introduced as a gas, in form of hexamethylenetetramine or as ammonium sulfate together with formaldehyde as aqueous solution (usually 37% by weight) at pH values <2 and treated with aqueous prussic acid solution or liquid hydrogen cyanide at temperatures around 100 °C. Prussic acid is used directly from the Andrussow process or the BMA process of Evonik Degussa without pre-purification if necessary. When the mother liquors are returned, yields of more than 90% are achieved.
Problematic, particularly in the case of a continuous process, is the tendency of NTAN to precipitate at temperatures below 90 °C which can lead to clogging of tube reactors and conduits and thermal runaway of the reaction.
Properties
Nitrilotriacetonitrile is a colorless and odorless solid which dissolves hardly in water but dissolves well in nitromethane and acetone.
Use
Nitrilotriacetonitrile can be homopolymerized or copolymerized with iminodiacetonitrile in the melt in the presence of basic catalysts such as sodium methoxide to form dark-colored solid polymers which can be carbonized to form nitrogen-containing and electrically conductive polymers at temperatures above 1000 °C. The products obtained have not found application as conductive polymers.
The hydrogenation of NTAN first converts a cyano group into an imino group which attacks a cyano group (which are adjacent and sterically suitable for forming a six-membered ring) rather than being further hydrogenated to the primary amino group. The end product of the catalytic hydrogenation of nitrilotriacetonitrile is therefore 1-(2-aminoethyl)piperazine.
If the catalytic hydrogenation of NTAN is carried out with e. g. Raney nickel in the presence of a large excess of ammonia, it gives tris(2-aminoethyl)amine.
Tris(2-aminoethyl)amine is used as a tetrazident complexing agent (abbreviated as "tren"), which forms stable chelates, particularly with divalent and trivalent transition metal ions.
Nitrilotriacetonitrile reacts with methanal at pH 9.5 to give 2,2-dihydroxymethyl-nitrilotriacetonitrile, which is hydrolyzed with sodium hydroxide solution at 100 °C to give the trisodium salt of 2-hydroxymethylserine-N,N-diacetic acid, from which the free acid can be obtained by acidification in 51% yield.
The compound is suitable as a complexing agent for heavy metal ions or alkaline earth metal ions, as a stabilizer for bleaching agents (e.g. for sodium perborate, in solid detergent preparations) and as a builder in detergents for inhibiting the formation of incrustations in textiles during laundering.
The hydrolysis of nitrilotriacetonitrile with water in concentrated sulfuric acid yields under gentle conditions practically quantitatively nitrilotriacetamide, which has been investigated as a neutral tetradentate ligand for metal complexation. At elevated temperature, 3,5-dioxopiperazine-1-acetamide is formed by ring closure, which can be quantitatively converted into the nitrilotriacetamide after neutralization and heating with excess aqueous ammonia.
Nitrilotriacetonitrile serves mainly as a raw material for the production of the biodegradable, but carcinogen suspected complexing agent nitrilotriacetic acid by acid or base-catalyzed hydrolysis of the cyano groups.
Undesirable residual contents of cyanide ions in the hydrolyzate can be removed by post-treatment with oxidizing agents such as sodium hypochlorite at pH 8.
References
Amines
Nitriles | 2,2',2''-Nitrilotriacetonitrile | [
"Chemistry"
] | 998 | [
"Amines",
"Nitriles",
"Bases (chemistry)",
"Functional groups"
] |
42,260,413 | https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Nicolas%20number | In number theory, an Erdős–Nicolas number is a number that is not perfect, but that equals one of the partial sums of its divisors.
That is, a number is an Erdős–Nicolas number when there exists another number such that
The first ten Erdős–Nicolas numbers are
24, 2016, 8190, 42336, 45864, 392448, 714240, 1571328, 61900800 and 91963648. ()
They are named after Paul Erdős and Jean-Louis Nicolas, who wrote about them in 1975.
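The defining condition can be checked by brute force: the divisors of n, taken in increasing order, must have a partial sum equal to n before all of them are used, while n itself must not be perfect. A short illustrative Python sketch (not an efficient algorithm):

```python
# n is an Erdos-Nicolas number if some partial sum of its divisors
# (in increasing order, stopping before the last divisor) equals n,
# while n is not perfect (sum of all divisors != 2n).
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_erdos_nicolas(n):
    divs = divisors(n)
    if sum(divs) == 2 * n:        # exclude perfect numbers
        return False
    partial = 0
    for d in divs[:-1]:
        partial += d
        if partial == n:
            return True
    return False

print([n for n in range(2, 3000) if is_erdos_nicolas(n)])
# Expected from the list above: [24, 2016]; e.g. 1+2+3+4+6+8 = 24.
```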
See also
Descartes number, another type of almost-perfect numbers
References
Integer sequences | Erdős–Nicolas number | [
"Mathematics"
] | 140 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Number theory stubs",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
42,261,806 | https://en.wikipedia.org/wiki/Slug%20flow | In fluid mechanics, slug flow in liquid–gas two-phase flow is a type of flow pattern. Lighter, faster moving continuous fluid which contains gas bubbles - pushes along a disperse gas bubble. Pressure oscillations within piping can be caused by slug flow. The word slug usually refers to the heavier, slower moving fluid, but can also be used to refer to the bubbles of the lighter fluid.
This flow is characterised by the intermittent sequence of liquid slugs followed by longer gas bubbles flowing through a pipe. The flow regime is similar to plug flow, but the bubbles are larger and move at a greater velocity.
Examples
Production of hydrocarbons in wells and their transportation in pipelines.
Production of steam and water in geothermal power plants.
Boiling and condensation in liquid-vapor systems of thermal power plants;
Emergency core cooling of nuclear reactors.
Heat and mass transfer between gas and liquid in chemical reactors.
See also
Slip ratio (gas–liquid flow)
Slugcatcher
Plug flow
Two-phase flow
Multiphase flow
References
Fluid dynamics | Slug flow | [
"Chemistry",
"Engineering"
] | 213 | [
"Piping",
"Chemical engineering",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
42,263,535 | https://en.wikipedia.org/wiki/Ste5 | Ste5 is a MAPK scaffold protein involved in the mating of yeast. The active complex is formed by interactions with the MAPK Fus3, the MAPK kinase (MAPKK) Ste7, and the MAPKK kinase Ste11. After the induction of mating by an appropriate mating pheromone (either a-factor or α –factor) Ste5 and its associated proteins are recruited to the membrane.
While Ste4 helps to recruit Ste5, Ste4 is not required for the attachment of Ste5 to the membrane. Membrane association depends on a pleckstrin homology domain, as well as an amphipathic alpha-helical domain in the amino terminus.
During mating, the Fus3 MAPK and the Ptc1 phosphatase compete to control 4 phosphorylation sites on the Ste5 scaffold. When all 4 sites have been dephosphorylated by Ptc1, Fus3 is released and becomes active.
Ste5 plays 2 main roles in the mating signal pathway:
Binds the components of the MAPK cascade and holds them in an active complex
Associates with the membrane, bringing the kinases to the membrane and promoting amplification of the signal (concentrating the bound kinases). Ste5 remains stably bound at the plasma membrane.
Ste5 oligomerization is very important for stable membrane recruitment. In one model, the activation of the pathway occurs at the same time that Ste5 is converted from a less active, closed form of Ste5 to an active Ste5 dimer that can bind to the beta-gamma subunit of the heterotrimeric G-protein and form a lattice for the MAPK cascade to assemble on.
Not only does Ste5 contribute to propagation of the pheromone signal, but it is also involved in down regulating of signalling. It stimulates autophosphorylation of Fus3, which results in phosphorylation of Ste5, causing a downregulation in signalling.
Ste5 also catalytically unlocks Fus3 (but not its homologue Kss1) for phosphorylation by Ste7. Both this catalytically active Ste5 domain as well as Ste7 are required for full Fus3 activation, which explains why Fus3 is activated by only the mating pathway, and remains inactive during other pathways which also utilize Ste7.
Ste5 can be localized to the cytoplasm, mating projection tip, nucleus, and plasma membrane.
Biological Processes
Ste5 is involved in the following biological processes:
Invasive growth in response to glucose limitation
Negative regulation of the MAPK cascade
Pheromone-dependent signal transduction involved in conjugation with cellular fusion
Positive regulation of protein phosphorylation
Regulation of RNA-mediated transposition
References
Cell signaling | Ste5 | [
"Chemistry",
"Biology"
] | 574 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
42,264,469 | https://en.wikipedia.org/wiki/WHI3 | WHI3 is a developmental regulator in budding yeast. It influences cell size and the cell cycle by binding CLN3 mRNA and inhibiting its translation. This, in turn, inhibits the G1/S transition.
Function
WHI3 mediates many, often vital, processes such as the cell cycle, meiosis, filamentous growth and mating.
Regulation of the cell cycle is done by acting on the cyclin CLN3, a protein crucial to the G1/S transition in budding yeast. WHI3 acts by binding CLN3 mRNA, and then co-localizes, to form cytoplasmic foci. This locally restricts synthesis of the short-lived CLN3 protein, thus limiting its range. During G1, yeast has the ability to choose from a multitude of developmental options: meiosis, filamentation and mating. This is possible only when the cell arrests in G1, allowing it to continue down a different pathway.
It is also known that WHI3 directly interacts with Cdc28, and is needed to localize it to the cytoplasm during early G1. WHI3 forms a complex with the CLN3 protein, which is needed for the accumulation of Cdc28. In late G1, however, Cdc28 has been observed to localize to the nucleus.
Another, more recently discovered function of WHI3 is encoding for memory in budding yeast cells. Budding yeast is capable of both sexual and asexual reproduction. During sexual reproduction, two yeast cells signal their presence by diffusing pheromones, and it has been shown that when a cell is exposed to mating pheromones but does not perform mating, it "remembers" the event and is less likely to undergo mating afterwards. When exposed to pheromones, yeast will undergo cell-cycle arrest and attempt to mate, however, within the first three hours it will escape the arrest, and the previously inhibited CLN3 will resume activity. The WHI3 protein then aggregate and form a super-assembly, which is inactive and partially insoluble. This then forces the cell to continue with budding, since it is now conditioned against cell-cycle arrest. The daughter cells obtained from this budding, however, are not conditioned against mating, unlike the mother: the WHI3 aggregates have been shown to localize within the mother cell. This results in a mother cell retaining the memory of the previous encounter over multiple generations, while the new daughter cells are still responsive to mating cues.
Structure
Using the WHI3 sequence, the protein is predicted to have a mass of 71,257 Da, an isoelectric point of 8.65, and a codon bias of 0.13.
It also has been shown to have an RNA binding motif, similar to RNP-1 and RNP-2.
Its Cdc28-recruitment region has been shown to be on its N-terminal, spanning amino acids 121–220.
References
Proteins | WHI3 | [
"Chemistry"
] | 622 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
42,268,134 | https://en.wikipedia.org/wiki/Calcium%20looping | Calcium looping (CaL), or the regenerative calcium cycle (RCC), is a second-generation carbon capture technology. It is the most developed form of carbonate looping, where a metal (M) is reversibly reacted between its carbonate form (MCO3) and its oxide form (MO) to separate carbon dioxide from other gases coming from either power generation or an industrial plant. In the calcium looping process, the two species are calcium carbonate (CaCO3) and calcium oxide (CaO). The captured carbon dioxide can then be transported to a storage site, used in enhanced oil recovery or used as a chemical feedstock. Calcium oxide is often referred to as the sorbent.
Calcium looping is being developed as it is a more efficient, less toxic alternative to current post-combustion capture processes such as amine scrubbing. It also has interesting potential for integration with the cement industry.
Basic concept
CaCO3 ←→ CaO + CO2 ΔH = +178 kJ/mol
There are two main steps in CaL:
Calcination: Solid calcium carbonate is fed into a calciner, where it is heated to 850-950 °C to cause it to thermally decompose into gaseous carbon dioxide and solid calcium oxide (CaO). The almost-pure stream of CO2 is then removed and purified so that it is suitable for storage or use. This is the 'forward' reaction in the equation above.
Carbonation: The solid CaO is removed from the calciner and fed into the carbonator. It is cooled to approximately 650 °C and is brought into contact with a flue gas containing a low to medium concentration of CO2. The CaO and CO2 react to form CaCO3, thus reducing the CO2 concentration in the flue gas to a level suitable for emission to the atmosphere. This is the 'backward' reaction in the equation above.
Note that carbonation is calcination in reverse.
Whilst the process can be theoretically performed an infinite number of times, the calcium oxide sorbent degrades as it is cycled. For this reason, it is necessary to remove (purge) some of the sorbent from the system and replace it with fresh sorbent (often in the carbonate form). The size of the purge stream compared with the amount of sorbent going round the cycle affects the process considerably.
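The reaction enthalpy quoted above translates into a substantial heat duty for the calciner. The back-of-the-envelope sketch below converts it into heat per tonne of CO2 released; it deliberately ignores the additional sensible heat needed to bring fresh limestone and recycled sorbent up to calcination temperature, so it is a lower bound rather than a plant figure.

```python
# Minimum reaction heat of calcination per tonne of CO2, from dH = +178 kJ/mol.
dH_kJ_per_mol = 178.0        # calcination enthalpy (value given in the text)
M_CO2_g_per_mol = 44.01      # molar mass of CO2

mol_CO2_per_tonne = 1.0e6 / M_CO2_g_per_mol
heat_GJ_per_tonne = dH_kJ_per_mol * mol_CO2_per_tonne / 1.0e6

print(f"~{heat_GJ_per_tonne:.1f} GJ of reaction heat per tonne of CO2 released")
# ~4.0 GJ/t CO2, before solids heating and heat losses are accounted for.
```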
Background
In the Ca-looping process, a CaO-based sorbent, typically derived from limestone, reacts via the reversible reaction described in the equation above and is repeatedly cycled between two vessels.
The forward, endothermic step is called calcination while the backward, exothermic step is carbonation.
A typical Ca-looping process for post-combustion CO2 capture is shown in Figure 1, followed by a more detailed description.
Flue gas containing CO2 is fed to the first vessel (the carbonator), where carbonation occurs. The CaCO3 formed is passed to another vessel (the calciner). Calcination occurs at this stage, and the regenerated CaO is quickly passed back to the carbonator, leaving a pure CO2 stream behind. As this cycle continues, CaO sorbent is constantly replaced by fresh (reactive) sorbent. The highly concentrated CO2 from the calciner is suitable for sequestration, and the spent CaO has potential uses elsewhere, most notably in the cement industry. The heat necessary for calcination can be provided by oxy-combustion of coal below.
Oxy-combustion of coal: Pure oxygen rather than air is used for combustion, eliminating the large amount of nitrogen in the flue-gas stream. After particulate matter is removed, flue gas consists only of water vapor and CO2, plus smaller amounts of other pollutants. After compression of the flue gas to remove water vapor and additional removal of air pollutants, a nearly pure CO2 stream suitable for storage is produced.
The carbonator's operating temperature of 650-700 °C is chosen as a compromise between the higher equilibrium (maximum) capture achievable at lower temperatures, owing to the exothermic nature of the carbonation step, and the reaction rate, which decreases as the temperature is lowered. Similarly, the temperature of >850 °C in the calciner strikes a balance between the increased rate of calcination at higher temperatures and the reduced rate of degradation of the CaO sorbent at lower temperatures.
Process description
CaL is usually designed using a dual fluidised bed system to ensure sufficient contact between the gas streams and the sorbent. The calciner and carbonator are fluidised beds with associated process equipment for separating the gases and solids attached (such as cyclones). Calcination is an endothermic process and as such requires the application of heat to the calciner. The opposite reaction, carbonation, is exothermic and heat must be removed. Since the exothermic reaction happens at about 650 °C and the endothermic reaction at 850-950 °C, the heat from the carbonator cannot be directly used to heat the calciner.
The fluidisation of the solid bed in the carbonator is achieved by passing the flue gas through the bed. In the calciner, some of the recovered CO2 is recycled through the system. Some oxygen may also be passed through the reactor if fuel is being burned in the calciner to provide energy.
Provision of energy to the calciner
Heat can be provided for the endothermic calcination step either directly or indirectly.
Direct provision of heat involves the combustion of fuel in the calciner itself (fluidised bed combustion). This is generally assumed to be done under oxy-fuel conditions; i.e. oxygen rather than air is used to burn the fuel to prevent dilution of the CO2 with nitrogen. The provision of oxygen for the combustion uses much electricity and is associated with high investment costs. Other air separation processes are being developed.
The penalties of calcium looping may be reduced by providing the heat for the calcination indirectly. This can be done in one of the following ways:
Combustion of fuel in an external chamber and conduction of energy in to the vessel
Combustion of fuel in an external chamber and use of a heat transfer medium.
Indirect methods are generally less efficient but do not require the provision of oxygen for combustion within the calciner to prevent dilution. The flue gas from the combustion of fuel in the indirect method could be mixed with the flue gas from the process that the CaL plant is attached to and passed through the carbonator to capture the CO2.
One efficient way of transferring heat into the calciner is by means of heat pipes. The indirectly heated calcium looping (IHCaL) process using heat pipes has high potential to decarbonize the lime and cement industry. Deploying this technology with refuse-derived fuels would make it possible to achieve net negative CO2 emissions.
Recovery of energy from the carbonator
Although the heat from the carbonator is not at a high enough temperature to be used in the calciner, the high temperatures involved (>600 °C) mean that a relatively efficient Rankine cycle for generating electricity can be operated.
Note that the waste heat from the market-leading amine scrubbing CO2 capture process is emitted at a maximum of 150 °C. The low temperature of this heat means that it contains much less exergy and can generate much less electricity through a Rankine or organic Rankine cycle.
This electricity generation is one of the main benefits of CaL over lower-temperature post-combustion capture processes as the electricity is an extra revenue stream (or reduces costs).
Sorbent degradation
It has been shown that the activity of the sorbent reduces quite markedly in laboratory, bench-scale and pilot plant tests. This degradation has been attributed to three main mechanisms, as shown below.
Attrition
Calcium oxide is friable, that is, quite brittle. In fluidised beds, the calcium oxide particles can break apart upon collision with the other particles in the fluidised bed or the vessel containing it. The problem seems to be greater in pilot plant tests than at a bench scale.
Sulfation
Sulfation is a relatively slow reaction (several hours) compared with carbonation (<10 minutes); thus it is more likely that SO2 will come into contact with CaCO3 than CaO. However, both reactions are possible, and are shown below.
Indirect sulfation: CaO + SO2 + 1/2 O2 → CaSO4
Direct sulfation: CaCO3 + SO2 + 1/2 O2 → CaSO4 + CO2
Because calcium sulfate has a greater molar volume than either CaO or CaCO3, a sulfated layer will form on the outside of the particle, which can prevent the uptake of CO2 by the CaO further inside the particle. Furthermore, the temperature at which calcium sulfate dissociates to CaO and SO2 is relatively high, precluding sulfation's reversibility at the conditions present in CaL.
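The molar-volume argument can be checked with a short calculation. The following Python sketch uses typical handbook densities (calcite for CaCO3, anhydrite for CaSO4); these values are assumptions made for illustration and are not quoted in this article.

# Molar volume comparison behind the pore-blocking argument (illustrative sketch).
# Molar masses in g/mol; densities in g/cm^3 are assumed handbook values.
molar_mass = {"CaO": 56.08, "CaCO3": 100.09, "CaSO4": 136.14}
density = {"CaO": 3.34, "CaCO3": 2.71, "CaSO4": 2.96}

for species in molar_mass:
    volume = molar_mass[species] / density[species]  # cm^3 per mol
    print(f"{species}: molar volume ~ {volume:.1f} cm^3/mol")

# Result: CaO (~16.8) < CaCO3 (~36.9) < CaSO4 (~46.0) cm^3/mol, so a CaSO4
# shell takes up the most space and can seal pores against further CO2 uptake.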
Technical implications
Calcium looping technology offers several technical advantages over amine scrubbing for carbon capture. Firstly, both the carbonator and the calciner can use fluidized bed technology, which provides good gas-solid contacting and a uniform bed temperature. Fluidized bed technology has already been demonstrated at large scale: large (460 MWe) atmospheric and pressurized systems exist, and there is no need for the intensive scale-up that would be required for the solvent scrubbing towers used in amine scrubbing.
Also, the calcium looping process is energy efficient. The heat required for the endothermic calcination of CaCO3, and the heat required to raise the temperature of fresh limestone from ambient temperature, can be provided by in-situ oxy-fired combustion of fuel in the calciner. Although additional energy is required to separate O2 from N2, the majority of the energy input can be recovered because the carbonator reaction is exothermic and the hot CO2 leaving the calciner can be used to power a steam cycle. A solid purge heat exchanger can also be utilized to recover energy from the deactivated CaO and coal ashes leaving the calciner. As a result, a relatively small efficiency penalty is imposed on the power process, where the efficiency penalty refers to the power losses for CO2 compression, air separation and steam generation. It is estimated at 6-8 percentage points, compared with 9.5-12.5 percentage points for post-combustion amine capture.
The main shortcoming of Ca-looping technology is the decreased reactivity of CaO through multiple calcination-carbonation cycles. This can be attributed to sintering and the permanent closure of small pores during carbonation.
Closure of small pores
The carbonation step is characterized by a fast initial reaction rate abruptly followed by a slow reaction rate (Figure 2). The carrying capacity of the sorbent is defined as the number of moles of CO2 reacted in the period of fast reaction rate with respect to the stoichiometric amount for complete conversion of CaO to CaCO3. As seen in Figure 2, while the mass after calcination remains constant, the mass change upon carbonation (the carrying capacity) decreases as the number of cycles increases. In calcination, porous CaO, which has a smaller molar volume, is formed in place of CaCO3. In carbonation, on the other hand, the CaCO3 formed on the surface of a CaO particle occupies a larger molar volume. As a result, once a layer of carbonate has formed on the surface (including on the large internal surface of porous CaO), it impedes further CO2 capture. This product layer grows over the pores and seals them off, forcing carbonation to follow a slower, diffusion-dependent mechanism.
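The decline in carrying capacity with cycle number is often described with a semi-empirical decay law of the type proposed by Grasa and Abanades. The Python sketch below uses that functional form with typical parameter values; both the form and the numbers are assumptions taken from the general calcium-looping literature rather than from this article.

# Illustrative sorbent carrying-capacity decay (semi-empirical form; assumed parameters).
def carrying_capacity(n_cycles, k=0.52, x_residual=0.075):
    """Fractional CaO conversion in the fast-reaction period after n cycles."""
    return 1.0 / (1.0 / (1.0 - x_residual) + k * n_cycles) + x_residual

for n in (0, 1, 5, 10, 20, 50, 100):
    print(f"cycle {n:3d}: carrying capacity ~ {carrying_capacity(n):.2f}")

# The capacity falls from ~1.0 towards a small residual value (~0.08 here),
# which is why a continuous purge of spent sorbent and make-up of fresh
# limestone is required in practice.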
Sintering
CaO is also prone to sintering, that is, changes in pore shape, shrinkage and grain growth during heating. Ionic compounds such as CaO mostly sinter through volume diffusion or lattice diffusion mechanisms. As described by sintering theory, vacancies generated by temperature-sensitive defects transfer void volume from smaller pores to larger ones, explaining the observed growth of large pores and the shrinkage of small pores in cycled limestone. It was found that sintering of CaO increases at higher temperatures and longer calcination durations, whereas carbonation time has minimal effect on particle sintering. A sharp increase in sintering of particles is observed at temperatures above 1173 K, causing a reduction in reactive surface area and a corresponding decrease in reactivity.
Solutions: Several options to reduce sorbent deactivation are currently being researched. An ideal sorbent would be mechanically strong, maintain its reactive surface through repeated cycles, and be reasonably inexpensive. Using thermally pre-activated particles or reactivating spent sorbents through hydration are two promising options. Thermally pre-activated particles have been found to retain activity for up to a thousand cycles. Similarly, particles reactivated by hydration show improved long-term (after ~20 cycles) conversions.
Disposal of waste sorbent
Properties of waste sorbent
After cycling several times and being removed from the calcium loop, the waste sorbent will have attrited, sulfated and become mixed with the ash from any fuel used. The amount of ash in the waste sorbent will depend on the fraction of the sorbent being removed and the ash and calorific content of the fuel. The size fraction of the sorbent is dependent on the original size fraction but also the number of cycles used and the type of limestone used.
Disposal routes
Proposed disposal routes of waste sorbent include:
Landfill;
Disposal at sea;
Use in cement manufacture;
Use in flue gas desulfurisation (FGD).
The lifecycle CO2 emissions for power generation with CaL and the first three disposal techniques have been calculated. Before disposal of the CaO, coal power with CaL has a similar level of lifecycle emissions to amine scrubbing, but once the CO2-absorbing properties of the disposed CaO are taken into account, CaL becomes significantly less polluting. Ocean disposal was found to be the best option, but current laws relating to dumping waste at sea prevent this. The next best option was use in cement manufacture, reducing emissions relative to an unabated coal plant by 93%.
Use in lime and cement manufacture
The manufacture of lime and cement is responsible for approximately 8% of the world's CO2 emissions. Around 65% of this CO2 comes from the calcination of calcium carbonate as shown earlier in this article, and the rest from fuel combustion. By replacing some or all of the calcium carbonate entering the plant with waste calcium oxide, the CO2 caused from calcination can be avoided, as well as some of the CO2 from fossil fuel combustion.
This calcium oxide could be sourced from other point sources of CO2 such as power stations, but most effort has been focussed on integrating calcium looping with Portland cement manufacture. By replacing the calciner in the cement plant with a calcium looping plant, it should be possible to capture 90% or more of the CO2 relatively inexpensively. There are alternative set-ups such as placing the calcium looping plant in the preheater section so as to make the plant as efficient as possible or to indirectly heat the calciner for increased energy efficiency.
Some work has been undertaken into whether calcium looping affects the quality of the Portland cement produced, but results so far seem to suggest that the production of strength-giving phases such as alite is similar for calcium looped and non-calcium looped cement.
Direct Separation Technology
Calix Ltd has developed a new type of kiln that enables the CO2 from the calcination process to be driven off as a pure stream. Calix achieves this by calcining finely ground CaCO3 continuously down vertical reactor tubes. The reactor tubes are heated from the outside using electricity or fuel, ensuring the CO2 stream is pure and not contaminated with air or combustion products.
This technology has been successfully piloted in Europe by a cooperative industry group with support from the European Union as the Low Emission Intensity Lime And Cement (LEILAC1) reactor project. The study report concluded that the technology could capture CO2 from full-scale lime and cement kilns at €14 to €24/t. Transport and storage costs are not included in this estimate and will depend upon the infrastructure available near the cement or lime plant.
A FEED study is underway for a larger commercial demonstration kiln proposed for the Heidelberg Cement plant in Hannover (LEILAC2). This commercial demonstration kiln is designed to capture 100 ktpa of CO2. LEILAC2 passed its Financial Investment Decision (FID) in March 2022, and its Front End Engineering Design (FEED) Study Summary was completed and published on 13 October 2023, leading to a new and improved design and revised timeline. The next milestone is procurement of long lead items, currently underway (2023).
This type of kiln is also being studied as a potential method to decarbonise shipping through both looping and single-use processes. The single-use process would involve sowing CaCO3 over the ocean, thereby permanently capturing additional carbon from the ocean as the CaCO3 reacts to form Ca(HCO3)2, and reversing ocean acidification.
Economic implications
Calcium looping has several economic advantages.
Cost per metric ton for CO2 captured
Firstly, Ca-looping offers a cost advantage over conventional amine-scrubbing technologies. The cost per metric ton of CO2 captured through Ca-looping is ~$23.70, whereas that for CO2 captured through amine scrubbing is about $35–$96. This can be attributed to the high availability and low cost of the CaO sorbent (derived from limestone) as compared to MEA. Also, Ca-looping imposes a lower energy penalty than amine scrubbing, resulting in lower energy costs. The amine scrubbing process is energy intensive, with approximately 67% of the operating costs going into steam requirements for solvent regeneration. A more detailed comparison of Ca-looping and amine scrubbing is shown below.
Cost of CO2 emissions avoided through Ca-looping
In addition, the cost of CO2 emissions avoided through Ca-looping is lower than the cost of emissions avoided via an oxyfuel combustion process (~US$23.8/t). This can be explained by the fact that, despite the capital costs incurred in constructing the carbonator for Ca-looping, CO2 will not only be captured from the oxy-fired combustion, but also from the main combustor (before the carbonator). The oxygen required in the calciners is only 1/3 that required for an oxyfuel process, lowering air separation unit capital costs and operating costs.
Sensitivity Analysis: Figure 3 shows how varying 8 separate parameters affects the cost per metric ton of CO2 captured through Ca-looping. It is evident that the dominant variables that affect cost are related to sorbent use, the Ca/C ratio and the CaO deactivation ratio. This is because the large sorbent quantities required dominate the economics of the capture process. Low costs of CO2 avoided for the indirectly heated Ca-looping process have been reported for integrated concepts in lime production.
These variables should therefore be taken into account to achieve further cost reductions in the Ca-looping process. The cost of limestone is largely driven by market forces, and is outside the control of the plant. Currently, carbonators require a Ca/C ratio of 4 for effective CO2 capture. However, if the Ca/C ratio or CaO deactivation is reduced (i.e. the sorbent can be made to work more efficiently), the reduction in material consumption and waste can lower feedstock demand and operating costs.
Cement production
Finally, favorable economics can be achieved by using the purged material from the calcium looping cycle in cement production. The raw feed for cement production includes ~85 wt% limestone, with the remaining material consisting of clay and additives (e.g. SiO2, Al2O3 etc.). The first step in the process involves calcining limestone to produce CaO, which is then mixed with other materials in a kiln to produce clinker.
Using purged material from a Ca-looping system would reduce the raw material costs for cement production. Waste CaO and ash can be used in place of CaCO3 (the main constituent of the cement feed). The ash could also fulfill the aluminosilicate requirements otherwise supplied by additives. Since over 60% of the energy used in cement production goes into heat input for the precalciner, this integration with Ca-looping and the consequent reduced need for a calcination step could lead to substantial energy savings (EU, 2001). However, there are problems with using the waste CaO in cement manufacture. If the technology is applied on a large scale, the purge rate of CaO should be optimized to minimize waste.
Political and environmental implications
To fully gauge the viability of calcium looping as a capture process, it is necessary to consider the political, environmental, and health effects of the process as well.
Political implications
Though many recent scientific reports (e.g. the seven-wedge stabilization plan by Pacala and Socolow) convey an urgent need to deploy CCS, this urgency has not spread to the political establishment, mainly due to the high costs and energy penalty of CCS. The economics of calcium looping are integral to its political viability. One economic and political advantage is the ability for Ca-looping to be retrofitted onto existing power plants, rather than requiring new plants to be built. The IEA sees power plants as an important target for carbon capture, and has set the goal of having all fossil fuel based power plants deploy CCS systems by 2040. However, power plants are expensive to build, and long lived. Retrofitting of post-combustion capture systems, such as Ca-looping, seems to be the only politically and economically viable way to achieve the IEA's goal.
A further political advantage is the potential synergy between calcium looping and cement production. An IEA report concludes that to meet emission reduction goals, there should be 450 CCS projects in India and China by 2050. However, this could be politically difficult, especially with these nations' numerous other development goals. After all, for a politician to commit money to CCS might be less advantageous than to commit it to job schemes or agricultural subsidies. Here, the integration of calcium looping with the prosperous and (particularly with infrastructure expansion in the developing world) vital cement industry might prove compelling to the political establishment.
This potential synergy with the cement industry also provides environmental benefits by simultaneously reducing the waste output of the looping process and decarbonizing cement production. Cement manufacture is energy and resource intensive, consuming 1.5 tonnes of material per tonne of cement produced. In the developing world, economic growth will drive infrastructure growth, increasing cement demand. Deploying a waste product for cement production could therefore have a large, positive environmental impact.
Environmental implications
The starting material for calcium looping is limestone, which is environmentally benign and widely available, accounting for over 10% (by volume) of all sedimentary rock. Limestone is already mined and cheaply obtainable. The mining process has no major known adverse environmental effects, beyond the unavoidable intrusiveness of any mining operation. However, as the following calculation shows, despite integration with the cement industry, waste from Ca-looping can still be a problem.
From the environmental and health standpoint, Ca-looping compares favorably with amine scrubbing. Amine scrubbing is known to generate air pollutants, including amines and ammonia, which can react to form carcinogenic nitrosamines. Calcium looping, on the other hand, does not produce harmful pollutants. In addition, not only does it capture CO2, but it also removes the pollutant SO2 from the flue gas. This is both an advantage and disadvantage, as the air quality improves, but the captured SO2 has a detrimental effect on the cement that is generated from the calcium looping wastes.
Advantages and drawbacks
Advantages of the process
Calcium looping is considered a promising option for reducing the energy penalty of CO2 capture, and the method offers many advantages. Firstly, it has been shown to impose a low efficiency penalty (5-8 percentage points), whereas other mature CO2 capture systems impose higher penalties (8-12.5 percentage points). Moreover, the method is well suited to a wide range of flue gases. Calcium looping is applicable to new builds and to retrofits of existing power stations or other stationary industrial CO2 sources, because it can be implemented using large-scale circulating fluidized beds, whereas methods such as amine scrubbing would require solvent scrubbing towers to be scaled up vastly. In addition, the crushed limestone used as the sorbent in calcium looping is a natural product that is well distributed all over the world, non-hazardous and inexpensive. Many cement manufacturers or power plants located close to limestone sources could conceivably employ calcium looping for CO2 capture. The waste sorbent can be used in cement manufacture.
Drawbacks
Apart from these advantages, there are several disadvantages that need to be taken into consideration. A plant integrating Ca-looping may require a high construction investment because of the high thermal power of the post-combustion calcium loop. The sorbent capacity decreases significantly with each carbonation-calcination cycle, so the calcium-looping unit requires a constant make-up flow of limestone. To increase the long-term reactivity of the sorbent or to reactivate it, methods such as thermal pretreatment, chemical doping and the production of artificial sorbents are under investigation. The method applies the concept of the fluidized bed reactor, but some problems introduce uncertainty into the process. Attrition of the limestone can be a problem during repeated cycling.
Benefits of calcium looping compared with other post-combustion capture processes
Calcium looping compares favorably with several post-combustion capture technologies. Amine scrubbing is the capture technology closest to being market-ready, and calcium looping has several marked benefits over it. When modeled on a 580 MW coal-fired power plant, calcium looping showed not only a smaller efficiency penalty (6.7-7.9 percentage points compared to 9.5 for monoethanolamine and 9 for chilled ammonia) but also a less complex retrofitting process. Both technologies would require the plant to be retrofitted for adoption, but the calcium looping retrofit would result in twice the net power output of the scrubbing technology. Furthermore, this advantage can be compounded by introducing technology such as cryogenic O2 separation systems. This increases the efficiency of the calcium looping technology by increasing the energy density by 57.4%, making the already low energy penalties even less of an issue.
Calcium looping already has an energy advantage over amine scrubbing, but the main problem is that amine scrubbing is the more market-ready technology. However, the accompanying infrastructure for amine scrubbing includes large solvent scrubbing towers, the likes of which have never been used on an industrial scale. The accompanying infrastructure for calcium looping consists of circulating fluidized beds, which have already been implemented on an industrial scale. Although the individual technologies differ in terms of current technological viability, the fact that the infrastructure needed to properly implement an amine scrubbing system has yet to be developed keeps calcium looping competitive from a viability standpoint.
Sample evaluation
Assumptions
For a Ca-looping cycle installed on a 500 MW power plant, the purge rate is 12.6 kg CaO/s.
For the cement production process, 0.65 kg CaO is required/ kg cement produced.
U.S. electric generation capacity (only fossil fuels): Natural gas = 415 GW, Coal = 318 GW & Petroleum = 51 GW
Cement consumption in U.S. = 110,470 × 10^3 metric tons = 1.10470 × 10^8 metric tons = 1.10470 × 10^11 kg.
Calculations
For a single Ca-looping cycle installed on a 500 MW power plant:
Amount of CaO from purge annually = 12.6 kg CaO/s × 365 days/year × 24 hr/day × 3600 s/hr = 3.97 × 10^8 kg CaO/year
Cement that can be obtained from purge annually = 3.97 × 10^8 kg CaO/year × 1 kg cement/0.65 kg CaO = 6.11 × 10^8 kg cement/year
Net electricity generation in US: (415 + 318 + 51) GW = 784 GW = 7.84 × 10^11 W
Number of 500 MW power plants: 7.84 × 10^11 W / 5.00 × 10^8 W = 1568 power plants
Amount of cement that can be produced from Ca-looping waste: 1568 × 6.11 × 10^8 kg cement/year = 9.58 × 10^11 kg cement/year
Production from Ca-looping waste as percent of total annual cement consumption = [(9.58 × 10^11 kg)/(1.10470 × 10^11 kg)] × 100 ≈ 870%
Therefore, the amount of cement that could be produced from the Ca-looping waste of all fossil fuel based electric power plants in the US would be far greater than net consumption. To make Ca-looping more viable, waste must be minimized (i.e. sorbent degradation reduced), ideally to about 1/10th of current levels.
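The arithmetic above can be reproduced with a short Python sketch that uses only the assumptions listed in this evaluation:

# Reproduces the sample evaluation above (all inputs are the stated assumptions).
purge_rate = 12.6                                  # kg CaO/s for a 500 MW plant
cao_per_kg_cement = 0.65                           # kg CaO needed per kg cement
us_fossil_capacity_w = (415 + 318 + 51) * 1e9      # W (natural gas + coal + petroleum)
plant_size_w = 500e6                               # W
us_cement_consumption_kg = 110470e3 * 1000         # metric tons -> kg

cao_per_year = purge_rate * 365 * 24 * 3600            # ~3.97e8 kg CaO/year per plant
cement_per_plant = cao_per_year / cao_per_kg_cement    # ~6.11e8 kg cement/year per plant
n_plants = us_fossil_capacity_w / plant_size_w         # 1568 equivalent plants
total_cement = n_plants * cement_per_plant             # ~9.6e11 kg cement/year
print(f"{100 * total_cement / us_cement_consumption_kg:.0f}% of US cement consumption")
# Prints roughly 870%, matching the figure quoted above.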
References
Carbon capture and storage
Chemical engineering
Chemical processes | Calcium looping | [
"Chemistry",
"Engineering"
] | 6,125 | [
"Chemical engineering",
"Geoengineering",
"Chemical processes",
"nan",
"Chemical process engineering",
"Carbon capture and storage"
] |
42,270,740 | https://en.wikipedia.org/wiki/Vicenistatin | Vicenistatin is a macrolactam antibiotic synthesized by Streptomyces halstedii HC34. It was originally isolated from this bacterium in 1993. It includes the unusual starter unit methylaspartate.
References
Macrolide antibiotics | Vicenistatin | [
"Chemistry"
] | 51 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
49,886,880 | https://en.wikipedia.org/wiki/Sakata%20model | In particle physics, the Sakata model of hadrons was a precursor to the quark model. It proposed that the proton, neutron, and Lambda baryon were elementary particles (sometimes referred to as sakatons), and that all other known hadrons were made of them. The model was proposed by Shoichi Sakata in 1956. The model was successful in explaining many features of hadrons, but was supplanted by the quark model as the understanding of hadrons progressed.
Overview
The success of the Sakata model is due to the fact that there is a correspondence between the proton, neutron, and Lambda baryon, and the up, down, and strange quarks. The proton contains two up quarks and a down quark, the neutron contains one up quark and two down quarks, while the Lambda baryon contains one up quark, one down quark, and one strange quark. That is, each of these baryons is made of one up and one down quark, and an additional quark: up for the proton, down for the neutron, and strange for the Lambda baryon. Because of this correspondence to the up, down, and strange quarks, the Sakata model has the same SU(3) symmetry as the quark model, and can reproduce the flavour quantum numbers of all hadrons made of up, down and strange quarks. Because the charm quark was not discovered until 1974, the Sakata model remained a staple of particle physics for some time after the quark model had been proposed.
See also
Eightfold Way
References
Hadrons | Sakata model | [
"Physics"
] | 334 | [
"Hadrons",
"Subatomic particles",
"Matter"
] |
49,895,047 | https://en.wikipedia.org/wiki/Amplitude%20integrated%20electroencephalography | Amplitude integrated electroencephalography (aEEG), cerebral function monitoring (CFM) or continuous electroencephalogram (CEEG) is a technique for monitoring brain function in intensive care settings over longer periods of time than the traditional electroencephalogram (EEG), typically hours to days. By placing electrodes on the scalp of the patient, a trace of electrical activity is produced which is then displayed on a semilogarithmic graph of peak-to-peak amplitude over time; amplitude is logarithmic and time is linear. In this way, trends in electrical activity in the cerebral cortex can be interpreted to inform on events such as seizures or suppressed brain activity. aEEG is useful especially in neonatology where it can be used to aid in diagnosis of hypoxic ischemic encephalopathy (HIE), and to monitor and diagnose seizure activity.
Interpretation of the aEEG
The CFM readout offers an integrated trace in one pane and a non-integrated trace in another pane (see image). Modern machines give a readout for each hemisphere corresponding to the positions of electrodes placed on the patient's head. The characteristics of the CFM include the 'baseline' which should be more than 5 μV, the upper limit of the trace which should be more than 10 μV, and the presence of 'sleep wake cycling' whereby the trace is expected to narrow and broaden over time. Seizures appear on the trace as regions of high activity with a raised and compacted trace in the aEEG pane; this would correspond to high-amplitude, repetitive waveforms in the non-integrated pane. A low-amplitude or 'suppressed' trace is prognostically concerning as it indicates abnormally low brain activity. A further possible pattern is a 'burst suppression' trace which consists of a low-amplitude signal interspersed with periods of high activity on the aEEG readout. This also carries a poor prognosis.
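As a rough illustration of how such a trace is produced, the following Python sketch converts a raw EEG signal into an aEEG-style amplitude envelope (band-pass filtering, rectification, smoothing, then display on a semilogarithmic amplitude axis). The filter band, filter order and smoothing window are illustrative assumptions, not a clinical specification; real CFM devices use more elaborate asymmetric filtering.

import numpy as np
from scipy.signal import butter, filtfilt

def aeeg_envelope(raw_uv, fs, band=(2.0, 15.0), window_s=15.0):
    """Return a smoothed amplitude envelope (in microvolts) of a raw EEG trace."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw_uv)        # band-pass filter
    rectified = np.abs(filtered)             # rectification
    n = int(window_s * fs)
    return np.convolve(rectified, np.ones(n) / n, mode="same")  # moving-average smoothing

# Example with synthetic data: 10 minutes of noise-like EEG sampled at 256 Hz.
fs = 256
raw = 20.0 * np.random.randn(10 * 60 * fs)   # amplitude in microvolts
envelope = aeeg_envelope(raw, fs)
# For display, the envelope is plotted against linear time on an amplitude axis
# that is linear at low amplitudes and logarithmic above (semilogarithmic trace).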
See also
Electroencephalogram (EEG)
Bispectral index
Epileptic seizure
Hypoxic ischaemic encephalopathy
Therapeutic hypothermia
Pressure reactivity index
Neonatal seizure
References
Electrophysiology
Neurophysiology
Neurotechnology
Electrodiagnosis
Mathematics in medicine | Amplitude integrated electroencephalography | [
"Mathematics"
] | 476 | [
"Applied mathematics",
"Mathematics in medicine"
] |
49,905,070 | https://en.wikipedia.org/wiki/Radioactive%20source | A radioactive source is a known quantity of a radionuclide which emits ionizing radiation, typically one or more of the radiation types gamma rays, alpha particles, beta particles, and neutron radiation.
Sources can be used for irradiation, where the radiation performs a significant ionising function on a target material, or as a radiation metrology source, which is used for the calibration of radiometric process and radiation protection instrumentation. They are also used for industrial process measurements, such as thickness gauging in the paper and steel industries. Sources can be sealed in a container (highly penetrating radiation) or deposited on a surface (weakly penetrating radiation), or they can be in a fluid.
As an irradiation source they are used in medicine for radiation therapy and in industry for such as industrial radiography, food irradiation, sterilization, vermin disinfestation, and irradiation crosslinking of PVC.
Radionuclides are chosen according to the type and character of the radiation they emit, intensity of emission, and the half-life of their decay. Common source radionuclides include cobalt-60, iridium-192, and strontium-90. The SI unit of source activity is the becquerel, though the historical unit, the curie, is still in partial use, such as in the US, despite NIST strongly advising the use of the SI unit. Use of the SI unit for health purposes is mandatory in the EU.
An irradiation source typically lasts for between 5 and 15 years before its activity drops below useful levels. However sources with long half-life radionuclides when utilised as calibration sources can be used for much longer.
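As a worked example of why this is the typical lifetime, consider a cobalt-60 source; taking its half-life as approximately 5.27 years (a standard literature value, not quoted in this article), the remaining fraction of the initial activity after a time t is

\[
\frac{A(t)}{A_{0}} = 2^{-t/t_{1/2}}, \qquad \frac{A(15\ \mathrm{yr})}{A_{0}} = 2^{-15/5.27} \approx 0.14,
\]

so after about 15 years only ~14% of the original activity remains, consistent with the 5 to 15 year useful life quoted above.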
Sealed sources
Many radioactive sources are sealed, meaning they are permanently either completely contained in a capsule or firmly bonded solid to a surface. Capsules are usually made of stainless steel, titanium, platinum or another inert metal. The use of sealed sources removes almost all risk of dispersion of radioactive material into the environment due to mishandling, but the container is not intended to attenuate radiation, so further shielding is required for radiation protection. Sealed sources are used in almost all applications where the source does not need to be chemically or physically included in a liquid or gas.
Categorisation of sealed sources
Sealed sources are categorised by the IAEA according to their activity in relation to a minimum dangerous source (where a dangerous source is one that could cause significant injury to humans). The ratio used is A/D, where A is the activity of the source and D is the minimum dangerous activity.
Note that sources with radioactive output sufficiently low as not to cause harm to humans (such as those used in smoke detectors) are not categorised.
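A minimal sketch of the categorisation logic is shown below in Python; the A/D boundaries used (Category 1 at A/D ≥ 1000 down to Category 5 below 0.01) are the commonly quoted IAEA thresholds and should be treated as assumed values here rather than figures restated from this article.

def iaea_source_category(a_over_d):
    """Assign an IAEA sealed-source category from the A/D ratio (assumed thresholds)."""
    if a_over_d >= 1000:
        return 1    # most dangerous sources
    if a_over_d >= 10:
        return 2
    if a_over_d >= 1:
        return 3
    if a_over_d >= 0.01:
        return 4
    return 5        # unlikely to be dangerous

print(iaea_source_category(2500))   # -> 1
print(iaea_source_category(0.5))    # -> 4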
Calibration sources
Calibration sources are used primarily for the calibration of radiometric instrumentation, which is used on process monitoring or in radiological protection.
Capsule sources, where the radiation effectively emits from a point, are used for beta, gamma and X-ray instrument calibration. High level sources are normally used in a calibration cell: a room with thick walls to protect the operator and the provision of remote operation of the source exposure.
The plate source is in common use for the calibration of radioactive contamination instruments. This has a known amount of radioactive material fixed to its surface, such as an alpha and/or beta emitter, to allow the calibration of large area radiation detectors used for contamination surveys and personnel monitoring. Such measurements are typically counts per unit time received by the detector, such as counts per minute or counts per second.
Unlike the capsule source, the emitting material of a plate source must be on the surface to prevent attenuation by a container or self-shielding due to the material itself. This is particularly important with alpha particles, which are easily stopped by a small mass. The Bragg curve shows the attenuation effect in free air.
Unsealed sources
Unsealed sources are sources that are not in a permanently sealed container, and are used extensively for medical purposes. They are used when the source needs to be dissolved in a liquid for injection into a patient or ingestion by the patient. Unsealed sources are also used in industry in a similar manner for leak detection as a Radioactive tracer.
Disposal
Disposal of expired radioactive sources presents similar challenges to the disposal of other nuclear waste, although to a lesser degree. Spent low level sources will sometimes be sufficiently inactive that they are suitable for disposal via normal waste disposal methods — usually landfill. Other disposal methods are similar to those for higher-level radioactive waste, using various depths of borehole depending on the activity of the waste.
A notorious incident of neglect in disposing of a high level source was the Goiânia accident, which resulted in several fatalities. The Tammiku radioactive material theft involved the accidental theft of caesium-137 material in Tammiku, Estonia, in 1994.
See also
Common beta emitters
Commonly used gamma-emitting isotopes
Geiger counter
Ionizing radiation
Neutron source
References
Nuclear materials
Radioactivity | Radioactive source | [
"Physics",
"Chemistry"
] | 1,044 | [
"Matter",
"Materials",
"Nuclear materials",
"Nuclear physics",
"Radioactivity"
] |
49,905,706 | https://en.wikipedia.org/wiki/Moduli%20stack%20of%20elliptic%20curves | In mathematics, the moduli stack of elliptic curves, denoted as or , is an algebraic stack over classifying elliptic curves. Note that it is a special case of the moduli stack of algebraic curves . In particular its points with values in some field correspond to elliptic curves over the field, and more generally morphisms from a scheme to it correspond to elliptic curves over . The construction of this space spans over a century because of the various generalizations of elliptic curves as the field has developed. All of these generalizations are contained in .
Properties
Smooth Deligne-Mumford stack
The moduli stack of elliptic curves is a smooth separated Deligne–Mumford stack of finite type over $\operatorname{Spec}(\mathbb{Z})$, but is not a scheme as elliptic curves have non-trivial automorphisms.
j-invariant
There is a proper morphism from $\mathcal{M}_{1,1}$ to the affine line, the coarse moduli space of elliptic curves, given by the j-invariant of an elliptic curve.
Construction over the complex numbers
It is a classical observation that every elliptic curve over $\mathbb{C}$ is classified by its periods. Given a basis $\alpha, \beta$ for its integral homology and a global holomorphic differential form $\omega$ (which exists since it is smooth and the dimension of the space of such differentials is equal to the genus, 1), the integrals
\[
\omega_1 = \int_\alpha \omega, \qquad \omega_2 = \int_\beta \omega
\]
give the generators for a $\mathbb{Z}$-lattice of rank 2 inside of $\mathbb{C}$ pg 158. Conversely, given an integral lattice $\Lambda$ of rank 2 inside of $\mathbb{C}$, there is an embedding of the complex torus $\mathbb{C}/\Lambda$ into $\mathbb{P}^2$ from the Weierstrass $\wp$ function pg 165. This isomorphic correspondence $\mathbb{C}/\Lambda \to E$ is given by $z \mapsto [\wp(z) : \wp'(z) : 1]$ and holds up to homothety of the lattice $\Lambda$, which is the equivalence relation $\Lambda \sim c\Lambda$ for $c \in \mathbb{C}^{*}$. It is standard to then write the lattice in the form $\mathbb{Z} \oplus \tau\mathbb{Z}$ for $\tau \in \mathfrak{h}$, an element of the upper half-plane, since the lattice $\omega_1\mathbb{Z} \oplus \omega_2\mathbb{Z}$ could be multiplied by $\omega_1^{-1}$, and both $\pm\omega_2/\omega_1$ generate the same sublattice. Then, the upper half-plane gives a parameter space of all elliptic curves over $\mathbb{C}$. There is an additional equivalence of curves given by the action of the group $\mathrm{SL}_2(\mathbb{Z})$, where an elliptic curve defined by the lattice $\mathbb{Z} \oplus \tau\mathbb{Z}$ is isomorphic to curves defined by the lattice $\mathbb{Z} \oplus \tau'\mathbb{Z}$ given by the modular action
\[
\tau' = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot \tau = \frac{a\tau + b}{c\tau + d}.
\]
Then, the moduli stack of elliptic curves over $\mathbb{C}$ is given by the stack quotient
\[
\mathcal{M}_{1,1} \cong [\mathfrak{h} / \mathrm{SL}_2(\mathbb{Z})].
\]
Note some authors construct this moduli space by instead using the action of the modular group $\mathrm{PSL}_2(\mathbb{Z})$. In this case, the points in $\mathcal{M}_{1,1}$ having only trivial stabilizers are dense.
Stacky/Orbifold points
Generically, the points in $\mathcal{M}_{1,1}$ are isomorphic to the classifying stack $B(\mathbb{Z}/2)$ since every elliptic curve corresponds to a double cover of $\mathbb{P}^1$, so the $\mathbb{Z}/2$-action on the point corresponds to the involution of these two branches of the covering. There are a few special points pg 10-11 corresponding to elliptic curves with $j$-invariant equal to $1728$ and $0$, where the automorphism groups are of order 4, 6, respectively pg 170. One point in the fundamental domain with stabilizer of order 4 corresponds to $\tau = i$, and the points corresponding to the stabilizer of order 6 correspond to $\tau = e^{2\pi i/3}$ and $\tau = e^{\pi i/3}$ pg 78.
Representing involutions of plane curves
Given a plane curve by its Weierstrass equation $y^2 = x^3 + ax + b$ and a solution $(x, y)$, generically for $j$-invariant $j \neq 0, 1728$, there is the $\mathbb{Z}/2$-involution sending $(x, y) \mapsto (x, -y)$. In the special case of a curve with complex multiplication $y^2 = x^3 + ax$ there is the $\mathbb{Z}/4$-action sending $(x, y) \mapsto (-x, iy)$. The other special case is when $a = 0$, so for a curve of the form $y^2 = x^3 + b$ there is the $\mathbb{Z}/6$-action sending $(x, y) \mapsto (\zeta x, -y)$, where $\zeta$ is the third root of unity $e^{2\pi i/3}$.
Fundamental domain and visualization
There is a subset of the upper-half plane called the fundamental domain which contains every isomorphism class of elliptic curves. It is the subset
\[
D = \left\{ \tau \in \mathfrak{h} : |\mathrm{Re}(\tau)| \leq \tfrac{1}{2},\ |\tau| \geq 1 \right\}.
\]
It is useful to consider this space because it helps visualize the stack $\mathcal{M}_{1,1}$. From the quotient map $\mathfrak{h} \to \mathcal{M}_{1,1}$, the image of $D$ is all of $\mathcal{M}_{1,1}$ and the map is injective on the interior of $D$ pg 78. Also, the points on the boundary can be identified with their mirror image under the involution sending $\tau \mapsto -\bar{\tau}$, so $\mathcal{M}_{1,1}$ can be visualized as the projective curve $\mathbb{P}^1$ with a point removed at infinity pg 52.
Line bundles and modular functions
There are line bundles $\mathcal{L}^{\otimes k}$ over the moduli stack $\mathcal{M}_{1,1}$ whose sections correspond to modular functions $f$ on the upper-half plane $\mathfrak{h}$. On $\mathbb{C} \times \mathfrak{h}$ there are $\mathrm{SL}_2(\mathbb{Z})$-actions compatible with the action on $\mathfrak{h}$, given by the degree $k$ action
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot (z, \tau) = \left( (c\tau + d)^k z,\ \frac{a\tau + b}{c\tau + d} \right),
\]
hence the trivial line bundle $\mathbb{C} \times \mathfrak{h} \to \mathfrak{h}$ with the degree $k$ action descends to a unique line bundle denoted $\mathcal{L}^{\otimes k}$. Notice the action on the factor $\mathbb{C}$ is a representation of $\mathrm{SL}_2(\mathbb{Z})$ on $\mathbb{C}$, hence such representations can be tensored together, showing $\mathcal{L}^{\otimes k} \otimes \mathcal{L}^{\otimes l} \cong \mathcal{L}^{\otimes (k+l)}$. The sections of $\mathcal{L}^{\otimes k}$ are then functions $f : \mathfrak{h} \to \mathbb{C}$ compatible with the action of $\mathrm{SL}_2(\mathbb{Z})$, or equivalently, functions $f$ such that
\[
f\!\left( \frac{a\tau + b}{c\tau + d} \right) = (c\tau + d)^k f(\tau).
\]
This is exactly the condition for a holomorphic function to be modular.
Modular forms
The modular forms are the modular functions which can be extended to the compactification $\overline{\mathcal{M}}_{1,1}$; this is because in order to compactify the stack $\mathcal{M}_{1,1}$, a point at infinity must be added, which is done through a gluing process by gluing the $q$-disk (where a modular function has its $q$-expansion) pgs 29-33.
Universal curves
Constructing the universal curves is a two step process: (1) construct a versal curve $\mathcal{E}_{\mathfrak{h}} \to \mathfrak{h}$ and then (2) show this behaves well with respect to the $\mathrm{SL}_2(\mathbb{Z})$-action on $\mathfrak{h}$. Combining these two actions together yields the quotient stack
\[
\mathcal{E} \cong \left[ (\mathbb{Z}^2 \rtimes \mathrm{SL}_2(\mathbb{Z})) \backslash (\mathbb{C} \times \mathfrak{h}) \right].
\]
Versal curve
Every rank 2 $\mathbb{Z}$-lattice in $\mathbb{C}$ induces a canonical $\mathbb{Z}^2$-action on $\mathbb{C}$. As before, since every lattice is homothetic to a lattice of the form $\mathbb{Z} \oplus \tau\mathbb{Z}$, the action sends a point $z \in \mathbb{C}$ to
\[
(m, n) \cdot z = z + m + n\tau.
\]
Because the $\tau$ in $\mathfrak{h}$ can vary in this action, there is an induced $\mathbb{Z}^2$-action on $\mathbb{C} \times \mathfrak{h}$ giving the quotient space
\[
\mathcal{E}_{\mathfrak{h}} = \mathbb{Z}^2 \backslash (\mathbb{C} \times \mathfrak{h}),
\]
which maps to $\mathfrak{h}$ by projecting onto the second factor.
SL2-action on Z2
There is an $\mathrm{SL}_2(\mathbb{Z})$-action on $\mathbb{Z}^2$ which is compatible with the action on $\mathfrak{h}$, meaning that given a point $\tau \in \mathfrak{h}$ and a $\gamma \in \mathrm{SL}_2(\mathbb{Z})$, the new lattice $\mathbb{Z} \oplus \gamma(\tau)\mathbb{Z}$ carries an induced action from $\mathbb{Z}^2$, which behaves as expected. This action is given by
\[
(m, n) \cdot \begin{pmatrix} a & b \\ c & d \end{pmatrix} = (ma + nc,\ mb + nd),
\]
which is matrix multiplication on the right, so the two actions combine into an action of the semidirect product $\mathbb{Z}^2 \rtimes \mathrm{SL}_2(\mathbb{Z})$ on $\mathbb{C} \times \mathfrak{h}$.
See also
Fundamental domain
Homothety
Level structure (algebraic geometry)
Moduli of abelian varieties
Shimura variety
Modular curve
Elliptic cohomology
References
External links
Algebraic geometry | Moduli stack of elliptic curves | [
"Mathematics"
] | 1,199 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
46,872,870 | https://en.wikipedia.org/wiki/Complex%20spacetime | Complex spacetime is a mathematical framework that combines the concepts of complex numbers and spacetime in physics. In this framework, the usual real-valued coordinates of spacetime are replaced with complex-valued coordinates. This allows for the inclusion of imaginary components in the description of spacetime, which can have interesting implications in certain areas of physics, such as quantum field theory and string theory.
The notion is entirely mathematical with no physics implied, but should be seen as a tool, for instance, as exemplified by the Wick rotation.
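As a standard illustration of this point (the formulas below are the usual textbook statement of the Wick rotation, not anything specific to this article), replacing real time by an imaginary time coordinate turns the Minkowski line element into a Euclidean one:

\[
t \;\to\; -i\tau, \qquad
ds^{2} = -c^{2}dt^{2} + dx^{2} + dy^{2} + dz^{2}
\;\longrightarrow\;
ds^{2} = c^{2}d\tau^{2} + dx^{2} + dy^{2} + dz^{2},
\]

so computations in Minkowski spacetime can sometimes be carried out in an auxiliary Euclidean space and analytically continued back.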
Real and complex spaces
Mathematics
The complexification of a real vector space results in a complex vector space (over the complex number field). To "complexify" a space means extending ordinary scalar multiplication of vectors by real numbers to scalar multiplication by complex numbers. For complexified inner product spaces, the complex inner product on vectors replaces the ordinary real-valued inner product, an example of the latter being the dot product.
In mathematical physics, when we complexify a real coordinate space $\mathbb{R}^n$ we create a complex coordinate space $\mathbb{C}^n$, referred to in differential geometry as a "complex manifold". The space $\mathbb{C}^n$ can be related to $\mathbb{R}^{2n}$, since every complex number constitutes two real numbers.
A complex spacetime geometry refers to the metric tensor being complex, not spacetime itself.
Physics
The Minkowski space of special relativity (SR) and general relativity (GR) is a 4 dimensional pseudo-Euclidean space. The spacetime underlying Albert Einstein's field equations, which mathematically describe gravitation, is a real 4 dimensional pseudo-Riemannian manifold.
In quantum mechanics, wave functions describing particles are complex-valued functions of real space and time variables. The set of all wavefunctions for a given system is an infinite-dimensional complex Hilbert space.
History
The notion of spacetime having more than four dimensions is of interest in its own mathematical right. Its appearance in physics can be rooted to attempts of unifying the fundamental interactions, originally gravity and electromagnetism. These ideas prevail in string theory and beyond. The idea of complex spacetime has received considerably less attention, but it has been considered in conjunction with the Lorentz–Dirac equation and the Maxwell equations. Other ideas include mapping real spacetime into a complex representation space of SU(2,2); see twistor theory.
In 1919, Theodor Kaluza posted his 5-dimensional extension of general relativity to Albert Einstein, who was impressed with how the equations of electromagnetism emerged from Kaluza's theory. In 1926, Oskar Klein suggested that Kaluza's extra dimension might be "curled up" into an extremely small circle, as if a circular topology is hidden within every point in space. Instead of being another spatial dimension, the extra dimension could be thought of as an angle, which created a hyper-dimension as it spun through 360°. This 5d theory is named Kaluza–Klein theory.
In 1932, Hsin P. Soh of MIT, advised by Arthur Eddington, published a theory attempting to unify gravitation and electromagnetism within a complex 4-dimensional Riemannian geometry. The line element ds2 is complex-valued, so that the real part corresponds to mass and gravitation, while the imaginary part corresponds to charge and electromagnetism. The usual space x, y, z and time t coordinates themselves are real and spacetime is not complex, but tangent spaces are allowed to be.
For several decades after Einstein published his general theory of relativity in 1915, he tried to unify gravity with electromagnetism to create a unified field theory explaining both interactions. In the latter years of World War II, Einstein began considering complex spacetime geometries of various kinds.
In 1953, Wolfgang Pauli generalised the Kaluza–Klein theory to a six-dimensional space, and (using dimensional reduction) derived the essentials of an SU(2) gauge theory (applied in quantum mechanics to the electroweak interaction), as if Klein's "curled up" circle had become the surface of an infinitesimal hypersphere.
In 1975, Jerzy Plebanski published "Some Solutions of Complex Einstein Equations".
There have been attempts to formulate the Abraham–Lorentz force in complex spacetime by analytic continuation.
See also
Construction of a complex null tetrad
Four-vector
Hilbert space
Twistor space
Spherical basis
Riemann–Silberstein vector
References
Further reading
Spacetime
Theory of relativity | Complex spacetime | [
"Physics",
"Mathematics"
] | 899 | [
"Spacetime",
"Vector spaces",
"Space (mathematics)",
"Theory of relativity"
] |
46,877,391 | https://en.wikipedia.org/wiki/Precession%20electron%20diffraction | Precession electron diffraction (PED) is a specialized method to collect electron diffraction patterns in a transmission electron microscope (TEM). By rotating (precessing) a tilted incident electron beam around the central axis of the microscope, a PED pattern is formed by integration over a collection of diffraction conditions. This produces a quasi-kinematical diffraction pattern that is more suitable as input into direct methods algorithms to determine the crystal structure of the sample.
Overview
Geometry
Precession electron diffraction is accomplished utilizing the standard instrument configuration of a modern TEM. The animation illustrates the geometry used to generate a PED pattern. Specifically, the beam tilt coils located pre-specimen are used to tilt the electron beam off of the optic axis so it is incident with the specimen at an angle, φ. The image shift coils post-specimen are then used to tilt the diffracted beams back in a complementary manner such that the direct beam falls in the center of the diffraction pattern. Finally, the beam is precessed around the optic axis while the diffraction pattern is collected over multiple revolutions.
The result of this process is a diffraction pattern that consists of a summation or integration over the patterns generated during precession. While the geometry of this pattern matches the pattern associated with a normally incident beam, the intensities of the various reflections approximate those of the kinematical pattern much more closely. At any moment in time during precession, the diffraction pattern consists of a Laue circle with a radius equal to the precession angle, φ. These snapshots contain far fewer strongly excited reflections than a normal zone axis pattern and extend farther into reciprocal space. Thus, the composite pattern will display far less dynamical character, and will be well suited for use as input into direct methods calculations.
Advantages
PED possesses many advantageous attributes that make it well suited to investigating crystal structures via direct methods approaches:
Quasi-kinematical diffraction patterns: While the underlying physics of the electron diffraction is still dynamical in nature, the conditions used to collect PED patterns minimize many of these effects. The scan/de-scan procedure reduces ion channeling because the pattern is generated off of the zone axis. Integration via precession of the beam minimizes the effect of non-systematic inelastic scattering, such as Kikuchi lines. Few reflections are strongly excited at any moment during precession, and those that are excited are generally much closer to a two-beam condition (dynamically coupled only to the forward-scattered beam). Furthermore, for large precession angles, the radius of the excited Laue circle becomes quite large. These contributions combine such that the overall integrated diffraction pattern resembles the kinematical pattern much more closely than a single zone axis pattern.
Broader range of measured reflections: The Laue circle (see Ewald sphere) that is excited at any given moment during precession extends farther into reciprocal space. After integration over multiple precessions, many more reflections in the zeroeth order Laue zone (ZOLZ) are present, and as stated previously, their relative intensities are much more kinematical. This provides considerably more information to input into direct methods calculations, improving the accuracy of phase determination algorithms. Similarly, more higher order Laue zone (HOLZ) reflections are present in the pattern, which can provide more complete information about the three-dimensional nature of reciprocal space, even in a single two-dimensional PED pattern.
Practical robustness: PED is less sensitive to small experimental variations than other electron diffraction techniques. Since the measurement is an average over many incident beam directions, the pattern is less sensitive to slight misorientation of the zone axis from the optic axis of the microscope, and resulting PED patterns will generally still display the zone axis symmetry. The patterns obtained are also less sensitive to the thickness of the sample, a parameter with strong influence in standard electron diffraction patterns.
Very small probe size: Because x-rays interact so weakly with matter, there is a minimum size limit of approximately 5 μm for single crystals that can be examined via x-ray diffraction methods. In contrast, electrons can be used to probe much smaller nano-crystals in a TEM. In PED, the probe size is limited by the lens aberrations and sample thickness. With a typical value for spherical aberration, the minimum probe size is usually around 50 nm. However, with Cs corrected microscopes, the probe can be made much smaller.
Practical considerations
Precession electron diffraction is typically conducted using accelerating voltages between 100-400 kV. Patterns can be formed under parallel or convergent beam conditions. Most modern TEMs can achieve a tilt angle, φ, ranging from 0-3°. Precession frequencies can be varied from Hz to kHz, but in standard cases 60 Hz has been used. In choosing a precession rate, it is important to ensure that many revolutions of the beam occur over the relevant exposure time used to record the diffraction pattern. This ensures adequate averaging over the excitation error of each reflection. Beam sensitive samples may dictate shorter exposure times and thus, motivate the use of higher precession frequencies.
One of the most significant parameters affecting the diffraction pattern obtained is the precession angle, φ. In general, larger precession angles result in more kinematical diffraction patterns, but both the capabilities of the beam tilt coils in the microscope and the requirements on the probe size limit how large this angle can become in practice. Because PED takes the beam off of the optic axis by design, it accentuates the effect of the spherical aberrations within the probe forming lens. For a given spherical aberration, Cs, the probe diameter, d, varies with convergence angle, α, and precession angle, φ, as
Thus, if the specimen of interest is quite small, the maximum precession angle will be restrained. This is most significant for conditions of convergent beam illumination. 50 nm is a general lower limit on probe size for standard TEMs operating at high precession angles (>30 mrad), but can be surpassed in Cs corrected instruments. In principle the minimum precessed probe can reach approximately the full-width-half-max (FWHM) of the converged un-precessed probe in any instrument, however in practice the effective precessed probe is typically ~10-50x larger due to uncontrolled aberrations present at high angles of tilt. For example, a 2 nm precessed probe with >40 mrad precession angle was demonstrated in an aberration-corrected Nion UltraSTEM with native sub-Å probe (aberrations corrected to ~35 mrad half-angle).
If the precession angle is made too large, further complications due to the overlap of the ZOLZ and HOLZ reflections in the projected pattern can occur. This complicates the indexing of the diffraction pattern and can corrupt the measured intensities of reflections near the overlap region, thereby reducing the effectiveness of the collected pattern for direct methods calculations.
Theoretical considerations
For an introduction to the theory of electron diffraction, see part 2 of Williams and Carter's Transmission Electron Microscopy text.
While it is clear that precession reduces many of the dynamical diffraction effects that plague other forms of electron diffraction, the resulting patterns cannot be considered purely kinematical in general. There are models that attempt to introduce corrections to convert measured PED patterns into true kinematical patterns that can be used for more accurate direct methods calculations, with varying degrees of success. Here, the most basic corrections are discussed. In purely kinematical diffraction, the intensities of the various reflections, $I_g$, are related to the square of the amplitude of the structure factor, $F_g$, by the equation:
\[
I_{g}^{\,kinematical} \propto \left| F_g \right|^{2}
\]
This relationship is generally far from accurate for experimental dynamical electron diffraction and when many reflections have a large excitation error. First, a Lorentz correction analogous to that used in x-ray diffraction can be applied to account for the fact that reflections are infrequently exactly at the Bragg condition over the course of a PED measurement. This geometrical correction factor can be shown to assume the approximate form:
where g is the reciprocal space magnitude of the reflection in question and Ro is the radius of the Laue circle, usually taken to be equal to φ. While this correction accounts for the integration over the excitation error, it takes no account for the dynamical effects that are ever-present in electron diffraction. This has been accounted for using a two-beam correction following the form of the Blackman correction originally developed for powder x-ray diffraction. Combining this with the aforementioned Lorentz correction yields:
where , is the sample thickness, and is the wave-vector of the electron beam. is the Bessel function of zeroeth order.
This form seeks to correct for both geometric and dynamical effects, but is still only an approximation that often fails to significantly improve the kinematic quality of the diffraction pattern (sometimes even worsening it). More complete and accurate treatments of these theoretical correction factors have been shown to adjust measured intensities into better agreement with kinematical patterns. For details, see Chapter 4 of reference.
Only by considering the full dynamical model through multislice calculations can the diffraction patterns generated by PED be simulated. However, this requires the crystal potential to be known, and thus is most valuable in refining the crystal potentials suggested through direct methods approaches. The theory of precession electron diffraction is still an active area of research, and efforts to improve on the ability to correct measured intensities without a priori knowledge are ongoing.
Historical development
The first precession electron diffraction system was developed by Vincent and Midgley in Bristol, UK and published in 1994. Preliminary investigation into the Er2Ge2O7 crystal structure demonstrated the feasibility of the technique at reducing dynamical effects and providing quasi-kinematical patterns that could be solved through direct methods to determine crystal structure. Over the next ten years, a number of university groups developed their own precession systems and verified the technique by solving complex crystal structures, including the groups of J. Gjønnes (Oslo), Migliori (Bologna), and L. Marks (Northwestern).
In 2004, NanoMEGAS developed the first commercial precession system capable of being retrofit to any modern TEM. This hardware solution enabled more widespread implementation of the technique and spurred its more mainstream adoption into the crystallography community. Software methods have also been developed to achieve the necessary scanning and descanning using the built-in electronics of the TEM. HREM Research Inc has developed the QED plug-in for the DigitalMicrograph software. This plug-in enables the widely used software package to collect precession electron diffraction patterns without additional modifications to the microscope.
According to NanoMEGAS, as of June, 2015, more than 200 publications have relied on the technique to solve or corroborate crystal structures; many on materials that could not be solved by other conventional crystallography techniques like x-ray diffraction. Their retrofit hardware system is used in more than 75 laboratories across the world.
Applications
Crystallography
The primary goal of crystallography is to determine the three dimensional arrangement of atoms in a crystalline material. While historically, x-ray crystallography has been the predominant experimental method used to solve crystal structures ab initio, the advantages of precession electron diffraction make it one of the preferred methods of electron crystallography.
Symmetry determination
The symmetry of a crystalline material has profound impacts on its emergent properties, including electronic band structure, electromagnetic behavior, and mechanical properties. Crystal symmetry is described and categorized by the crystal system, lattice, and space group of the material. Determination of these attributes is an important aspect of crystallography.
Precession electron diffraction enables much more direct determination of space group symmetries over other forms of electron diffraction. Because of the increased number of reflections in both the zero order Laue zone and higher order Laue zones, the geometric relationship between Laue zones is more readily determined. This provides three-dimensional information about the crystal structure that can be used to determine its space group. Furthermore, because the PED technique is insensitive to slight misorientation from the zone axis, it provides the practical benefit of more robust data collection.
Direct methods
Direct methods in crystallography are a collection of mathematical techniques that seek to determine crystal structure based on measurements of diffraction patterns and potentially other a priori knowledge (constraints). The central challenge of inverting measured diffraction intensities (i.e. applying an inverse Fourier Transform) to determine the original crystal potential is that phase information is lost in general since intensity is a measurement of the square of the modulus of the amplitude of any given diffracted beam. This is known as the phase problem of crystallography.
If the diffraction can be considered kinematical, constraints may be used to probabilistically relate the phases of the reflections to their amplitudes, and the original structure can be solved via direct methods (see Sayre equation as an example). Kinematical diffraction is often the case in x-ray diffraction, and is one of the primary reasons that technique has been so successful at solving crystal structures. However, in electron diffraction, the probing wave interacts much more strongly with the electrostatic crystal potential, and complex dynamical diffraction effects can dominate the measured diffraction patterns. This makes application of direct methods much more challenging without a priori knowledge of the structure in question.
Ab Initio structure determination
Diffraction patterns collected through PED often agree well-enough with the kinematical pattern to serve as input data for direct methods calculations. A three-dimensional set of intensities mapped over the reciprocal lattice can be generated by collecting diffraction patterns over multiple zone axes. Applying direct methods to this data set will then yield probable crystal structures. Coupling direct methods results with simulations (e.g. multislice) and iteratively refining the solution can lead to the ab initio determination of the crystal structure.
The PED technique has been used to determine the crystal structure of many classes of materials. Initial investigations during the emergence of the technique focused on complex oxides and nano-precipitates in aluminum alloys that could not be resolved using x-ray diffraction. Since the technique has become a more widespread crystallographic tool, many more complex metal oxide structures have been solved.
Zeolites are a technologically valuable class of materials that have historically been difficult to solve using x-ray diffraction due to the large unit cells that typically occur. PED has been demonstrated to be a viable alternative to solving many of these structures, including the ZSM-10, MCM-68, and many of the ITQ-n class of zeolite structures.
PED also enables the use of electron diffraction to investigate beam-sensitive organic materials. Because PED can reproduce symmetric zone axis diffraction patterns even when the zone axis is not perfectly aligned, it enables information to be extracted from sensitive samples without risking overexposure during a time-intensive orientation of the sample.
Automated diffraction tomography
Automated diffraction tomography (ADT) uses software to collect diffraction patterns over a series of slight tilt increments. In this way, a three-dimensional (tomographic) data set of reciprocal lattice intensities can be generated and used for structure determination. By coupling this technique with PED, the range and quality of the data set can be improved. The combination of ADT-PED has been employed effectively to investigate complex framework structures and beam-sensitive organic crystals.
Orientation mapping
Mapping the relative orientation of crystalline grains and/or phases helps understand material texture at the micro and nano scales. In a transmission electron microscope, this is accomplished by recording a diffraction pattern at a large number of points (pixels) over a region of the crystalline specimen. By comparing the recorded patterns to a database of known patterns (either previously indexed experimental patterns or simulated patterns), the relative orientation of grains in the field of view can be determined.
Because this process is highly automated, the quality of the recorded diffraction patterns is crucial to the software's ability to accurately compare and assign orientations to each pixel. Thus, the advantages of PED are well-suited for use with this scanning technique. By instead recording a PED pattern at each pixel, dynamical effects are reduced, and the patterns are more easily compared to simulated data, improving the accuracy of the automated phase/orientation assignment.
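The comparison step can be sketched as a normalized cross-correlation between a recorded pattern and a bank of simulated templates, with the best-scoring template giving the orientation assigned to that pixel. The sketch below is schematic only: the function names, the toy 8×8 "patterns", and the template bank are invented for this illustration and do not correspond to any particular orientation-mapping package.

```python
import numpy as np

def normalized_correlation(pattern, template):
    """Score the similarity of a recorded pattern and one simulated template."""
    p = (pattern - pattern.mean()) / (pattern.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float((p * t).mean())

def assign_orientation(pattern, template_bank):
    """Return the best-matching orientation label and all scores for one pixel."""
    scores = {label: normalized_correlation(pattern, template)
              for label, template in template_bank.items()}
    return max(scores, key=scores.get), scores

# Toy usage: two fake templates and a noisy measurement of the first one.
rng = np.random.default_rng(1)
bank = {"orientation_A": rng.random((8, 8)), "orientation_B": rng.random((8, 8))}
measured = bank["orientation_A"] + 0.1 * rng.random((8, 8))
best, all_scores = assign_orientation(measured, bank)
print(best, all_scores)
```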
Beyond diffraction
Although the PED technique was initially developed for its improved diffraction applications, the advantageous properties of the technique have been found to enhance many other investigative techniques in the TEM. These include bright field and dark field imaging, electron tomography, and composition-probing techniques like energy-dispersive x-ray spectroscopy (EDS) and electron energy loss spectroscopy (EELS).
Imaging
Though many people conceptualize images and diffraction patterns separately, they contain principally the same information. In the simplest approximation, the two are simply Fourier transforms of one another. Thus, the effects of beam precession on diffraction patterns also have significant effects on the corresponding images in the TEM. Specifically, the reduced dynamical intensity transfer between beams that is associated with PED results in reduced dynamical contrast in images collected during precession of the beam. This includes a reduction in thickness fringes, bend contours, and strain fields. While these features can often provide useful information, their suppression enables a more straightforward interpretation of diffraction contrast and mass contrast in images.
Tomography
In an extension of the application of PED to imaging, electron tomography can benefit from the reduction of dynamic contrast effects. Tomography entails collecting a series of images (2-D projections) at various tilt angles and combining them to reconstruct the three dimensional structure of the specimen. Because many dynamical contrast effects are highly sensitive to the orientation of the crystalline sample with respect to the incident beam, these effects can convolute the reconstruction process in tomography. Similarly to single imaging applications, by reducing dynamical contrast, interpretation of the 2-D projections and thus the 3-D reconstruction are more straightforward.
Investigating composition
Energy-dispersive x-ray spectroscopy (EDS) and electron energy loss spectroscopy (EELS) are commonly used techniques to both qualitatively and quantitatively probe the composition of samples in the TEM. A primary challenge in the quantitative accuracy of both techniques is the phenomenon of channelling. Put simply, in a crystalline solid, the probability of interaction between an electron and ion in the lattice depends strongly on the momentum (direction and velocity) of the electron. When probing a sample under diffraction conditions near a zone axis, as is often the case in EDS and EELS applications, channelling can have a large impact on the effective interaction of the incident electrons with specific ions in the crystal structure. In practice, this can lead to erroneous measurements of composition that depend strongly on the orientation and thickness of the sample and the accelerating voltage. Since PED entails an integration over incident directions of the electron probe, and generally does not include beams parallel to the zone axis, the detrimental channeling effects outlined above can be minimized, yielding far more accurate composition measurements in both techniques.
References
External links
NanoMEGAS
System Design and Verification of the Precession Electron Diffraction Technique, Ph.D. Thesis, C.S. Own
Diffraction
Crystallography | Precession electron diffraction | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,105 | [
"Spectrum (physical sciences)",
"Materials science",
"Crystallography",
"Diffraction",
"Condensed matter physics",
"Spectroscopy"
] |
32,450,455 | https://en.wikipedia.org/wiki/Teva%20Active%20Pharmaceutical%20Ingredients | Teva API is an international pharmaceutical company headquartered in Israel. Teva API is a stand-alone business unit of Teva Pharmaceutical Industries limited, the largest generic drug manufacturer in the world and one of the 15 largest pharmaceutical companies worldwide.
On top of supplying a major share of Teva's own needs, the Teva API division is an active competitor in world markets, investing both in the development of new products and manufacturing processes and in the upgrading of production facilities. In 2014, Teva API's sales to third parties totaled $724 million. In recent years growth has occurred in all of Teva API's principal geographical markets: North America, Europe and International.
History
At the heart of the API division is the Israel-based Teva-Tech, formerly known as the Assia and Plantex plants, which manufacture, develop and market raw materials for pharmaceuticals.
Acquisitions
Teva api has grown by acquiring manufacturing and development facilities around the world. Today, Teva api operates 21 plants and sales offices worldwide. Major Teva api acquisitions:
2011 - Teva Acquisition of Theramex in Monaco
Teva api 2008 Acquisitions
December 2008 – Teva Acquisition of Barr-Pliva in Croatia
July 2008 – Teva Acquisition of Bentley in Spain
April 2008 – Teva api Acquisition of Archimica in Puerto Rico
Acquisitions & Foundations since the 1990s
2006 – Acquisition of Wanma in China
2006 – Acquisition of Ivax Corporation
2004 – Acquisition of Sicor API in Italy, Mexico and Switzerland
2003 – Acquisition of RDL in India
2002 – Acquisition of PFC in Italy
1996 – Acquisition of BIOCRAFT in Missouri (US)
1995 – Foundation of TEVA-TECH in Israel
1995 – Acquisition of ICI in Italy & BIOGAL in Hungary
1991 – Acquisition of PROSINTEX in Italy
Facilities
Research and development
The R&D group at teva api consists of a team of over 760 top scientists located in 7 development centers worldwide: A large center in Israel (synthetic products and peptides), a large center in Hungary (fermentation and semi-synthetic products), and a facility in India and additional sites in Italy, Croatia, Mexico and the Czech Republic (development of high potency API).
teva api's R&D focuses on the development of processes for the manufacturing of API, including intermediates, chemical and biological (fermentation), which are of interest to the generic drug industry, as well as Teva's proprietary drugs. The API R&D division also seeks methods to continuously reduce API production costs, enabling teva api to improve its cost structure.
Manufacturing and technology
Teva has 15 API production facilities located in Israel, Hungary, Italy, the U.S., the Czech Republic, India, Mexico, Puerto Rico, Monaco, China and Croatia. TAPI holds expertise in a variety of production technologies, including chemical synthesis, semi-synthetic fermentation, enzymatic synthesis, high-potency manufacturing, plant extract technology, synthetic peptides, vitamin D derivatives and prostaglandins. Also, its advanced technology and expertise in the field of solid-state particle technology enable it to meet specifications for particle size distribution (PSD), bulk density, specific surface area, and polymorphism, as well as other characteristics.
Teva's API facilities meet all applicable current Good Manufacturing Practices (cGMP) requirements under U.S., European, Japanese, and other applicable quality standards. In some of the products that are sold in the U.S., TAPI utilizes controlled substances and therefore must meet the requirements of the Controlled Substances Act and the related regulations administered by the Drug Enforcement Administration.
Products
Teva api produces approximately 400 active pharmaceutical ingredients covering a wide range of products, including respiratory, cardiovascular, anti-cholesterol, central nervous system, dermatological, hormones, anti-inflammatory, oncology, immunosuppressants and muscle relaxants. Its API intellectual property portfolio includes over 1,200 granted patents and pending applications worldwide.
See also
Teva Pharmaceutical Industries
Active pharmaceutical ingredient
References
Pharmaceutical companies of Israel
Science and technology in Israel
Israeli brands
Pharmaceutical companies established in 1935
Life sciences industry
Companies based in Petah Tikva | Teva Active Pharmaceutical Ingredients | [
"Biology"
] | 848 | [
"Life sciences industry"
] |
32,451,837 | https://en.wikipedia.org/wiki/Ramp%20generator | In electronics and electrical engineering, a ramp generator is a circuit that creates a linear rising or falling output with respect to time. The output variable is usually voltage, although current ramps can be created.
Linear ramp generators are also known as sweep generators.
Ramp generators produce a sawtooth waveform. For example, suppose a 3 V reference is applied to the X input of a comparator and the ramp generator output is connected to the Y input. While the ramp voltage is still lower than the reference on the X input, the comparator output is high (logic 1); as soon as the ramp voltage equals or exceeds X, the comparator output goes low.
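The comparator behaviour described above can be simulated in a few lines; the 3 V threshold matches the example, while the ramp rate and period below are arbitrary values chosen only for illustration.

```python
import numpy as np

threshold = 3.0    # reference voltage on the X input (volts)
ramp_rate = 1.0    # ramp slope in volts per millisecond (arbitrary)
period = 5.0       # sawtooth period in milliseconds (arbitrary)

t = np.linspace(0.0, 10.0, 1000)    # time axis in milliseconds
ramp = ramp_rate * (t % period)     # sawtooth ramp applied to the Y input

# Comparator: output high (1) while the ramp is below the reference,
# low (0) once the ramp reaches or exceeds it.
output = np.where(ramp < threshold, 1, 0)

# With these numbers the output is high for 3/5 of each period.
print("fraction of time the output is high:", output.mean())
```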
Applications
Voltage and current linear ramp generators find wide application in instrumentation and communication systems.
Ramp generators are used with electrical generators or electric motors to avoid transients when the load changes. Some ramp generators also offer the possibility to adjust the start-up and return (ramp-down) times.
Implementation
Originally, ramp generators were implemented as analog hardware devices.
References
Analog circuits
Power engineering | Ramp generator | [
"Engineering"
] | 198 | [
"Analog circuits",
"Energy engineering",
"Electronic engineering",
"Power engineering",
"Electrical engineering"
] |
32,451,857 | https://en.wikipedia.org/wiki/Tryptophol | Tryptophol is an aromatic alcohol that induces sleep in humans. It is found in wine as a secondary product of ethanol fermentation. It was first described by Felix Ehrlich in 1912. It is also produced by the trypanosomal parasite in sleeping sickness.
It forms in the liver as a side-effect of disulfiram treatment.
Natural occurrences
Tryptophol can be found in Pinus sylvestris needles or seeds.
It is produced by the trypanosomal parasite (Trypanosoma brucei) in sleeping sickness (African trypanosomiasis).
Tryptophol is found in wine and beer as a secondary product of ethanol fermentation (a product also known as congener) by Saccharomyces cerevisiae.
It is also an autoantibiotic produced by the fungus Candida albicans.
It can also be isolated from the marine sponge Ircinia spiculosa.
Metabolism
Biosynthesis
It was first described by Felix Ehrlich in 1912. Ehrlich demonstrated that yeast attacks the natural amino acids essentially by splitting off carbon dioxide and replacing the amino group with hydroxyl. By this reaction, tryptophan gives rise to tryptophol. Tryptophan is first deaminated to 3-indolepyruvate. It is then decarboxylated to indole acetaldehyde by indolepyruvate decarboxylase. This latter compound is transformed to tryptophol by alcohol dehydrogenase.
It is formed from tryptophan, along with indole-3-acetic acid in rats infected by Trypanosoma brucei gambiense.
An efficient conversion of tryptophan to indole-3-acetic acid and/or tryptophol can be achieved by some species of fungi in the genus Rhizoctonia.
Biodegradation
In Cucumis sativus (cucumber), the enzymes indole-3-acetaldehyde reductase (NADH) and indole-3-acetaldehyde reductase (NADPH) use tryptophol to form (indol-3-yl)acetaldehyde.
Glycosides
The unicellular alga Euglena gracilis converts exogenous tryptophol to two major metabolites: tryptophol galactoside and an unknown compound (a tryptophol ester), and to minor amounts of indole-3-acetic acid, tryptophol acetate, and tryptophol glucoside.
Biological effects
Tryptophol and its derivatives 5-hydroxytryptophol and 5-methoxytryptophol, induce sleep in mice. It induces a sleep-like state that lasts less than an hour at the 250 mg/kg dose. These compounds may play a role in physiological sleep mechanisms. It may be a functional analog of serotonin or melatonin, compounds involved in sleep regulation.
Tryptophol shows genotoxicity in vitro.
Tryptophol is a quorum sensing molecule for the yeast Saccharomyces cerevisiae. It is also found in the bloodstream of patients with chronic trypanosomiasis. For that reason, it may be a quorum sensing molecule for the trypanosome parasite.
In the case of trypanosome infection, tryptophol decreases the immune response of the host.
As it is formed in the liver after ethanol ingestion or disulfiram treatment, it is also associated with the study of alcoholism. Pyrazole and ethanol have been shown to inhibit the conversion of exogenous tryptophol to indole-3-acetic acid and to potentiate the sleep-inducing hypothermic effects of tryptophol in mice.
It is a growth promoter of cucumber hypocotyl segments. The auxinic action in terms of embryo formation is even better for tryptophol arabinoside on Cucurbita pepo hypocotyl fragments.
See also
Wine chemistry
References
Auxins
Human drug metabolites
Hypnotics
Indoles
Primary alcohols | Tryptophol | [
"Chemistry",
"Biology"
] | 888 | [
"Hypnotics",
"Behavior",
"Sleep",
"Human drug metabolites",
"Chemicals in medicine"
] |
32,451,875 | https://en.wikipedia.org/wiki/C10H11NO | {{DISPLAYTITLE:C10H11NO}}
The molecular formula C10H11NO (molar mass: 161.20 g/mol, exact mass: 161.084064 u) may refer to:
Tryptophol
Abikoviromycin
Molecular formulas | C10H11NO | [
"Physics",
"Chemistry"
] | 61 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
32,452,192 | https://en.wikipedia.org/wiki/Hydrogen%20astatide | Hydrogen astatide, also known as astatine hydride, astatane, astatidohydrogen or hydroastatic acid, is a chemical compound with the chemical formula HAt, consisting of an astatine atom covalently bonded to a hydrogen atom. It thus is a hydrogen halide.
This chemical compound can dissolve in water to form hydroastatic acid, which exhibits properties very similar to the other five binary acids, and is in fact the strongest among them. However, it is limited in use due to its ready decomposition into elemental hydrogen and astatine, as well as the short half-life of the various isotopes of astatine. Because the two atoms have nearly equal electronegativities, and as the At+ ion has been observed, dissociation could easily result in the hydrogen carrying the negative charge. Thus, a hydrogen astatide sample can undergo the following reaction:

2 HAt → H2 + At2
This results in elemental hydrogen gas and an astatine precipitate. Furthermore, a trend for hydrogen halides, or HX, is that the enthalpy of formation becomes less negative (decreases in magnitude) as the halide becomes larger. Whereas hydroiodic acid solutions are stable, the hydronium-astatide solution is clearly less stable than the water-hydrogen-astatine system. Finally, radiolysis from astatine nuclei could sever the H–At bonds.
Additionally, astatine has no stable isotopes. The most stable is astatine-210, which has a half-life of approximately 8.1 hours, making its chemical compounds especially difficult to work with, as the astatine will quickly decay into other elements.
Preparation
Hydrogen astatide can be produced by reacting astatine with hydrocarbons (such as ethane):

C2H6 + At2 → C2H5At + HAt
This reaction also produces the corresponding alkyl astatide, in this case ethyl astatide (astatoethane).
References
Hydrogen compounds
Astatine compounds
Nonmetal halides
Diatomic molecules
Hydrides
Mineral acids | Hydrogen astatide | [
"Physics",
"Chemistry"
] | 429 | [
"Acids",
"Inorganic compounds",
"Mineral acids",
"Molecules",
"Diatomic molecules",
"Matter"
] |
32,452,904 | https://en.wikipedia.org/wiki/Schultze%20reagent | Schultze reagent (also known as Chlor-Zinc-Iodine Solution) is an oxidizing mixture consisting of a saturated aqueous solution of potassium chlorate KClO3 and varying amounts of concentrated nitric acid HNO3. It is commonly used in palynologic macerations. It was invented by Max Schultze. It is used to determine whether a substance contains cellulose, by turning purple in its presence. It is corrosive and an environmental hazard.
References
Oxidizing mixtures | Schultze reagent | [
"Chemistry"
] | 110 | [
"Oxidizing mixtures",
"Inorganic compounds",
"Oxidizing agents",
"Inorganic compound stubs"
] |
36,635,165 | https://en.wikipedia.org/wiki/Physical%20acoustics | Physical acoustics is the area of acoustics and physics that studies interactions of acoustic waves with a gaseous, liquid or solid medium on macro- and micro-levels. This relates to the interaction of sound with thermal waves in crystals (phonons), with light (photons), with electrons in metals and semiconductors (acousto-electric phenomena), with magnetic excitations in ferromagnetic crystals (magnons), etc. Some recently developed experimental techniques include photo-acoustics, acoustic microscopy and acoustic emission. A long-standing interest is in acoustic and ultrasonic wave propagation and scattering in inhomogeneous materials, including composite materials and biological tissues.
There are two main classes of problems studied in physical acoustics. The first one concerns understanding how the physical properties of a medium (solid, liquid, or gas) influence the propagation of acoustic waves in this medium in order to use this knowledge for practical purposes. The second important class of problems studied in physical acoustics is to obtain the relevant information about a medium under consideration by measuring the properties of acoustic waves propagating through this medium.
See also
Acoustic attenuation
Acoustic levitation
Acoustic streaming
Acousto-electric effect
Acousto-optics
Elastic waves
Interdigital transducer
Longitudinal wave
Love wave
Nonlinear acoustics
Picosecond ultrasonics
Sonoluminescence
Rayleigh wave
Shear wave
Sound absorption
Sound velocity
Thermoacoustics
Acoustic radiation force
References
External links
Physical Acoustics Group, The Institute of Physics (IOP) and The Institute of Acoustics (IOA)
Anglo-French Physical Acoustics Conferences
NCPA - National Center for Physical Acoustics, The University of Mississippi
Acoustics | Physical acoustics | [
"Physics"
] | 347 | [
"Classical mechanics",
"Acoustics"
] |
36,638,816 | https://en.wikipedia.org/wiki/Armenian%20eternity%20sign | The Armenian eternity sign (⟨֎ ֍⟩, ) or Arevakhach (, "Sun Cross") is an ancient Armenian national symbol and a symbol of the national identity of the Armenian people. It is one of the most common symbols in Armenian architecture, carved on khachkars and on walls of churches.
Evolution and use
In medieval Armenian culture, the eternity sign symbolized the concept of everlasting, celestial life. From the 1st century BC, it appeared on Armenian steles; later it became part of khachkar symbolism. Around the 8th century the use of the Armenian symbol of eternity had become a long established national iconographical practice, and it has kept its meaning in modern times. Besides being one of the main components of khachkars, it can be found on church walls, tomb stones and other architectural monuments. Notable churches with the eternity sign include the Mashtots Hayrapet Church of Garni, Horomayr Monastery, Nor Varagavank, Tsitsernavank Monastery. An identical symbol appears in the reliefs of the Divriği Great Mosque and Hospital, and is likely a borrowing from earlier Armenian churches of the area. It can also be found on Armenian manuscripts.
The eternity sign is used on the logos of government agencies and on commemorative coins, as well as Armenian government agencies and non-government organizations and institutions in Armenia and the Armenian diaspora.
The symbol is also used by Armenian neopagan organizations and their followers. It is called by them "Arevakhach" (, "sun cross").
ArmSCII and Unicode
In ArmSCII, Armenian Standard Code for Information Interchange, an Armenian eternity sign has been encoded in 7-bit and 8-bit standard and ad hoc encodings since at least 1987. In 2010 the Armenian National Institute of Standards suggested encoding an Armenian Eternity sign in the Unicode character set, and both left-facing ⟨֎⟩ and right-facing ⟨֍⟩ Armenian eternity signs were included in Unicode version 7.0 when it was released in June 2014.
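The code points can be confirmed programmatically; the short Python lookup below simply reports whatever the installed Unicode character database contains for the two signs.

```python
import unicodedata

# The left-facing and right-facing Armenian eternity signs.
for ch in "֎֍":
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
```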
Gallery
Churches
Modern statues and sculptures
Logos
See also
Borjgali
Castro culture (Economy and Arts, Stonework, Metallurgy)
Hilarri (Basque steles)
Lists of national symbols
Petroglyph
Pictish stone
Picture stones of Gotland
Triskelion
References
External links
Hayastan All Armenian Fund. Telethon 2010 – Water is Life. "Water is Life indeed and as you can see in the design, the water turns into the Armenian eternity symbol as it flows out of the helping hands."
Downtown, North End. "Armenian Heritage Park to participate Saturday in World Labyrinth Day", Posted by Jeremy C. Fox April 29, 2013. – "A single jet of water and the symbol of eternity mark its center, representing hope and rebirth."
Armenian Engineers & Scientists of America. "The Armenian Engineers and Scientists of America (AESA) logo is an ancient symbol used in Armenian architecture and carvings. The symbol signifies Eternal Life – in Armenian Haverjoutian Nshan or Sign of Eternity."
Armenian Monuments Awareness Project
Ancient Armenian religion
Armenian mythology
Infinity
Mathematical symbols
National symbols of Armenia
Rotational symmetry
Time in Armenia | Armenian eternity sign | [
"Physics",
"Mathematics"
] | 652 | [
"Mathematical objects",
"Infinity",
"Symmetry",
"Rotational symmetry"
] |
36,639,312 | https://en.wikipedia.org/wiki/Deal.II | deal.II is a free, open-source library to solve partial differential equations using the finite element method. The current release is version 9.6, released in August 2024. The founding authors of the project — Wolfgang Bangerth, Ralf Hartmann, and Guido Kanschat — won the 2007 J. H. Wilkinson Prize for Numerical Software for deal.II. Today, it is a worldwide project managed by around a dozen "Principal Developers"; over the years several hundred people have contributed substantial pieces of code or documentation to the project.
Features
The library features
dimension independent programming using C++ templates on locally adapted meshes,
a large collection of different finite elements of any order: continuous and discontinuous Lagrange elements, Nedelec elements, Raviart-Thomas elements, and combinations,
parallelization using multithreading (through TBB) and massive parallelism using MPI. deal.II has been shown to scale to at least 16,000 processors and has been used in applications on up to 300,000 processor cores.
multigrid method with local smoothing on adaptively refined meshes
hp-FEM
extensive documentation and tutorial programs,
interfaces to several libraries including Gmsh, PETSc, Trilinos, METIS, SUNDIALS, VTK, p4est, BLAS, LAPACK, HDF5, NetCDF, and Open Cascade Technology.
History and Impact
The software started from work at the Numerical Methods Group at Heidelberg University in Germany in 1998. The first public release was version 3.0.0 in 2000. Since then deal.II has gotten contributions from several hundred authors and has been used in more than 2,400 research publications.
The primary maintainers, coordinating the worldwide development of the library, are today located at Colorado State University, Clemson University, Texas A&M University, Oak Ridge National Laboratory and a number of other institutions. It is developed as a worldwide community of contributors through GitHub that incorporates several hundred changes by dozens of authors every month.
See also
List of finite element software packages
List of numerical analysis software
References
External links
Source Code on Github
List of Scientific publications
Free computer libraries
Differential calculus
Finite element software for Linux
C++ numerical libraries
Software that uses VTK | Deal.II | [
"Mathematics"
] | 460 | [
"Differential calculus",
"Calculus"
] |
36,643,448 | https://en.wikipedia.org/wiki/Equidissection | In geometry, an equidissection is a partition of a polygon into triangles of equal area. The study of equidissections began in the late 1960s with Monsky's theorem, which states that a square cannot be equidissected into an odd number of triangles. In fact, most polygons cannot be equidissected at all.
Much of the literature is aimed at generalizing Monsky's theorem to broader classes of polygons. The general question is: Which polygons can be equidissected into how many pieces? Particular attention has been given to trapezoids, kites, regular polygons, centrally symmetric polygons, polyominos, and hypercubes.
Equidissections do not have many direct applications. They are considered interesting because the results are counterintuitive at first, and for a geometry problem with such a simple definition, the theory requires some surprisingly sophisticated algebraic tools. Many of the results rely upon extending p-adic valuations to the real numbers and extending Sperner's lemma to more general colored graphs.
Overview
Definitions
A dissection of a polygon P is a finite set of triangles that do not overlap and whose union is all of P. A dissection into n triangles is called an n-dissection, and it is classified as an even dissection or an odd dissection according to whether n is even or odd.
An equidissection is a dissection in which every triangle has the same area. For a polygon P, the set of all n for which an n-equidissection of P exists is called the spectrum of P and denoted S(P). A general theoretical goal is to compute the spectrum of a given polygon.
A dissection is called simplicial if the triangles meet only along common edges. Some authors restrict their attention to simplicial dissections, especially in the secondary literature, since they are easier to work with. For example, the usual statement of Sperner's lemma applies only to simplicial dissections. Often simplicial dissections are called triangulations, although the vertices of the triangles are not restricted to the vertices or edges of the polygon. Simplicial equidissections are therefore also called equal-area triangulations.
The terms can be extended to higher-dimensional polytopes: an equidissection is a set of simplexes having the same n-volume.
Preliminaries
It is easy to find an n-equidissection of a triangle for all n. As a result, if a polygon has an m-equidissection, then it also has an mn-equidissection for all n. In fact, often a polygon's spectrum consists precisely of the multiples of some number m; in this case, both the spectrum and the polygon are called principal and the spectrum is denoted ⟨m⟩. For example, the spectrum of a triangle is ⟨1⟩, the set of all positive integers. A simple example of a non-principal polygon is the quadrilateral with vertices (0, 0), (1, 0), (0, 1), (3/2, 3/2); its spectrum includes 2 and 3 but not 1.
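One explicit 2-equidissection of that quadrilateral, taking the vertices in convex (cyclic) order (0, 0), (1, 0), (3/2, 3/2), (0, 1) and cutting along the diagonal from (0, 0) to (3/2, 3/2), can be checked with the shoelace formula. The short sketch below is an illustration added here for concreteness, not a computation taken from the cited literature.

```python
from fractions import Fraction as F

def triangle_area(p, q, r):
    """Area of a triangle via the shoelace formula, using exact rationals."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

A, B, C, D = (F(0), F(0)), (F(1), F(0)), (F(3, 2), F(3, 2)), (F(0), F(1))

# Cut the quadrilateral A-B-C-D along the diagonal A-C.
t1 = triangle_area(A, B, C)
t2 = triangle_area(A, C, D)
print(t1, t2, t1 == t2)   # both 3/4, so 2 is in the spectrum
```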
Affine transformations of the plane are useful for studying equidissections, including translations, uniform and non-uniform scaling, reflections, rotations, shears, and other similarities and linear maps. Since an affine transformation preserves straight lines and ratios of areas, it sends equidissections to equidissections. This means that one is free to apply any affine transformation to a polygon that might give it a more manageable form. For example, it is common to choose coordinates such that three of the vertices of a polygon are (0, 1), (0, 0), and (1, 0).
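The key fact behind this is that an affine map T(x) = Ax + b rescales every area by the same factor, so ratios of areas, and in particular equality of areas, are preserved. In symbols (a standard fact recorded here for completeness):

$$\operatorname{area}\bigl(T(S)\bigr) = \lvert\det A\rvert \cdot \operatorname{area}(S) \quad\text{for every measurable region } S \subset \mathbb{R}^2,$$

so a dissection into triangles of equal area is mapped to a dissection into triangles of equal (rescaled) area.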
The fact that affine transformations preserve equidissections also means that certain results can be easily generalized. All results stated for a regular polygon also hold for affine-regular polygons; in particular, results concerning the unit square also apply to other parallelograms, including rectangles and rhombuses. All results stated for polygons with integer coordinates also apply to polygons with rational coordinates, or polygons whose vertices fall on any other lattice.
Best results
Monsky's theorem states that a square has no odd equidissections, so its spectrum is ⟨2⟩. More generally, it is known that centrally symmetric polygons and polyominos have no odd equidissections. A conjecture by Sherman K. Stein proposes that no special polygon has an odd equidissection, where a special polygon is one whose equivalence classes of parallel edges each sum to the zero vector. Squares, centrally symmetric polygons, polyominos, and polyhexes are all special polygons.
For n > 4, the spectrum of a regular n-gon is ⟨n⟩. For n > 1, the spectrum of an n-dimensional cube is ⟨n!⟩, where n! is the factorial of n, and the spectrum of an n-dimensional cross-polytope has likewise been determined; the latter follows mutatis mutandis from the proof for the octahedron.
Let T(a) be a trapezoid where a is the ratio of parallel side lengths. If a is a rational number, then T(a) is principal. In fact, if r/s is a fraction in lowest terms, then S(T(r/s)) = ⟨r + s⟩. More generally, all convex polygons with rational coordinates can be equidissected, although not all of them are principal; see the above example of a kite with a vertex at (3/2, 3/2).
At the other extreme, if a is a transcendental number, then T(a) has no equidissection. More generally, no polygon whose vertex coordinates are algebraically independent has an equidissection. This means that almost all polygons with more than three sides cannot be equidissected. Although most polygons cannot be cut into equal-area triangles, all polygons can be cut into equal-area quadrilaterals.
If a is an algebraic irrational number, then T(a) is a trickier case. If a is algebraic of degree 2 or 3 (quadratic or cubic), and its conjugates all have positive real parts, then S(T(a)) contains all sufficiently large n such that n/(1 + a) is an algebraic integer. It is conjectured that a similar condition involving stable polynomials may determine whether or not the spectrum is empty for algebraic numbers a of all degrees.
History
The idea of an equidissection seems like the kind of elementary geometric concept that should be quite old. As one account remarks of Monsky's theorem, "one could have guessed that surely the answer must have been known for a long time (if not to the Greeks)." But the study of equidissections did not begin until 1965, when Fred Richman was preparing a master's degree exam at New Mexico State University.
Monsky's theorem
Richman wanted to include a question on geometry in the exam, and he noticed that it was difficult to find (what is now called) an odd equidissection of a square. Richman proved to himself that it was impossible for 3 or 5, that the existence of an n-equidissection implies the existence of an (n + 2)-equidissection, and that certain quadrilaterals arbitrarily close to being squares have odd equidissections. However, he did not solve the general problem of odd equidissections of squares, and he left it off the exam.
"Everyone to whom the problem was put (myself included) said something like 'that is not my area but the question surely must have been considered and the answer is probably well known.' Some thought they had seen it, but could not remember where. I was interested because it reminded me of Sperner's Lemma in topology, which has a clever odd-even proof."
Thomas proved that an odd equidissection was impossible if the coordinates of the vertices are rational numbers with odd denominators. He submitted this proof to Mathematics Magazine, but it was put on hold:
"The referee's reaction was predictable. He thought the problem might be fairly easy (although he could not solve it) and was possibly well-known (although he could find no reference to it)."
The question was instead given as an Advanced Problem in the American Mathematical Monthly. When nobody else submitted a solution, the proof was published in Mathematics Magazine, three years after it was written. Monsky then built on Thomas' argument to prove that there are no odd equidissections of a square, without any rationality assumptions.
Monsky's proof relies on two pillars: a combinatorial result that generalizes Sperner's lemma and an algebraic result, the existence of a 2-adic valuation on the real numbers. A clever coloring of the plane then implies that in all dissections of the square, at least one triangle has an area with what amounts to an even denominator, and therefore all equidissections must be even. The essence of the argument is found already in Thomas' proof, but Monsky was the first to use a 2-adic valuation to cover dissections with arbitrary coordinates.
Generalizations
The first generalization of Monsky's theorem established that the spectrum of an n-dimensional cube is ⟨n!⟩; the proof has since been revisited by later authors.
Generalization to regular polygons arrived in 1985, during a geometry seminar run by G. D. Chakerian at UC Davis. Elaine Kasimatis, a graduate student, "was looking for some algebraic topic she could slip into" the seminar. Sherman Stein suggested dissections of the square and the cube: "a topic that Chakerian grudgingly admitted was geometric." After her talk, Stein asked about regular pentagons. Kasimatis answered, proving that for n ≥ 5, the spectrum of a regular n-gon is ⟨n⟩. Her proof builds on Monsky's proof, extending the p-adic valuation to the complex numbers for each prime divisor of n and applying some elementary results from the theory of cyclotomic fields. It is also the first proof to explicitly use an affine transformation to set up a convenient coordinate system. Kasimatis and Stein then framed the problem of finding the spectrum of a general polygon, introducing the terms spectrum and principal. They proved that almost all polygons lack equidissections, and that not all polygons are principal.
The spectra of two particular generalizations of squares, trapezoids and kites, were studied next; trapezoids, kites, and general quadrilaterals have each been examined further in subsequent work. Several papers have been authored at Hebei Normal University, chiefly by Professor Ding Ren and his students Du Yatao and Su Zhanjun.
Attempting to generalize the results for regular n-gons for even n, Stein conjectured that no centrally symmetric polygon has an odd equidissection, and he proved the n = 6 and n = 8 cases. The full conjecture was later proved by Monsky. A decade later, Stein made what he describes as "a surprising breakthrough", conjecturing that no polyomino has an odd equidissection. He proved the result for polyominoes with an odd number of squares; the full conjecture was proved when the even case was subsequently settled.
The topic of equidissections has recently been popularized by treatments in The Mathematical Intelligencer, a volume of the Carus Mathematical Monographs, and the fourth edition of Proofs from THE BOOK.
Related problems
A variation of the problem considers partial coverings: Given a convex polygon K, how much of its area can be covered by n non-overlapping triangles of equal area inside K? The ratio of the area of the best possible coverage to the area of K is denoted tn(K). If K has an n-equidissection, then tn(K) = 1; otherwise it is less than 1. It has been shown that for a quadrilateral K, tn(K) ≥ 4n/(4n + 1), with t2(K) = 8/9 if and only if K is affinely congruent to the trapezoid T(2/3). For a pentagon, t2(K) ≥ 2/3, t3(K) ≥ 3/4, and tn(K) ≥ 2n/(2n + 1) for n ≥ 5.
Günter M. Ziegler asked the converse problem in 2003: Given a dissection of the whole of a polygon into n triangles, how close can the triangle areas be to equal? In particular, what is the smallest possible difference between the areas of the smallest and largest triangles? Let the smallest difference be M(n) for a square and M(a, n) for the trapezoid T(a). Then M(n) is 0 for even n and greater than 0 for odd n. An asymptotic upper bound M(n) = O(1/n²) was given (see Big O notation), and a better dissection later improved the bound to M(n) = O(1/n³); there also exist values of a for which M(a, n) decreases arbitrarily quickly. A superpolynomial upper bound, derived from an explicit construction that uses the Thue–Morse sequence, has since been obtained.
References
Bibliography
Secondary sources
Primary sources
Reprinted as
External links
Sperner’s Lemma, Brouwer’s Fixed-Point Theorem, And The Subdivision Of Squares Into Triangles - Notes by Akhil Mathew
Über die Zerlegung eines Quadrats in Dreiecke gleicher Fläche - Notes by Moritz W. Schmitt (German language)
Tiling Polygons by Triangles of Equal Area - Notes by AlexGhitza
Discrete geometry
Affine geometry
Geometric dissection | Equidissection | [
"Mathematics"
] | 2,987 | [
"Discrete geometry",
"Discrete mathematics"
] |
36,643,525 | https://en.wikipedia.org/wiki/Canadian%20Synchrotron%20Radiation%20Facility | The Canadian Synchrotron Radiation Facility (CSRF) () was Canada's national synchrotron facility from 1983 to 2005. Eventually consisting of three beamlines at the Synchrotron Radiation Center at the University of Wisconsin–Madison, US, it served the Canadian synchrotron community until the opening of the Canadian Light Source in Saskatoon, Saskatchewan, finally ceasing operations in 2008.
Beginnings
In 1972 Mike Bancroft, a chemistry professor at the University of Western Ontario (UWO) took part in a workshop organised by Bill McGowan on the uses of synchrotron radiation. At the time there were no synchrotron users in Canada, but as a result of contact established with then-director Ed Rowe at the meeting, he began work at the Synchrotron Radiation Center (SRC) in Madison, Wisconsin, in 1975.
After several failed attempts were made to establish a synchrotron facility in Canada, Bancroft submitted a proposal to the National Research Council (NRC) to build a Canadian beamline on the existing Tantalus synchrotron at SRC. Rowe had offered Bancroft 100% use of the beamline at no charge in perpetuity – Bancroft recalled that Rowe "had a soft spot for Canadians, he had some relatives from Canada, so he was extremely helpful". In 1978 the newly created NSERC awarded capital funding. This was not sufficient, and further funding was obtained from the UWO Academic Development Fund and NSERC the following year to complete two endstations. Bancroft would later say "We hoped to get more beamlines so we called it the Canadian Synchrotron Radiation Facility (CSRF)". Bancroft was appointed Scientific Director, with Norman Sherman of NRC – which was to own and manage the facility – as manager. Operating money was initially provided by UWO, and Kim Tan was hired as the CSRF operations manager, to be based in Madison.
1978–1988: Grasshopper beamline
A Grasshopper-type monochromator – so-called as its mechanical drive arm resembled a grasshopper's hind legs – was ordered from Baker Engineering. This type of monochromator had been specifically designed for use with synchrotron radiation, and had proven easy to use, rugged and dependable at the existing SRC ring, Tantalus, and at the Stanford Synchrotron Radiation Laboratory. The beamline was installed within a year, and by late 1981 initial results showed the performance to be state of the art over the 50–500eV photon energy range.
Notable early work included X-ray microscopy on biological samples, and gas-phase spectroscopy with a very influential series of papers on noble gases. In the mid-1980s the number of publications steadily increased, as did operating funding through NRC and NSERC.
SRC was building a new synchrotron, Aladdin, and again Rowe offered CSRF 100% use of their beamline at no charge in perpetuity on the new machine. Aladdin was seriously delayed, to the point where its funding was cut and its future seemed highly uncertain.
With the new machine's performance improving, the decision was made to transfer the beamline to Aladdin in January 1986, some months before Aladdin's funding was restored. Bancroft later commented: "We were I think, the first beamline to transfer over, maybe we took a little bit of a risk because Aladdin's performance wasn't completely confirmed".
On Aladdin, with the higher X-ray intensity, new areas of science were opened up and the number of users increased mostly focused on X-ray absorption and photoemission spectroscopy of gases and solids. A photoemission spectrometer was donated by Ron Cavell of the University of Alberta and modified for high resolution gas-phase work.
1988–1998: DCM beamline
The X-ray intensity from Aladdin was much higher than on Tantalus, especially in the photon energy range up to 4000eV. These higher energies were potentially available using higher energy monochromators than the Grasshopper. In 1987, with Bancroft now chair of the Chemistry department at UWO, he planned a new beamline to cover the 1500–4000 eV energy range. A successful application was made to the recently formed Ontario Centre for Materials Research, and T.K. Sham was hired away from Brookhaven National Laboratory to design the beamline. A double crystal monochromator (DCM) was selected, to be built by the Madison Physical Sciences Laboratory, using a cylindrical mirror with a novel bending mechanism to focus the X-ray beam after the monochromator. B.X. Yang, also from Brookhaven, was hired in 1988 to construct the beamline. The beamline was built in less than 18 months, and was officially opened in 1990.
The CSRF DCM beamline was regarded as particularly notable by SRC as it was the only beamline at the facility to reach energies higher than 1500 eV.
With two beamlines, use by the Canadian community increased, with more than 40 scientists from 10 Canadian institutions using the facility from 1990 to 1992. Funding was now stable and adequate, with no charges to users.
1998–2008: SGM beamline
The energy range from 300 to 1500 eV was still unavailable at CSRF, so in 1992 Bancroft applied to NSERC for a third beamline. Funding was obtained in 1994 and Brian Yates, who had been Bancroft's first synchrotron PhD student, was hired to construct the beamline. The design chosen was a so-called Dragon-type spherical grating monochromator, with a single grating covering the range 240–700 eV, designed and manufactured by MacPherson Inc. The beamline was somewhat delayed, but was operational for users in 1998. Adam Hitchcock of McMaster University donated a photoemission spectrometer for coincidence measurements.
For the last 10 years of its existence CSRF was managed by Walter Davidson of NRC, with T.K. Sham (UWO) as Scientific Director. In 2004 the SGM beamline was decommissioned and taken to Canada for use on the new Canadian facility, while the remaining two beamlines, 30 and 15 years old, were still working well in 2007.
The Canadian Light Source and the end of CSRF
Following a prolonged campaign by the Canadian synchrotron user community, the decision was made in 1999 to build a Canadian synchrotron in Saskatoon, Saskatchewan: the Canadian Light Source (CLS). A proposal was made by the CSRF user community to take all three CSRF beamlines to the CLS and install them on the newer synchrotron. However the CLS' Facility Advisory Committee recommended that only the SGM beamline be re-used, with newer replacements constructed for the other two beamlines. In the event, only the monochromator and exit slit mechanism of the SGM beamline were taken to Canada and re-used, with some modifications, in the beamline of the same name at the CLS. The Grasshopper monochromator was also taken to the CLS, where it is now a museum piece, while the DCM beamline was left at SRC where it continues in use. At the CLS the VLS-PGM and SXRMB beamlines, respectively were built to replace those two beamlines.
CSRF formally ceased operations on March 31, 2008. Several ex-CSRF personnel, including Kim Tan, moved to the CLS, and the Saskatoon laboratory employed many former CSRF users. Emil Hallin, then of the Saskatchewan Accelerator Laboratory which designed the CLS, now its Director of Strategic Scientific Development, got his first experience of synchrotron beamlines at CSRF.
References
National Research Council (Canada)
Synchrotron radiation facilities
University of Wisconsin–Madison
Canadian federal government buildings | Canadian Synchrotron Radiation Facility | [
"Materials_science"
] | 1,614 | [
"Materials testing",
"Synchrotron radiation facilities"
] |
60,100,177 | https://en.wikipedia.org/wiki/Chloropolymer | Chloropolymers are macromolecules synthesized from alkenes in which one or more hydrogens of the polymer were replaced by chlorine. A common example of a chloropolymer is polyvinyl chloride (PVC) and poly(dichlorophosphazene) which has a polymer formula of (PNCl2)n, the precursor of which is hexachlorophosphazene, which itself has been called chloropolymer.
References
Polymer chemistry | Chloropolymer | [
"Chemistry",
"Materials_science",
"Engineering"
] | 116 | [
"Polymers",
"Materials science",
"Polymer chemistry"
] |
60,102,094 | https://en.wikipedia.org/wiki/BLESS | BLESS, also known as breaks labeling, enrichment on streptavidin and next-generation sequencing, is a method used to detect genome-wide double-strand DNA damage. In contrast to chromatin immunoprecipitation (ChIP)-based methods of identifying DNA double-strand breaks (DSBs) by labeling DNA repair proteins, BLESS utilizes biotinylated DNA linkers to directly label genomic DNA in situ which allows for high-specificity enrichment of samples on streptavidin beads and the subsequent sequencing-based DSB mapping to nucleotide resolution.
Workflow
Biotinylated linker design
The biotinylated linker is designed to form a hairpin structure that specifically labels DSBs and not single-strand DNA breaks. The linker has a blunt, ligatable end with a known barcode sequence that labels the site of ligation as well as an XhoI restriction enzyme recognition site adjacent to the barcode. The hairpin loop of the linker is covalently bound to a biotin molecule, allowing for subsequent enrichment of labeled DNA with streptavidin beads.
Use of biotin labels allows for specific binding without disruption of DNA due to the small size of the marker. Because biotin also has high affinity to streptavidin, further highly specific purification can be performed on streptavidin beads.
Nuclei purification and in situ labeling
Following the induction of DSBs, cells are fixed with formaldehyde, lysed, and treated with proteinases to purify intact nuclei. The initial fixation step stabilizes chromatin and prevents the formation of additional DSBs during sample preparation. DSBs are then blunted and incubated with biotinylated linkers in the presence of T4 DNA ligase. T4 ligase does not recognize single-stranded breaks, and as such directly labels the DSB sites through covalent attachment of the biotinylated linker.
DNA extraction, fragmentation, and purification
Labeled genomic DNA is extracted from nuclei and fragmented by HaeIII restriction enzyme digestion and sonication. Labeled DNA fragments are then purified using beads derived from streptavidin, a biotin-binding protein found in the bacterium Streptomyces avidinii. Because the interaction of streptavidin and biotin is strong and highly specific, purification of sample on streptavidin-coated beads allows for robust enrichment of labeled DNA fragments.
Distal linker DNA labeling and digestion
A second labeling step occurs after fragmentation and biotin-streptavidin affinity purification to attach primer binding sites to the free end of the captured DNA. Similar to the first labeling step, T4 DNA ligase is used to attach a distal linker to the unlabeled end of the DNA. The distal linker also has an XhoI restriction enzyme recognition site but is not covalently bound to a biotin molecule. Once the distal linker is attached, the captured DNA fragments are digested using I-SceI endonucleases that cut both the biotinylated linkers and the distal linkers to release the DNA fragments.
PCR amplification and sequencing
The digested DNA strands are amplified using PCR with primers complementary to barcode sequences in the biotinylated linker and the distal linker. The amplified DNA is further processed by digesting with XhoI restriction enzymes to remove the I-SceI ends and purified prior to sequencing. Although use of next-generation sequencing methods is recommended for BLESS analysis, Sanger sequencing has also been shown to generate successful, albeit less robust results.
Computational analysis
The BLESS sequencing reads can be analyzed using the Instant Sequencing (iSeq) software suite. To detect sites of DSBs, reads are aligned to a reference genome using bowtie to determine the chromosome positions. The genome is divided into intervals and hypergeometric tests are used to identify intervals enriched with mapped reads. DSBs are identified by comparing enrichment in treated samples versus a control. A statistically significant increase in a DNA damage-induced sample suggests that the DNA at this interval is fragile and enriched in DSBs.
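The enrichment test for a single interval can be illustrated with SciPy's hypergeometric distribution. All read counts below are invented for the example, and this one-sided test is only a schematic stand-in for the statistics implemented in the iSeq pipeline.

```python
from scipy.stats import hypergeom

# Toy read counts for one genomic interval (numbers are invented):
treated_total  = 2_000_000   # mapped reads in the DSB-induced sample
control_total  = 2_000_000   # mapped reads in the control sample
treated_in_bin = 260         # treated reads falling in this interval
control_in_bin = 110         # control reads falling in this interval

# Hypergeometric model: of all reads landing in this interval, are
# surprisingly many of them drawn from the treated sample?
population = treated_total + control_total    # total reads
successes  = treated_in_bin + control_in_bin  # interval reads overall
draws      = treated_total                    # reads from the treated sample
observed   = treated_in_bin                   # interval reads in the treated sample

p_value = hypergeom.sf(observed - 1, population, successes, draws)  # P(X >= observed)
print(f"enrichment p-value for this interval: {p_value:.3e}")
```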
Advantages
Use of biotinylated DNA linkers designed to specifically recognize double-stranded DNA breaks allows for a less biased, more direct survey of the breakome without the need to rely on native and/or DSB-proxy proteins, such as the phosphorylated histone variant H2A.X (γH2A.X), in the cell. Because of this, BLESS can be utilized in a variety of cells from different organisms.
For the same reason, BLESS is also sensitive to multiple sources of double-stranded breaks, such as chemical and physical DNA disruption, replication fork stalling, as well as presence of telomere ends. This makes BLESS suitable for analysis of cells at various conditions.
Labeling of DSBs occurs in situ, reducing the risk of false positives form detection of DNA breaks due to mechanical shearing and chemical sample treatment.
Limitations
Due to specificity of the linker design, biotinylated markers can only label double-stranded DNA breaks at blunt, not cohesive ends, leading to less efficient ligation.
Compared to newer breakome survey methods, such as BLISS, BLESS requires large amounts of cellular starting material for successful analysis, resulting in tedious and time-consuming sample preparation and processing. To process 24 samples, the BLESS protocol requires 60 work-hours over the course of 15 days whereas BLISS requires 12 work-hours over 5 days.
Because cells require chemical fixation prior to DNA extraction, BLESS is prone to high background noise from fixation artifacts. However, stringent custom optimization has been shown to reduce this issue.
Due to the lack of PCR controls, BLESS is not a fully quantitative method and is prone to amplification bias, resulting in poor scalability.
BLESS is only suitable for detecting double-stranded breaks at one specific time in the genome, as compared to continuous analysis.
Alternative methods
Breaks labeling in situ and sequencing (BLISS): In BLISS, cells or tissue sections are attached to a cover glass first before DSB labeling. This allows some centrifugation steps to be omitted, thus decreasing the number of artificial DSBs introduced from sample preparation, and reducing sample loss. Importantly, it allows a much smaller amount of starting material to be used compared to BLESS. Another improvement is the use of in vitro transcription to generate and amplify RNA sequences for library preparation. BLISS uses T7 bacteriophage-mediated transcription rather than PCR, reducing errors caused by PCR amplification bias that occur with BLESS.
Immobilized-BLESS (i-BLESS): A limitation of the original BLESS method is that it is problematic in application to smaller cells such as yeast cells. While low centrifugation speeds employed during nuclei isolation are not efficient enough for small cells, increasing centrifugation speeds can shear the genomic DNA. However, in i-BLESS, cells are immobilized in agarose beads prior to DSB labeling. This allows the use of higher centrifugation speeds without artificial DNA shearing. The remainder of the DSB labeling procedure follows that of the BLESS method, and labeled DNA fragments are recovered from the agarose beads prior to the streptavidin capture step. The i-BLESS method is not limited to yeast and can theoretically be applied to all cells.
DSBCapture: Similar to BLESS, DSBCapture uses biotinylated adapters to label DSBs in situ and streptavidin beads to isolate labeled DNA fragments for amplification and sequencing. While labeling in BLESS relies on blunt-end ligation, DSBCapture uses more efficient cohesive-end ligation to attach biotinylated modified Illumina adapters. In addition, DSBCapture relies on fewer PCR steps compared to BLESS, reducing amplification bias. This method also generates libraries with higher sequence diversity than BLESS, eliminating the need to spike in other libraries to improve diversity prior to sequencing. Furthermore, DSBCapture uses single-end sequencing in contrast to BLESS where sequencing can begin from both ends. Single-end sequencing results reflect only the sequences of DSB sites, improving data yield.
GUIDE-Seq: Also known as Genome-Wide Unbiased Identification of DSBs Enabled by Sequencing, GUIDE-Seq uses the incorporation of double-stranded oligodeoxynucleotide (dsODN) sequences to label sites of DSBs in living cells. It allows DSBs to be labeled over an extended time period, and the sites of DNA damage identified through GUIDE-Seq reflect accumulated DSBs. In contrast, BLESS only labels and detects transient DSBs that exist when the cells were fixed.
Applications
While double-stranded breaks in the DNA can be caused by various sources of disruption, they are often observed at high frequency during apoptosis and can contribute to genome instability, resulting in oncogenic mutations. For this reason, high-resolution, specific DSB-mapping methods like BLESS are useful for breakome surveys.
DSBs can be artificially induced using genome editing technologies such as CRISPR-Cas9 or TALEN. These technologies may lead to unintentional modifications of DNA at off-target locations on the genome. Since BLESS can identify the nucleotide position of DSBs, it can be used to determine if off-target genome editing has occurred and the location of DSBs unintentionally introduced by these nuclease systems.
References
External links
BLESS Supporting Website
Molecular genetics
DNA repair | BLESS | [
"Chemistry",
"Biology"
] | 1,973 | [
"Molecular genetics",
"Cellular processes",
"DNA repair",
"Molecular biology"
] |
57,152,266 | https://en.wikipedia.org/wiki/Neoclassical%20transport | In plasma physics and magnetic confinement fusion, neoclassical transport or neoclassical diffusion is a theoretical description of collisional transport in toroidal plasmas, usually found in tokamaks or stellarators. It is a modification of classical diffusion adding in effects of non-uniform magnetic fields due to the toroidal geometry, which give rise to new diffusion effects.
Description
Classical transport models a plasma in a magnetic field as a large number of particles traveling in helical paths around a line of force. In typical reactor designs, the lines are roughly parallel, so particles orbiting adjacent lines may collide and scatter. This results in a random walk process which eventually leads to the particles finding themselves outside the magnetic field.
Neoclassical transport adds the effects of the geometry of the fields. In particular, it considers the field inside the tokamak and similar toroidal arrangements, where the field is stronger on the inside curve than the outside simply due to the magnets being closer together in that area. To even out these forces, the field as a whole is twisted into a helix, so that the particles alternately move from the inside to the outside of the reactor.
In this case, as the particle transits from the outside to the inside, it sees an increasing magnetic force. If the particle energy is low, this increasing field may cause the particle to reverse directions, as in a magnetic mirror. The particle now travels in the reverse direction through the reactor, to the outside limit, and then back towards the inside where the same reflection process occurs. This leads to a population of particles bouncing back and forth between two points, tracing out a path that looks like a banana from above, the so-called banana orbits.
Since any particle in the long tail of the Maxwell–Boltzmann distribution is subject to this effect, there is always some natural population of such banana particles. Since these travel in the reverse direction for half of their orbit, their drift behavior is oscillatory in space. Therefore, when the particles collide, their average step size (width of the banana) is much larger than their gyroradius, leading to neoclassical diffusion across the magnetic field.
Trapped particles and banana orbits
A consequence of the toroidal geometry for the guiding-center orbits is that some particles can be reflected on the trajectory from the outboard side to the inboard side due to the presence of magnetic field gradients, similar to a magnetic mirror. The reflected particles cannot complete a full turn in the poloidal plane; they are trapped and follow the banana orbits.
This can be demonstrated by considering tokamak equilibria at low beta and large aspect ratio, which have nearly circular cross sections, so that polar coordinates centered at the magnetic axis can be used, with surfaces of constant minor radius approximately describing the flux surfaces. The magnitude of the total magnetic field can be approximated by the following expression, where the subscript indicates the value at the magnetic axis, and the remaining symbols denote the major radius, the inverse aspect ratio, and the magnetic field. The parallel component of the drift-ordered guiding-center orbits in this magnetic field, assuming no electric field, is given by:
where the symbols denote the particle mass, the velocity, and the magnetic moment (first adiabatic invariant). The subscripts indicate components parallel or perpendicular to the magnetic field. The effective potential reflects the conservation of the kinetic energy.
The parallel trajectory experiences a mirror force, and a particle moving into a magnetic field of increasing magnitude can be reflected by this force. If a magnetic field has a minimum along a field line, the particles in this region of weaker field can be trapped. This is indeed true for the form of the magnetic field used here. The particles are reflected (trapped particles) if their magnetic moment is sufficiently large, or complete their poloidal turn (passing particles) otherwise.
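The displayed formulas in this passage did not survive extraction; as a point of reference, a standard large-aspect-ratio form consistent with the definitions above (value at the magnetic axis B0, inverse aspect ratio ε, poloidal angle θ, magnetic moment μ) is sketched below. This is the textbook expression, not necessarily the exact one used in the original derivation.

```latex
% Standard circular, large-aspect-ratio approximation (textbook form):
B(\theta) \;\approx\; B_0\,\bigl(1 - \epsilon\cos\theta\bigr), \qquad \epsilon = r/R_0 \ll 1 .
% Parallel guiding-center dynamics, with E the kinetic energy and \mu the magnetic moment:
\tfrac{1}{2}\, m\, v_\parallel^{2} \;=\; E - \mu B(\theta) \;\equiv\; E - V_{\mathrm{eff}}(\theta) .
% A particle observed at the outboard midplane (\theta = 0) is trapped if
\frac{v_\parallel^{2}}{v^{2}}\bigg|_{\theta=0} \;<\; \frac{2\epsilon}{1+\epsilon},
% and passing otherwise.
```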
To see this in detail, the maximum and minimum of the effective potential can be identified. The passing particles are those energetic enough to pass over the maximum of the effective potential, while the trapped particles are not. Recognising this and defining a constant of motion, we have
Passing:
Trapped:
Orbit width
The orbit width can be estimated by considering the variation of the radial coordinate over an orbit period. Using the conservation laws above, the orbit widths can then be estimated, which gives
Passing width:
Banana width:
The bounce angle, at which the parallel velocity becomes zero for the trapped particles, is
Bounce time
The bounce time is the time required for a particle to complete its poloidal orbit. It is calculated by integrating over the poloidal angle along the orbit. The integral can be rewritten in a form that can be evaluated using the complete elliptic integral of the first kind. The bounce time for passing particles is obtained by integrating over a full poloidal turn, whereas the bounce time for trapped particles is evaluated by integrating between the bounce angles. The limiting cases are listed below (a short numerical illustration follows the list):
Super passing:
Super trapped:
Barely trapped:
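Because the displayed bounce-time formulas are not reproduced above, the minimal Python sketch below only illustrates the behaviour of the complete elliptic integral of the first kind K that controls the trapped-particle bounce time; the overall prefactor (which depends on the particle energy, safety factor, major radius, and ε as defined in the text) is deliberately omitted.

```python
# Illustrative sketch: behaviour of the complete elliptic integral K that
# controls the trapped-particle bounce time.  The physical prefactor is
# omitted, since the article's displayed formulas are not reproduced here.
import numpy as np
from scipy.special import ellipk  # K(m) with parameter m = k**2

for k in [0.0, 0.5, 0.9, 0.99, 0.999]:
    print(f"k = {k:5.3f}   K(k) = {ellipk(k**2):8.4f}")

# k -> 0 (deeply / "super" trapped particles): K -> pi/2, so the bounce time stays finite.
# k -> 1 (barely trapped, near the trapped/passing boundary): K diverges
#         logarithmically, so the bounce time becomes arbitrarily long.
print("K(0) =", ellipk(0.0), " (= pi/2 =", np.pi / 2, ")")
```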
Neoclassical transport regimes
Banana regime
Pfirsch-Schlüter regime
Plateau regime
See also
Wendelstein 7-X
References
Fusion power
Transport phenomena
Diffusion
Tokamaks | Neoclassical transport | [
"Physics",
"Chemistry",
"Engineering"
] | 995 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion",
"Plasma physics",
"Chemical engineering",
"Fusion power",
"Nuclear fusion"
] |
57,159,396 | https://en.wikipedia.org/wiki/Teck%20cable | Teck cable is a type of low voltage armoured cable named for the location where it was first developed and used, Teck Township, now known as Kirkland Lake, Ontario. The mining operations such as those conducted by Teck-Hughes Gold Mining Ltd. required a durable cable to power equipment and withstand the demanding conditions, and teck cable was the engineered result.
In Canada, teck cable is defined by CSA standard C22.2 No. 131 and carries the type designation TECK 90, the 90 referring to the maximum conductor temperature in degrees Celsius at which the cable may be used, in an ambient environment of at most 30 °C, without de-rating its ampacity.
Uses
Teck cable is a versatile power cable: it may be used where subject to limited mechanical damage, and it is resistant to water, petrochemicals, and sunlight. It may be used for direct-earth burial and, if properly sealed at the connection point, in explosive atmospheres such as gasoline dispensing stations.
When used in an industrial environment, it is usually contained in a cable tray with many other cables powering the motors that run the industrial processes.
In the commercial environment, it may also be found in a tray, but is often fastened to a wall, truss, or other part of the building structure. It may be used to power any equipment such as compressors, heaters, or commercial dishwashers, and is also used for power distribution within buildings.
Teck cable may be used to power hot tubs, garages or other outbuildings, and sub-panels in the residential setting.
Construction
Teck cable is composed of copper conductors sheathed in cross-linked polyethylene, bundled by plasticized PVC, wrapped in interlocking aluminum armour, all sealed by another coat of PVC.
See also
Modern mining
References
Electrical wiring
Power cables | Teck cable | [
"Physics",
"Engineering"
] | 381 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
57,159,859 | https://en.wikipedia.org/wiki/Fracton%20%28subdimensional%20particle%29 | A fracton is an emergent topological quasiparticle excitation which is immobile when in isolation. Many theoretical systems have been proposed in which fractons exist as elementary excitations. Such systems are known as fracton models. Fractons have been identified in various CSS codes as well as in symmetric tensor gauge theories.
Gapped fracton models often feature a topological ground state degeneracy that grows exponentially and sub-extensively with system size. Among the gapped phases of fracton models, there is a non-rigorous phenomenological classification into "type I" and "type II". Type I fracton models generally have fracton excitations that are completely immobile, as well as other excitations, including bound states, with restricted mobility. Type II fracton models generally have fracton excitations and no mobile particles of any form. Furthermore, isolated fracton particles in type II models are associated with nonlocal operators with intricate fractal structure.
Models
Type I
The paradigmatic example of a type I fracton model is the X-cube model. Other examples of type I fracton models include the semionic X-cube model, the checkerboard model, the Majorana checkerboard model, the stacked Kagome X-cube model, the hyperkagome X-cube model, and more.
X-cube model
The X-cube model is constructed on a cubic lattice, with qubits on each edge of the lattice.
The Hamiltonian is given by
Here, the sums run over cubic unit cells and over vertices. For any cubic unit cell , the operator is equal to the product of the Pauli operator on all 12 edges of that unit cube. For any vertex of the lattice , operator is equal to the product of the Pauli operator on all four edges adjacent to vertex and perpendicular to the axis. Other notation conventions in the literature may interchange and .
In addition to obeying an overall symmetry defined by global symmetry generators and where the product runs over all edges in the lattice, this Hamiltonian obeys subsystem symmetries acting on individual planes.
All of the terms in this Hamiltonian commute and belong to the Pauli algebra. This makes the Hamiltonian exactly solvable. One can simultaneously diagonalise all the terms in the Hamiltonian, and the simultaneous eigenstates are the Hamiltonian's energy eigenstates. A ground state of this Hamiltonian is a state that satisfies and for all . One can explicitly write down a ground state using projection operators and .
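As a concrete illustration of why all the terms commute, the following Python sketch (using an edge-indexing convention of my own, not taken from the article, and taking the cube term to be a product of Pauli-X and the vertex term a product of Pauli-Z; the opposite convention works identically) checks that every cube operator and every vertex operator share an even number of edges, which is exactly the condition for a product of Pauli-X operators to commute with a product of Pauli-Z operators.

```python
# Minimal sketch: verify that X-cube stabilizers commute by checking that each
# cube operator and each vertex operator overlap on an even number of edges.
# An edge is labelled (x, y, z, d): (x, y, z) is its lower endpoint, d its direction.
from itertools import product

def cube_edges(c):
    """The 12 edges of the unit cube whose lowest corner is c."""
    cx, cy, cz = c
    edges = set()
    for d in range(3):                          # direction of the edge
        perp = [i for i in range(3) if i != d]
        for a, b in product((0, 1), repeat=2):
            corner = [cx, cy, cz]
            corner[perp[0]] += a
            corner[perp[1]] += b
            edges.add((*corner, d))
    return edges

def vertex_edges(v, axis):
    """The 4 edges adjacent to vertex v and perpendicular to `axis`."""
    edges = set()
    for d in range(3):
        if d == axis:
            continue
        lower = list(v)
        edges.add((*lower, d))                  # edge leaving v in the +d direction
        lower[d] -= 1
        edges.add((*lower, d))                  # edge entering v from the -d direction
    return edges

# Every overlap has even size, so the Pauli-X (cube) and Pauli-Z (vertex) terms commute.
for c in product(range(2), repeat=3):
    for v in product(range(3), repeat=3):
        for axis in range(3):
            overlap = len(cube_edges(c) & vertex_edges(v, axis))
            assert overlap % 2 == 0, (c, v, axis, overlap)
print("all cube/vertex stabilizer overlaps are even -> all terms commute")
```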
The constraints posed by and are not all linearly independent when the X cube model is embedded on a compact manifold. This leads to a large ground state degeneracy that increases with system size. On a torus with dimensions , the ground state degeneracy is exactly . A similar degeneracy scaling, , is seen on other manifolds as well as in the thermodynamic limit.
Restricted mobility excitations
The X cube model hosts two types of elementary excitations, the fracton and lineon (also known as the one-dimensional particle).
If a quantum state is such that the eigenvalue of for some unit cube , then we say that, in this quantum state, there is a fracton located at the position . For example, if is a ground state of the Hamiltonian, then for any edge , the state features four fractons, one each on the cubes adjacent to .
Given a rectangle in a plane, one can define a "membrane" operator as where the product runs over all edges perpendicular to the rectangle that pass through this rectangle. Then the state features four fractons each located at the cubes next to the corners of the rectangle. Thus, an isolated fracton can appear in the limit of taking the length and width of the rectangle to infinity. The fact that a nonlocal membrane operator acts on the ground state to produce an isolated fracton is analogous to how, in smaller dimensional systems, nonlocal string operators can produce isolated flux particles and domain walls.
This construction shows that an isolated fracton cannot be mobile in any direction. In other words, there is no local operator that can be acted on an isolated fracton to move it to a different location. In order to move an individual isolated fracton, one would need to apply a highly nonlocal operator to move the entire membrane associated with it.
If a quantum state is such that the eigenvalue of for some vertex , then we say that, in this quantum state, there is a lineon located at the position that is mobile in the direction. A similar definition holds for lineons that are mobile in the direction and lineons that are mobile in the direction. In order to create an isolated lineon at a vertex , one must act on the ground state with a long string of Pauli operators acting on all the edges along the axis that are below the lineon. Lineon excitations are mobile in one direction only; the Pauli operator can act on lineons to translate them along that direction.
Three lineons, one mobile along each of the three axes, can fuse into the vacuum if the lines on which they move concur. That is, there is a sequence of local operators that can make this fusion happen. The opposite process can also happen. For a similar reason, an isolated lineon can change its direction of motion from one axis to another, creating a new lineon moving along the third direction in the process. The new lineon is created at the point in space where the original lineon changes direction.
It is also possible to make bound states of these elementary excitations that have higher mobility. For example, consider the bound state of two fractons with the same and coordinates separated by a finite distance along the axis. This bound state, called a planeon, is mobile in all directions in the plane. One can construct a membrane operator with width in the axis and arbitrary length in either the or direction that can act on the planeon state to move it within the plane.
Interferometry
It is possible to remotely detect the presence of an isolated elementary excitation in a region by moving the opposite type of elementary excitation around it. Here, as usual, "moving" refers to the repeated action of local unitary operators that translate the particles. This process is known as interferometry. It can be considered analogous to the idea of braiding anyons in two dimensions.
For example, suppose a lineon (either an lineon or a lineon) is located in the plane, and there is also a planeon that can move in the plane. Then we can move the planeon in a full rotation that happens to encompass the position of the lineon. Such a planeon movement would be implemented by a membrane operator. If this membrane operator intersects with the Pauli- string operator attached to the lineon exactly one time, then at the end of the rotation of the planeon the wave function will pick up a factor of , which indicates the presence of the lineon.
Coupled layer construction
It is possible to construct the X cube model by taking three stacks of toric code sheets, one along each of the three axes, superimposing them, and adding couplings to the edges where they intersect. This construction explains some of the connections that can be seen between the toric code topological order and the X cube model. For example, each additional toric code sheet can be understood to contribute a topological degeneracy of 4 to the overall ground state degeneracy of the X cube model when it is placed on a three-dimensional torus; this is consistent with the formula for the ground state degeneracy of the X cube model.
Checkerboard Model
Another example of a type I fracton model is the checkerboard model.
This model also lives on a cubic lattice, but with one qubit on each vertex. First, one colours the cubic unit cells with the colours and in a checkerboard pattern, i.e. such that no two adjacent cubic cells are the same colour. Then the Hamiltonian is
This model is also exactly solvable with commuting terms. The topological ground state degeneracy on a torus is given by for lattice of size (as a rule the dimensions of the lattice must be even for periodic boundary conditions to make sense).
Like the X cube model, the checkerboard model features excitations in the form of fractons, lineons, and planeons.
Type II
The paradigmatic example of a type II fracton model is Haah's code. Due to the more complicated nature of Haah's code, the generalisations to other type II models are poorly understood compared to type I models.
Haah's code
Haah's code is defined on a cubic lattice with two qubits on each vertex. We can refer to these qubits using Pauli matrices and , each acting on a separate qubit. The Hamiltonian is
.
Here, for any unit cube whose eight vertices are labeled as , , , , , , , and , the operators and are defined as
This is also an exactly solvable model, as all terms of the Hamiltonian commute with each other.
The ground state degeneracy for an torus is given by
Here, gcd denotes the greatest common divisor of the three polynomials shown, and deg refers to the degree of this common divisor. The coefficients of the polynomials belong to the finite field , consisting of the four elements of characteristic 2 (i.e. ). is a cube root of 1 that is distinct from 1. The greatest common divisor can be defined through Euclid's algorithm. This degeneracy fluctuates wildly as a function of . If is a power of 2, then according to Lucas's theorem the three polynomials take the simple forms , indicating a ground state degeneracy of . More generally, if is the largest power of 2 that divides , then the ground state degeneracy is at least and at most .
Thus the Haah's code fracton model also in some sense exhibits the property that the logarithm of the ground state degeneracy tends to scale in direct proportion to the linear dimension of the system. This appears to be a general property of gapped fracton models. Just like in type I models and in topologically ordered systems, different ground states of Haah's code cannot be distinguished by local operators.
Haah's code also features immobile elementary excitations called fractons. A quantum state is said to have a fracton located at a cube if the eigenvalue of is for this quantum state (an excitation of the operator is also a fracton. Such a fracton is physically equivalent to an excitation of because there is a unitary map exchanging and , so it suffices to consider excitations of only for this discussion).
If is a ground state of the Hamiltonian, then for any vertex , the state features four fractons in a tetrahedral arrangement, occupying four of the eight cubes adjacent to vertex (the same is true for the state , although the exact shape of the tetrahedron is different).
In an attempt to isolate just one of these four fractons, one may try to apply additional spin flips at different nearby vertices to try annihilate the three other fractons. Doing so simply results in three new fractons appearing further away. Motivated by this process, one can then identify a set of vertices in space that together form some arbitrary iteration of the three-dimensional Sierpiński fractal. Then the state
features four fractons, one each at a cube adjacent to a corner vertex of the Sierpinski tetrahedron. Thus we see that an infinitely large fractal-shaped operator is required to generate an isolated fracton out of the ground state in the Haah's code model. The fractal-shaped operator in Haah's code plays an analogous role to the membrane operators in the X-cube model.
Unlike in type I models, there are no stable bound states of a finite number of fractons that are mobile. The only mobile bound states are those such as the completely mobile four-fracton states like that are unstable (i.e. can transform into the ground state by the action of a local operator).
Foliated fracton order
One formalism used to understand the universal properties of type I fracton phases is called foliated fracton order.
Foliated fracton order establishes an equivalence relation between two systems, system and system , with Hamiltonians and . If one can transform the ground state of to the ground state of by applying a finite depth local unitary map and arbitrarily adding and/or removing two-dimensional gapped systems, then and are said to belong to the same foliated fracton order.
It is important in this definition that the local unitary map remains at finite depth as the sizes of systems 1 and 2 are taken to the thermodynamic limit. However, the number of gapped systems being added or removed can be infinite. The fact that two-dimensional topologically ordered gapped systems can be freely added or removed in the transformation process is what distinguishes foliated fracton order from more conventional notions of phases.
To state the definition more precisely, suppose one can find two (possibly empty or infinite) collections of two-dimensional gapped phases (with arbitrary topological order), and , and a finite depth local unitary map , such that maps the ground state of to the ground state of . Then and belong to the same foliated fracton order.
More conventional notions of phase equivalence fail to give sensible results when directly applied to fracton models, because they are based on the notion that two models in the same phase should have the same topological ground state degeneracy. Since the ground state degeneracy of fracton models scales with system size, these conventional definitions would imply that simply changing the system size slightly would alter the entire phase. This would make it impossible to study the phases of fracton matter in the thermodynamic limit, where the system size goes to infinity. The concept of foliated fracton order resolves this issue by allowing degenerate subsystems (two-dimensional gapped topological phases) to be used as "free resources" that can be arbitrarily added or removed from the system to account for these differences. If a fracton model is in the same foliated fracton order as the same model at a larger system size, then the foliated fracton order formalism is suitable for the model.
Foliated fracton order is not a suitable formalism for type II fracton models.
Known foliated fracton orders of type I models
Many of the known type I fracton models are in fact in the same foliated fracton order as the X cube model, or in the same foliated fracton order as multiple copies of the X cube model. However, not all are. A notable known example of a distinct foliated fracton order is the twisted foliated fracton model.
Explicit local unitary maps have been constructed that demonstrate the equivalence of the X cube model with various other models, such as the Majorana checkerboard model and the semionic X cube model. The checkerboard model belongs to the same foliated fracton order as two copies of the X cube model.
Invariants of foliated fracton order
Just like how topological orders tend to have various invariant quantities that represent topological signatures, one can also attempt to identify invariants of foliated fracton orders.
Conventional topological orders often exhibit ground state degeneracy which is dependent only on the topology of the manifold on which the system is embedded. Fracton models do not have this property, because the ground state degeneracy also depends on system size. Furthermore, in foliated fracton models the ground state degeneracy can also depend on the intricacies of the foliation structure used to construct it. In other words, the same type of model on the same manifold with the same system size may have different ground state degeneracies depending on the underlying choice of foliation.
Quotient superselection sectors
By definition, the number of superselection sectors in a fracton model is infinite (i.e. scales with system size). For example, each individual fracton belongs to its own superselection sector, as there is no local operator that can transform it to any other fracton at a different position.
However, a loosening of the concept of superselection sector, known as the quotient superselection sector, effectively ignores two-dimensional particles (e.g. planeon bound states) which are presumed to come from two-dimensional foliating layers. Foliated fracton models then tend to have a finite list of quotient superselection sectors describing the types of fractional excitations present in the model. This is analogous to how topological orders tend to have a finite list of ordinary superselection sectors.
Entanglement Entropy
Generally for fracton models in the ground state, when considering the entanglement entropy of a subregion of space with large linear size , the leading order contribution to the entropy is proportional to , as expected for a gapped three dimensional system obeying an area law. However, the entanglement entropy also has subleading terms as a function of that reflect hidden nonlocal contributions. For example, the subleading correction represents a contribution from the constant topological entanglement entropy of each of the 2D topologically ordered layers present in the foliation structure of the system.
Since foliated fracton order is invariant even when disentangling such 2D gapped layers, an entanglement signature of a foliated fracton order must be able to ignore the entropy contributions both from local details and from 2D topologically ordered layers.
It is possible to use a mutual information calculation to extract a contribution to entanglement entropy that is unique to the foliated fracton order. Effectively, this is done by adding and subtracting entanglement entropies of different regions in such a way as to get rid of local contributions as well as contributions from 2D gapped layers.
Symmetric tensor gauge theory
The immobility of fractons in symmetric tensor gauge theory can be understood as a generalization of electric charge conservation resulting from a modified Gauss's law. Various formulations and constraints of symmetric tensor gauge theory tend to result in conservation laws that imply the existence of restricted-mobility particles.
U(1) scalar charge model
For example, in the U(1) scalar charge model, the fracton charge density () is related to a symmetric electric field tensor (, a theoretical generalization of the usual electric vector field) via , where the repeated spatial indices are implicitly summed over.
Both the fracton charge () and dipole moment () can be shown to be conserved:
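The displayed derivation is missing here; written out under the standard Gauss-law constraint of the scalar charge model (the charge density given by the double divergence of the symmetric electric field tensor), and assuming the fields vanish at spatial infinity, it reads:

```latex
Q \;=\; \int d^3x\,\rho
  \;=\; \int d^3x\,\partial_i\partial_j E_{ij}
  \;=\; \oint dS_i\,\partial_j E_{ij} \;=\; 0,
\qquad
P_k \;=\; \int d^3x\, x_k\,\rho
    \;=\; \int d^3x\, x_k\,\partial_i\partial_j E_{ij}
    \;=\; -\int d^3x\,\partial_j E_{kj}
    \;=\; -\oint dS_j\, E_{kj} \;=\; 0 .
```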
When integrating by parts, we have assumed that there is no electric field at spatial infinity.
Since the total fracton charge and dipole moment is zero under this assumption, this implies that the charge and dipole moment is conserved.
Because moving an isolated charge changes the total dipole moment, this implies that isolated charges are immobile in this theory.
However, two oppositely charged fractons, which forms a fracton dipole, can move freely since this does not change the dipole moment.
One approach to constructing an explicit action for scalar fractonic matter fields and their coupling to the symmetric tensor gauge theory is the following. Suppose the scalar fractonic matter field is . A global charge conservation symmetry would imply that the action is symmetric under the transformation for some spatially uniform real , as is the case in usual charged theories. A global dipole moment conservation symmetry would imply that the action is symmetric under the transformation for an arbitrary real spatially uniform vector .
The simplest kinetic terms (i.e. terms featuring the spatial derivative) that are symmetric under these transformations are quartic in .
Now when gauging this symmetry, the kinetic expression gets replaced with , where is a symmetric tensor that transforms under arbitrary gauge transformations as . This shows how a symmetric tensor field couples to scalar fractonic matter fields.
U(1) vector charge model
The U(1) scalar charge theory is not the only symmetric tensor gauge theory that gives rise to limited-mobility particles. Another example is the U(1) vector charge theory.
In this theory, the fractonic charge is a vector quantity . The symmetric tensor gauge field transforms under gauge transformations as . The Gauss law for this theory takes the form , which implies both a total charge conservation and a conservation of total angular charge moment . The latter conservation law implies that isolated charges are restricted to move parallel to their corresponding charge vectors. Thus these particles appear to be similar to the lineons in Type I fractons, except here they are in a gapless theory.
Applications
Fractons were originally studied as an analytically tractable realization of quantum glassiness where the immobility of isolated fractons results in a slow relaxation rate.
This immobility has also been shown to be capable of producing a partially self-correcting quantum memory, which could be useful for making an analog of a hard drive for a quantum computer.
Fractons have also been shown to appear in quantum linearized gravity models
and (via a duality) as disclination crystal defects.
However, aside from the duality to crystal defects, and although it has been shown to be possible in principle,
other experimental realizations of gapped fracton models have not yet been realized. On the other hand, there has been progress in studying the dynamics of dipole-conserving systems, both theoretically and experimentally, which exhibit the characteristic slow dynamics expected of systems with fractonic behavior.
Fracton models
It has been conjectured that many type-I models are examples of foliated fracton phases; however, it remains unclear whether non-Abelian fracton models can be understood within the foliated framework.
References
External links
(Seiberg's talk begins at 17:10 of 4:40:39 in the video.)
Quasiparticles | Fracton (subdimensional particle) | [
"Physics",
"Materials_science"
] | 4,624 | [
"Quasiparticles",
"Subatomic particles",
"Condensed matter physics",
"Matter"
] |
34,045,827 | https://en.wikipedia.org/wiki/Nicolas%20J.%20Cerf | Nicolas Jean Cerf (born 1965) is a Belgian physicist. He is professor of quantum mechanics and information theory at the Université Libre de Bruxelles and a member of the Royal Academies for Science and the Arts of Belgium. He received his Ph.D. at the Université Libre de Bruxelles in 1993, and was a researcher at the Université de Paris 11 and the California Institute of Technology. He is the director of the Center for Quantum Information and Computation at the Université Libre de Bruxelles.
Research
Together with Christoph Adami, he defined the quantum version of conditional and mutual entropies, which are basic notions of Shannon's information theory, and discovered that quantum information can be negative (a pair of entangled particles was coined a qubit-antiqubit pair). This has led to important results in quantum information sciences, for example quantum state merging. He is best known today for his work on quantum information with continuous variables. He found a Gaussian quantum cloning transformation (see no-cloning theorem) and invented a Gaussian quantum key distribution protocol, which is the continuous counterpart of the so-called BB84 protocol, making a link with Shannon's theory of Gaussian channels. This has led to the first experimental demonstration of continuous-variable quantum key distribution with optical coherent states and homodyne detection.
Honors
He received the Caltech President’s Fund Award in 1997, and the Marie Curie Excellence Award in 2006.
Works
References
External links
Living people
Vrije Universiteit Brussel alumni
1965 births
Academic staff of the Université libre de Bruxelles
Belgian physicists
Theoretical physicists | Nicolas J. Cerf | [
"Physics"
] | 333 | [
"Theoretical physics",
"Theoretical physicists"
] |
34,049,390 | https://en.wikipedia.org/wiki/Partial-wave%20analysis | Partial-wave analysis, in the context of quantum mechanics, refers to a technique for solving scattering problems by decomposing each wave into its constituent angular-momentum components and solving using boundary conditions.
Preliminary scattering theory
The following description follows the canonical way of introducing elementary scattering theory. A steady beam of particles scatters off a spherically symmetric potential , which is short-ranged, so that for large distances , the particles behave like free particles. In principle, any particle should be described by a wave packet, but we instead describe the scattering of a plane wave traveling along the z axis, since wave packets can be expanded in terms of plane waves, and this is mathematically simpler. Because the beam is switched on for times long compared to the time of interaction of the particles with the scattering potential, a steady state is assumed. This means that the stationary Schrödinger equation for the wave function representing the particle beam should be solved:
We make the following ansatz:
where is the incoming plane wave, and is a scattered part perturbing the original wave function.
It is the asymptotic form of that is of interest, because observations near the scattering center (e.g. an atomic nucleus) are mostly not feasible, and detection of particles takes place far away from the origin. At large distances, the particles should behave like free particles, and should therefore be a solution to the free Schrödinger equation. This suggests that it should have a similar form to a plane wave, omitting any physically meaningless parts. We therefore investigate the plane-wave expansion:
The spherical Bessel function asymptotically behaves like
This corresponds to an outgoing and an incoming spherical wave. For the scattered wave function, only outgoing parts are expected. We therefore expect at large distances and set the asymptotic form of the scattered wave to
where is the so-called scattering amplitude, which is in this case only dependent on the elevation angle and the energy.
In conclusion, this gives the following asymptotic expression for the entire wave function:
Partial-wave expansion
In case of a spherically symmetric potential , the scattering wave function may be expanded in spherical harmonics, which reduce to Legendre polynomials because of azimuthal symmetry (no dependence on ):
In the standard scattering problem, the incoming beam is assumed to take the form of a plane wave of wave number , which can be decomposed into partial waves using the plane-wave expansion in terms of spherical Bessel functions and Legendre polynomials:
Here we have assumed a spherical coordinate system in which the axis is aligned with the beam direction. The radial part of this wave function consists solely of the spherical Bessel function, which can be rewritten as a sum of two spherical Hankel functions:
This has physical significance: asymptotically (i.e. for large ) behaves as and is thus an outgoing wave, whereas asymptotically behaves as and is thus an incoming wave. The incoming wave is unaffected by the scattering, while the outgoing wave is modified by a factor known as the partial-wave S-matrix element :
where is the radial component of the actual wave function. The scattering phase shift is defined as half of the phase of :
If flux is not lost, then , and thus the phase shift is real. This is typically the case, unless the potential has an imaginary absorptive component, which is often used in phenomenological models to simulate loss due to other reaction channels.
Therefore, the full asymptotic wave function is
Subtracting yields the asymptotic outgoing wave function:
Making use of the asymptotic behavior of the spherical Hankel functions, one obtains
Since the scattering amplitude is defined from
it follows that
and thus the differential cross section is given by
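The displayed expressions are missing at this point; in the notation established above (phase shifts δℓ, wavenumber k, Legendre polynomials Pℓ), the standard partial-wave results are:

```latex
f(\theta) \;=\; \frac{1}{k}\sum_{\ell=0}^{\infty} (2\ell+1)\, e^{i\delta_\ell}\sin\delta_\ell\, P_\ell(\cos\theta),
\qquad
\frac{d\sigma}{d\Omega} \;=\; \bigl|f(\theta)\bigr|^{2},
\qquad
\sigma_{\mathrm{tot}} \;=\; \frac{4\pi}{k^{2}}\sum_{\ell=0}^{\infty} (2\ell+1)\sin^{2}\delta_\ell .
```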
This works for any short-ranged interaction. For long-ranged interactions (such as the Coulomb interaction), the summation over partial waves may not converge. The general approach for such problems consists in treating the Coulomb interaction separately from the short-ranged interaction, as the Coulomb problem can be solved exactly in terms of Coulomb functions, which take on the role of the Hankel functions in this problem.
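As a numerical illustration of the partial-wave sum (not taken from the article), the sketch below computes the total cross section for hard-sphere scattering, for which the phase shifts have the closed form tan δℓ = jℓ(ka)/nℓ(ka):

```python
# Minimal numerical sketch: partial-wave cross section for hard-sphere scattering.
# Phase shifts: tan(delta_l) = j_l(ka) / n_l(ka);  sigma = (4*pi/k^2) * sum_l (2l+1) sin^2(delta_l).
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def hard_sphere_cross_section(k, a, l_max=40):
    ls = np.arange(l_max + 1)
    delta = np.arctan(spherical_jn(ls, k * a) / spherical_yn(ls, k * a))
    return (4 * np.pi / k**2) * np.sum((2 * ls + 1) * np.sin(delta) ** 2)

a = 1.0
for ka in (0.1, 1.0, 5.0, 20.0):
    sigma = hard_sphere_cross_section(ka / a, a)
    print(f"ka = {ka:5.1f}   sigma / (pi a^2) = {sigma / (np.pi * a**2):6.3f}")
# Expected limits: sigma -> 4*pi*a^2 as ka -> 0, and sigma -> 2*pi*a^2 as ka -> infinity.
```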
References
See also
Levinson's theorem
External links
Partial Wave Analysis for Dummies
Partial Wave Analysis of Scattering
Quantum mechanics
Scattering theory | Partial-wave analysis | [
"Physics",
"Chemistry"
] | 886 | [
"Scattering theory",
"Theoretical physics",
"Scattering stubs",
"Quantum mechanics",
"Scattering",
"Quantum physics stubs"
] |
34,049,829 | https://en.wikipedia.org/wiki/Simplicial%20honeycomb | In geometry, the simplicial honeycomb (or -simplex honeycomb) is a dimensional infinite series of honeycombs, based on the affine Coxeter group symmetry. It is represented by a Coxeter-Dynkin diagram as a cyclic graph of nodes with one node ringed. It is composed of -simplex facets, along with all rectified -simplices. It can be thought of as an -dimensional hypercubic honeycomb that has been subdivided along all hyperplanes , then stretched along its main diagonal until the simplices on the ends of the hypercubes become regular. The vertex figure of an -simplex honeycomb is an expanded -simplex.
In 2 dimensions, the honeycomb represents the triangular tiling, with Coxeter graph filling the plane with alternately colored triangles. In 3 dimensions it represents the tetrahedral-octahedral honeycomb, with Coxeter graph filling space with alternately tetrahedral and octahedral cells. In 4 dimensions it is called the 5-cell honeycomb, with Coxeter graph , with 5-cell and rectified 5-cell facets. In 5 dimensions it is called the 5-simplex honeycomb, with Coxeter graph , filling space by 5-simplex, rectified 5-simplex, and birectified 5-simplex facets. In 6 dimensions it is called the 6-simplex honeycomb, with Coxeter graph , filling space by 6-simplex, rectified 6-simplex, and birectified 6-simplex facets.
By dimension
Projection by folding
The (2n-1)-simplex honeycombs and 2n-simplex honeycombs can be projected into the n-dimensional hypercubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement:
Kissing number
These honeycombs, seen as tangent n-spheres located at the center of each honeycomb vertex, have a fixed number of contacting spheres, corresponding to the number of vertices in the vertex figure. This represents the highest kissing number for 2 and 3 dimensions, but falls short in higher dimensions. In 2 dimensions, the triangular tiling defines a circle packing of 6 tangent spheres arranged in a regular hexagon, and in 3 dimensions there are 12 tangent spheres arranged in a cuboctahedral configuration. For 4 to 8 dimensions, the kissing numbers are 20, 30, 42, 56, and 72 spheres, while the greatest known solutions are 24, 40, 72, 126, and 240 spheres respectively.
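The contact numbers quoted above follow the pattern n(n+1), the number of vertices of the expanded n-simplex vertex figure; a quick check against the values listed in the text:

```python
# Quick check that the honeycomb contact numbers equal n*(n+1) (the vertex count
# of the expanded n-simplex vertex figure) and comparison with the best known
# kissing numbers quoted in the article for dimensions 2-8.
best_known = {2: 6, 3: 12, 4: 24, 5: 40, 6: 72, 7: 126, 8: 240}
for n in range(2, 9):
    contacts = n * (n + 1)          # tangent spheres in the n-simplex honeycomb
    print(f"dim {n}: honeycomb contacts = {contacts:3d}, best known kissing number = {best_known[n]}")
# 6 and 12 are optimal in 2 and 3 dimensions, while 20, 30, 42, 56, 72 fall short
# of 24, 40, 72, 126, 240 in dimensions 4-8, as stated above.
```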
See also
Hypercubic honeycomb
Alternated hypercubic honeycomb
Quarter hypercubic honeycomb
Truncated simplicial honeycomb
Omnitruncated simplicial honeycomb
References
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
Branko Grünbaum, Uniform tilings of 3-space. Geombinatorics 4(1994), 49 - 56.
Norman Johnson Uniform Polytopes, Manuscript (1991)
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition,
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
Uniform polytopes | Simplicial honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 831 | [
"Uniform polytopes",
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Symmetry"
] |
34,051,196 | https://en.wikipedia.org/wiki/Organic%20photorefractive%20materials | Organic photorefractive materials are materials that exhibit a temporary change in refractive index when exposed to light. The changing refractive index causes light to change speed throughout the material and produce light and dark regions in the crystal. The buildup can be controlled to produce holographic images for use in biomedical scans and optical computing. The ease with which the chemical composition can be changed in organic materials makes the photorefractive effect more controllable.
History
Although the physics behind the photorefractive effect had been known for quite a while, the effect was first observed in 1967 in LiNbO3. For more than two decades, the effect was observed and studied exclusively in inorganic materials, until 1990, when a nonlinear organic crystal 2-(cyclooctylamino)-5-nitropyridine (COANP) doped with 7,7,8,8-tetracyanoquinodimethane (TCNQ) exhibited the photorefractive effect. Even though inorganic material-based electronics dominate the current market, organic PR materials have been improved greatly since then and are currently considered to be an equal alternative to inorganic crystals.
Theory
There are two phenomena that, when combined, produce the photorefractive effect. These are photoconductivity, first observed in selenium by Willoughby Smith in 1873, and the Pockels effect, named after Friedrich Carl Alwin Pockels, who studied it in 1893.
Photoconductivity is the property of a material that describes the capability of incident light of adequate wavelength to produce electric charge carriers. The Fermi level of an intrinsic semiconductor is exactly in the middle of the band gap. The densities of free electrons n in the conduction band and free holes h in the valence band can be found through equations:
n = Nc exp(−(Ec − EF)/(kBT)) and h = Nv exp(−(EF − Ev)/(kBT))
where Nc and Nv are the densities of states at the bottom of the conduction band and the top of the valence band, respectively, Ec and Ev are the corresponding energies, EF is the Fermi level, kB is the Boltzmann constant and T is the absolute temperature. Addition of impurities into the semiconductor, or doping, produces excess holes or electrons, which, with sufficient density, may pin the Fermi level to the impurities' position.
Sufficiently energetic light can excite charge carriers so that they populate the initially empty localized levels. The density of free carriers in the conduction and/or the valence band will then increase. To account for these changes, steady-state Fermi levels are defined for electrons, EFn, and for holes, EFp. The densities n and h are then equal to n = Nc exp(−(Ec − EFn)/(kBT)) and h = Nv exp(−(EFp − Ev)/(kBT)).
The localized states between EFn and EFp are known as 'photoactive centers'. The charge carriers remain in these states for a long time until they recombine with an oppositely charged carrier. The states outside the EFn − EFp energy, however, relax their charge carriers to the nearest extended states.
The effect of incident light on the conductivity of the material depends on the energy of the light and on the material. Differently doped materials may have several different types of photoactive centers, each of which requires a different mathematical treatment. However, it is not very difficult to show the relationship between incident light and conductivity in a material with only one type of charge carrier and one type of photoactive center. The dark conductivity of such a material is given by
where σd is the conductivity, e is the elementary charge, Nd and N are the densities of total photoactive centers and ionized empty electron acceptor states, respectively, β is the thermal photoelectron generation coefficient, μ is the mobility constant and τ is the photoelectron lifetime. The equation for photoconductivity substitutes the parameters of the incident light for β and is
in which s is the effective cross-section for photoelectron generation, h is the Planck constant, ν is the frequency of incident light, and the term I = I0e−αz in which I0 is the incident irradiance, z is the coordinate along the crystal thickness and α is the light intensity loss coefficient.
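A rough numerical sketch of the depth dependence just described: the photoconductivity follows the dark-conductivity expression with the thermal generation coefficient β replaced by sI/(hν), where I = I0 e^(−αz). All numerical values below are placeholders chosen only to show the exponential fall-off; they are not taken from the article.

```python
# Illustrative sketch only: photoconductivity profile obtained by replacing the
# thermal generation coefficient beta with s*I/(h*nu), where I = I0*exp(-alpha*z).
# Every numerical value here is a placeholder, not data from the article.
import numpy as np

h = 6.626e-34            # Planck constant (J s)
nu = 5.5e14              # frequency of the incident light (Hz), ~545 nm
s = 1e-21                # effective photoelectron-generation cross-section (m^2), placeholder
alpha = 2e3              # intensity loss coefficient (1/m), placeholder
I0 = 1e3                 # incident irradiance (W/m^2), placeholder
prefactor = 1.0          # stands in for e*mu*tau and the density factors, placeholder

for z in np.linspace(0.0, 2e-3, 5):                     # depth into the crystal (m)
    generation = s * I0 * np.exp(-alpha * z) / (h * nu)  # replaces beta
    sigma_ph = prefactor * generation
    print(f"z = {z*1e3:4.1f} mm   relative photoconductivity = {sigma_ph:9.3e}")
```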
The electro-optic effect is a change of the optical properties of a given material in response to an electric field. There are many different occurrences, all of which are in the subgroup of the electro-optic effect, and Pockels effect is one of these occurrences. Essentially, the Pockels effect is the change of the material's refractive index induced by an applied electric field.
The refractive index of a material is the factor by which the phase velocity is decreased relative to the velocity of light in vacuum. At a microscale, such a decrease occurs because of a disturbance in the charges of each atom after being subjected to the electromagnetic field of the incident light. As the electrons move between energy levels, some energy is released as an electromagnetic wave at the same frequency but with a phase delay. The apparent light in a medium is a superposition of all of the waves released in this way, so the resulting light wave has a shorter wavelength but the same frequency, and its phase velocity is slowed down.
Whether the material will exhibit the Pockels effect depends on its symmetry. Both centrosymmetric and non-centrosymmetric media will exhibit an effect similar to the Pockels effect, the Kerr effect. The refractive index change will be proportional to the square of the electric field strength and will therefore be much weaker than the Pockels effect. It is only non-centrosymmetric materials that can exhibit the Pockels effect: for instance, lithium tantalate (trigonal crystal) or gallium arsenide (zinc-blende crystal), as well as poled polymers with specifically designed organic molecules.
It is possible to describe the Pockels effect mathematically by first introducing the index ellipsoid – a concept relating the orientation and relative magnitude of the material's refractive indices. The ellipsoid is defined by
in which εi is the relative permittivity along the x, y, or z axis, and R is the reduced displacement vector, defined in terms of the electric displacement vector Di and the field energy W. The electric field will induce a deformation in Ri according to:
in which E is the applied electric field, and rij is a coefficient that depends on the crystal symmetry and the orientation of the coordinate system with respect to the crystal axes. Some of these coefficients will usually be equal to zero.
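The displayed index-ellipsoid relations are missing above; the standard contracted-notation statement of the Pockels effect, consistent with the coefficients rij introduced in the text, is sketched here (the LiNbO3 value is quoted only as a typical order of magnitude):

```latex
% Linear electro-optic (Pockels) effect in contracted notation:
\Delta\!\left(\frac{1}{n^{2}}\right)_{i} \;=\; \sum_{j=1}^{3} r_{ij}\,E_j , \qquad i = 1,\dots,6,
\qquad\Longrightarrow\qquad
\Delta n \;\approx\; -\tfrac{1}{2}\, n^{3}\, r\, E .
% E.g. for LiNbO3, r_33 is of order 30 pm/V, so a field of 10^6 V/m
% gives |\Delta n| of order 10^{-4} for n \approx 2.2.
```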
Organic photorefractive materials
In general, photorefractive materials can be classified into the following categories, although the border between categories may not be sharp in each case:
Inorganic crystal and compound semiconductor
Multiple quantum well structures
Organic crystalline materials
Polymer dispersed Liquid crystalline materials (PDLC)
Organic amorphous materials
In this field of research, initial investigations were mainly carried out with inorganic semiconductors. A huge variety of inorganic crystals, such as BaTiO3, KNbO3, and LiNbO3, and inorganic compound semiconductors, such as GaAs, InP, and CdTe, have been reported in the literature.
The photorefractive (PR) effect in organic materials was first reported in 1991, and research on organic photorefractive materials has since drawn major attention compared to inorganic PR semiconductors. This is mainly due to cost effectiveness, relatively easy synthetic procedures, and properties that can be tuned through chemical or compositional changes.
Polymer or polymer composite materials have shown excellent photorefractive properties, with diffraction efficiencies of nearly 100%. Most recently, amorphous composites with a low glass transition temperature have emerged as highly efficient PR materials. These two classes of organic PR materials are also the most investigated.
These composite materials have four components relevant to the PR effect: a charge-conducting material, a sensitizer, a chromophore, and other dopant molecules. According to the literature, the design strategy for the charge conductor is mainly p-type based, and sensitization relies on n-type electron-accepting materials, which are usually of very low content in the blends and thus do not provide a complementary path for electron conduction.
In recent publications on organic PR materials, it is common to incorporate a polymeric material with charge transport units in its main or side chain. In this way, the polymer also serves as a host matrix to provide the resultant composite material with a sufficient viscosity for reasons of processing. Most guest-host composites demonstrated in the literature so far are based on hole conducting polymeric materials.
The vast majority of the polymers are carbazole-containing polymers such as poly(N-vinylcarbazole) (PVK) and polysiloxanes (PSX). PVK is a well-studied system for a wide variety of applications.
In polymers, charge is transported through the HOMO levels, and the mobility is influenced by the nature of the dopant mixed into the polymer; it also depends on the amount of dopant, which may exceed 50 percent by weight of the composite for guest–host materials.
The mobility decreases as the concentration of charge-transport moieties decreases and as the dopant's polarity and concentration increase.
Besides the mobility, the ionization potential of the polymer and of the respective dopant is also of significant importance. The relative position of the polymer HOMO with respect to the ionization potential of the other components of the blend determines the extent of extrinsic hole traps in the material.
TPD (tetraphenyldiaminophenyl)-based materials are known to exhibit higher charge-carrier mobilities and lower ionization potentials compared to carbazole-based (PVK) materials. The low ionization potentials of TPD-based materials greatly enhance the photoconductivity of the materials. This is partly due to the enhanced complexation of the hole conductor, which is an electron donor, with the sensitizing agents, which are electron acceptors.
A dramatic increase of the photogeneration efficiency from 0.3% to 100% was reported on lowering the ionization potential from 5.90 eV (PVK) to 5.39 eV (the TPD derivative PATPD). This is schematically explained in the diagram using the electronic states of PVK and PATPD.
Applications
As of 2011, no commercial products utilizing organic photorefractive materials exist. All applications described are speculative or have been performed only in research laboratories. The large DC fields required to produce holograms can lead to dielectric breakdown, which is not suitable outside the laboratory.
Reusable Holographic Displays
Many materials exist for recording static, permanent holograms, including photopolymers, silver halide films, photoresists, dichromated gelatin, and photorefractives. The materials vary in their maximum diffraction efficiency, required power consumption, and resolution. Photorefractives have a high diffraction efficiency, moderate-to-low power consumption, and high resolution.
Updatable holograms that do not require glasses are attractive for medical and military imaging. The material properties required to produce updatable holograms are 100% diffraction efficiency, fast writing time, long image persistence, fast erasing time, and large area. Inorganic materials capable of rapid updating exist but are difficult to grow larger than a cubic centimeter. Liquid crystal 3D displays exist but require complex computation to produce images, which limits their refresh rate and size.
Blanche et al. demonstrated in 2008 a 4 in × 4 in display that refreshed every few minutes and lasted several hours. Organic photorefractive materials are capable of kHz refresh rates, although this is limited by material sensitivity and laser power; the sensitivities demonstrated in 2010 required kW pulsed lasers.
Tunable color filter
When white light passes through an organic photorefractive diffraction grating, wavelengths associated with surface plasmon resonance are absorbed and the complementary wavelengths are reflected. The period of the diffraction grating may be adjusted to control the wavelengths of the reflected light. This could be used for filter channels, optical attenuators, and optical color filters.
Optical communications
Free-space optical communication (FSO) can be used for high-bandwidth data transmission using high-frequency lasers. Phase distortions created by the atmosphere can be corrected by a four-wave mixing process utilizing organic photorefractive holograms. The nature of FSO allows images to be transmitted at near-original quality in real time, and the correction also works for moving images.
Image and signal processing
Organic photorefractive materials are a nonlinear medium in which large amounts of information can be recorded and read. Holograms, due to the inherent parallel nature of optical recording, are able to quickly process large amounts of data. Holograms that can be quickly produced and read can be used to verify the authenticity of documents, similar to a watermark. Organic photorefractive correlators use matched-filter and joint-Fourier-transform configurations.
Logical functions (AND, OR, NOR, XOR, NOT) have been carried out using two-wave signal processing. High diffraction efficiency allowed a CCD detector to distinguish between light pixels (logical 1) and dark pixels (logical 0).
References
Holography
Nonlinear optical materials
Organic semiconductors
Semiconductor material types | Organic photorefractive materials | [
"Chemistry"
] | 2,735 | [
"Semiconductor material types",
"Semiconductor materials",
"Molecular electronics",
"Organic semiconductors"
] |
34,052,082 | https://en.wikipedia.org/wiki/RAPIEnet | RAPIEnet (Real-time Automation Protocols for Industrial Ethernet) was Korea's first Ethernet international standard for real-time data transmission. It is an Ethernet-based industrial networking protocol, developed in-house by LSIS, that offers real-time transmission and is registered as an international standard. (IEC 61158-3-21: 2010, IEC 61158-4-21: 2010, IEC 61158-5-21: 2010, IEC 61158-6-21: 2010, IEC 61784-2: 2010, IEC 62439-7)
Features
An embedded two-port Ethernet switch enables network expansion in a daisy chain without the need for an additional external switch, allowing easy installation and reduced wiring.
100 Mbit/s - 1 Gbit/s transmission speed, allowing electrical and optical media to be used together.
Supports transmission modes such as Unicast, Multicast, and Broadcast.
Supports "Store & Forward”and “Cut Through” switching.
RAPIEnet Technology
Protocol Stack Structure
Embedded dual port switch motion
An embedded hardware-based switch is adopted for real-time data transmission.
With the full-duplex communication support, each node has dual link routes in a ring topology.
Frame Format
RAPIEnet Ether type: 0x88FE
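As a purely illustrative sketch (the RAPIEnet frame fields themselves are not specified here), the registered EtherType can be placed in an Ethernet header using the Scapy library; the MAC addresses and payload below are arbitrary placeholders.

```python
# Hypothetical sketch only: wrap a dummy payload in an Ethernet header carrying
# the RAPIEnet EtherType 0x88FE.  Addresses and payload are placeholders; the
# actual RAPIEnet frame layout is not reproduced here.
from scapy.all import Ether, Raw

frame = Ether(dst="ff:ff:ff:ff:ff:ff",      # placeholder destination MAC
              src="00:11:22:33:44:55",      # placeholder source MAC
              type=0x88FE) / Raw(load=bytes(46))  # minimum-size dummy payload

print(frame.summary())
assert frame.type == 0x88FE
```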
Topology
Recovery System
With an embedded switch and full-duplex links, each node has dual link routes and communication fault tolerance, enabling fast recovery; a toy illustration follows the steps below.
- Recovery time < 10 ms
Transmits signal from Device 1 to Device 3.
A fault occurs between Device 2 and Device 3.
Notify the fault from Device 2 to Device 1.
Transmits signal back from Device 1 to Device 3.
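The following toy Python model (not the actual RRP state machine) only illustrates the failover idea in the four steps above: frames normally travel one way around the ring, and when a link fails the node upstream of the fault notifies the source, which resends the frame in the opposite direction.

```python
# Illustrative toy model of ring failover; this is NOT the RRP protocol itself.
RING = [1, 2, 3, 4]                       # devices on the ring, in order
failed_links = {(2, 3)}                   # the fault between Device 2 and Device 3

def next_hop(node, direction):
    i = RING.index(node)
    return RING[(i + direction) % len(RING)]

def send(src, dst, direction=+1):
    node, path = src, [src]
    while node != dst:
        nxt = next_hop(node, direction)
        if (node, nxt) in failed_links or (nxt, node) in failed_links:
            print(f"link {node}-{nxt} is down; Device {node} notifies Device {src}")
            return send(src, dst, direction=-direction)   # resend the other way round
        node = nxt
        path.append(node)
    print("delivered via", " -> ".join(map(str, path)))
    return path

send(1, 3)   # fails at the 2-3 link, then succeeds via 1 -> 4 -> 3
```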
Flexible Hybrid Structure
Fiber Optics/Copper Media
- Copper: Low installation costs, but relatively high noise susceptibility.
- Optics: High installation costs, but low noise and relatively long wiring runs.
Simple and efficient wiring is possible by combining the two media, each of which has its own advantages and disadvantages.
System Diagram Using RAPIEnet
Acquired Standards
International Standards
IEC 61158-3-21: 2010, Industrial communication networks - Fieldbus specifications - Part 3-21: Data-link layer service definition - Type 21 elements.
IEC 61158-4-21: 2010, Industrial communication networks - Fieldbus specifications - Part 4-21: Data-link layer protocol specification - Type 21 elements.
IEC 61158-5-21: 2010, Industrial communication networks - Fieldbus specifications - Part 5-21: Application layer service definition - Type 21 elements.
IEC 61158-6-21: 2010, Industrial communication networks - Fieldbus specifications - Part 6-21: Application layer protocol specification - Type 21 elements.
IEC 61784-2: 2010, Industrial communication networks - Profiles - Part 2: Additional fieldbus profiles for real-time networks based on ISO/IEC 8802-3.
IEC 62439-7, Industrial communication networks - High availability automation networks - Part 7: Ring-based Redundancy Protocol (RRP)
Others
Other international standards in process
IEC 61784-5-17, Industrial communication networks - Profiles - Part 5-17: Installation of fieldbuses - Installation profiles for CPF 17 (to be registered as an IEC international standard in 2012)
References
Industrial Ethernet | RAPIEnet | [
"Engineering"
] | 662 | [
"Industrial Ethernet"
] |
34,052,377 | https://en.wikipedia.org/wiki/Cyclotruncated%20simplicial%20honeycomb | In geometry, the cyclotruncated simplicial honeycomb (or cyclotruncated n-simplex honeycomb) is a dimensional infinite series of honeycombs, based on the symmetry of the affine Coxeter group. It is given a Schläfli symbol t0,1{3[n+1]}, and is represented by a Coxeter-Dynkin diagram as a cyclic graph of n+1 nodes with two adjacent nodes ringed. It is composed of n-simplex facets, along with all truncated n-simplices.
It is also called a Kagome lattice in two and three dimensions, although it is not a lattice.
In n-dimensions, each can be seen as a set of n+1 sets of parallel hyperplanes that divide space. Each hyperplane contains the same honeycomb of one dimension lower.
In 1-dimension, the honeycomb represents an apeirogon, with alternately colored line segments. In 2-dimensions, the honeycomb represents the trihexagonal tiling, with Coxeter graph . In 3-dimensions it represents the quarter cubic honeycomb, with Coxeter graph filling space with alternately tetrahedral and truncated tetrahedral cells. In 4-dimensions it is called a cyclotruncated 5-cell honeycomb, with Coxeter graph , with 5-cell, truncated 5-cell, and bitruncated 5-cell facets. In 5-dimensions it is called a cyclotruncated 5-simplex honeycomb, with Coxeter graph , filling space by 5-simplex, truncated 5-simplex, and bitruncated 5-simplex facets. In 6-dimensions it is called a cyclotruncated 6-simplex honeycomb, with Coxeter graph , filling space by 6-simplex, truncated 6-simplex, bitruncated 6-simplex, and tritruncated 6-simplex facets.
Projection by folding
The cyclotruncated (2n+1)- and 2n-simplex honeycombs and (2n-1)-simplex honeycombs can be projected into the n-dimensional hypercubic honeycomb by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement:
See also
Hypercubic honeycomb
Alternated hypercubic honeycomb
Quarter hypercubic honeycomb
Simplectic honeycomb
Omnitruncated simplicial honeycomb
References
George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs)
Branko Grünbaum, Uniform tilings of 3-space. Geombinatorics 4(1994), 49 - 56.
Norman Johnson Uniform Polytopes, Manuscript (1991)
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition,
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
Polytopes
Truncated tilings | Cyclotruncated simplicial honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 773 | [
"Honeycombs (geometry)",
"Truncated tilings",
"Tessellation",
"Crystallography",
"Symmetry"
] |
39,510,164 | https://en.wikipedia.org/wiki/Mutagenesis%20%28molecular%20biology%20technique%29 | In molecular biology, mutagenesis is an important laboratory technique whereby DNA mutations are deliberately engineered to produce libraries of mutant genes, proteins, strains of bacteria, or other genetically modified organisms. The various constituents of a gene, as well as its regulatory elements and its gene products, may be mutated so that the functioning of a genetic locus, process, or product can be examined in detail. The mutation may produce mutant proteins with interesting properties or enhanced or novel functions that may be of commercial use. Mutant strains may also be produced that have practical application or allow the molecular basis of a particular cell function to be investigated.
Many methods of mutagenesis exist today. Initially, the mutations artificially induced in the laboratory were entirely random, induced using mechanisms such as UV irradiation. Random mutagenesis cannot target specific regions or sequences of the genome; however, with the development of site-directed mutagenesis, more specific changes can be made. Since 2013, development of the CRISPR/Cas9 technology, based on a prokaryotic viral defense system, has allowed for the editing or mutagenesis of a genome in vivo. Site-directed mutagenesis has proved useful in situations where random mutagenesis is not. Other techniques of mutagenesis include combinatorial and insertional mutagenesis. Mutagenesis that is not random can be used to clone DNA, investigate the effects of mutagens, and engineer proteins. It also has medical applications such as helping immunocompromised patients, research and treatment of diseases including HIV and cancers, and curing of diseases such as beta thalassemia.
Random mutagenesis
Early approaches to mutagenesis relied on methods which produced entirely random mutations. In such methods, cells or organisms are exposed to mutagens such as UV radiation or mutagenic chemicals, and mutants with desired characteristics are then selected. Hermann Muller discovered in 1927 that X-rays can cause genetic mutations in fruit flies, and went on to use the mutants he created for his studies in genetics. For Escherichia coli, mutants may be selected first by exposure to UV radiation, then plated onto an agar medium. The colonies formed are then replica-plated, one in a rich medium, another in a minimal medium, and mutants that have specific nutritional requirements can then be identified by their inability to grow in the minimal medium. Similar procedures may be repeated with other types of cells and with different media for selection.
A number of methods for generating random mutations in specific proteins were later developed to screen for mutants with interesting or improved properties. These methods may involve the use of doped nucleotides in oligonucleotide synthesis, or conducting a PCR reaction in conditions that enhance misincorporation of nucleotides (error-prone PCR), for example by reducing the fidelity of replication or using nucleotide analogues. A variation of this method for integrating non-biased mutations in a gene is sequence saturation mutagenesis. PCR products which contain mutation(s) are then cloned into an expression vector and the mutant proteins produced can then be characterised.
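As a toy illustration of the concept (the code below is the editor's sketch, not a laboratory protocol or any published tool), a random-mutagenesis library can be simulated by substituting each base with a fixed probability, loosely mimicking the misincorporation of error-prone PCR:

```python
import random

# Editor's toy sketch: simulate a small random-mutagenesis library by
# substituting each base with a fixed per-base error rate.
BASES = "ACGT"

def random_mutagenesis(seq, error_rate=0.01, rng=random):
    """Return a copy of seq in which each base is substituted with probability error_rate."""
    out = []
    for base in seq:
        if rng.random() < error_rate:
            out.append(rng.choice([b for b in BASES if b != base]))  # misincorporation
        else:
            out.append(base)
    return "".join(out)

wild_type = "ATGGCTAGCAAGGAGGAA"
library = [random_mutagenesis(wild_type, error_rate=0.05) for _ in range(5)]
print(library)   # a handful of randomly mutated variants of the wild-type sequence
```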
In animal studies, alkylating agents such as N-ethyl-N-nitrosourea (ENU) have been used to generate mutant mice. Ethyl methanesulfonate (EMS) is also often used to generate animal, plant, and virus mutants.
Under European Union law (the 2001/18 directive), this kind of mutagenesis may be used to produce GMOs, but the products are exempted from regulation: no labeling, no evaluation.
Site-directed mutagenesis
Prior to the development of site-directed mutagenesis techniques, all mutations made were random, and scientists had to use selection for the desired phenotype to find the desired mutation. Random mutagenesis techniques have an advantage in terms of how many mutations can be produced; however, while random mutagenesis can produce changes in single nucleotides, it does not offer much control over which nucleotide is changed. Many researchers therefore seek to introduce selected changes to DNA in a precise, site-specific manner. In early attempts, analogs of nucleotides and other chemicals were used to generate localized point mutations. Such chemicals include aminopurine, which induces an AT to GC transition, while nitrosoguanidine, bisulfite, and N4-hydroxycytidine may induce a GC to AT transition. These techniques allow specific mutations to be engineered into a protein; however, they are not flexible with respect to the kinds of mutants generated, nor are they as specific as later methods of site-directed mutagenesis, and therefore retain some degree of randomness. With other technologies, such as cleavage of DNA at specific sites on the chromosome, addition of new nucleotides, and exchanging of base pairs, it is now possible to decide where mutations go.
Current techniques for site-specific mutation originate from the primer extension technique developed in 1978. Such techniques commonly involve using pre-fabricated mutagenic oligonucleotides in a primer extension reaction with DNA polymerase. This method allows for point mutations or for deletion or insertion of small stretches of DNA at specific sites. Advances in methodology have made such mutagenesis a relatively simple and efficient process.
Newer and more efficient methods of site directed mutagenesis are being constantly developed. For example, a technique called "Seamless ligation cloning extract" (or SLiCE for short) allows for the cloning of certain sequences of DNA within the genome, and more than one DNA fragment can be inserted into the genome at once.
Site-directed mutagenesis allows the effect of a specific mutation to be investigated. There are numerous uses; for example, it has been used to determine how susceptible certain species were to chemicals that are often used in labs. The experiment used site-directed mutagenesis to mimic the expected mutations of the specific chemical. The mutation resulted in a change in specific amino acids and the effects of this mutation were analyzed.
The site-directed approach may be done systematically in such techniques as alanine scanning mutagenesis, whereby residues are systematically mutated to alanine in order to identify residues important to the structure or function of a protein. Another comprehensive approach is site saturation mutagenesis where one codon or a set of codons may be substituted with all possible amino acids at the specific positions.
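To give a sense of scale (an editor's illustration; the NNK degenerate-codon scheme used below is a common laboratory convention, not something stated in this article), the number of codon combinations in a saturation library grows exponentially with the number of randomized positions:

```python
from itertools import product

# Editor's illustration: library size for site-saturation mutagenesis using
# "NNK" degenerate codons (N = A/C/G/T, K = G/T), 32 codons per randomized position.
def nnk_codons():
    n, k = "ACGT", "GT"
    return ["".join(c) for c in product(n, n, k)]

codons = nnk_codons()
print(len(codons))                    # 32 codons covering all 20 amino acids
for positions in (1, 2, 3, 5):
    print(positions, "randomized positions ->", len(codons) ** positions, "codon combinations")
```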
Combinatorial mutagenesis
Combinatorial mutagenesis is a site-directed protein engineering technique whereby multiple mutants of a protein can be simultaneously engineered based on analysis of the effects of additive individual mutations. It provides a useful method to assess the combinatorial effect of a large number of mutations on protein function. Large numbers of mutants may be screened for a particular characteristic by combinatorial analysis. In this technique, multiple positions or short sequences along a DNA strand may be exhaustively modified to obtain a comprehensive library of mutant proteins. The rate of incidence of beneficial variants can be improved by different methods for constructing mutagenesis libraries. One approach to this technique is to extract and replace a portion of the DNA sequence with a library of sequences containing all possible combinations at the desired mutation site. The content of the inserted segment can include sequences of structural significance, immunogenic property, or enzymatic function. A segment may also be inserted randomly into the gene in order to assess structural or functional significance of a particular part of a protein.
Insertional mutagenesis
The insertion of one or more base pairs, resulting in DNA mutations, is also known as insertional mutagenesis. Engineered mutations such as these can provide important information in cancer research, such as mechanistic insights into the development of the disease. Retroviruses and transposons are the chief instrumental tools in insertional mutagenesis. Retroviruses, such as the mouse mammary tumor virus and murine leukemia virus, can be used to identify genes involved in carcinogenesis and understand the biological pathways of specific cancers. Transposons, chromosomal segments that can undergo transposition, can be designed and applied to insertional mutagenesis as an instrument for cancer gene discovery. These chromosomal segments allow insertional mutagenesis to be applied to virtually any tissue of choice while also allowing for more comprehensive, unbiased depth in DNA sequencing.
Researchers have found four mechanisms of insertional mutagenesis that can be used on humans. The first mechanism is called enhancer insertion. Enhancers boost transcription of a particular gene by interacting with a promoter of that gene. This mechanism was first used to help severely immunocompromised patients in need of bone marrow; gammaretroviruses carrying enhancers were inserted into the patients. The second mechanism is referred to as promoter insertion. Promoters provide our cells with the specific sequences needed to begin transcription. Promoter insertion has helped researchers learn more about the HIV virus. The third mechanism is gene inactivation. An example of gene inactivation is using insertional mutagenesis to insert a retrovirus that disrupts the genome of the T cell in leukemia patients and equips the T cells with a chimeric antigen receptor (CAR), allowing them to target cancer cells. The final mechanism is referred to as mRNA 3' end substitution. Our genes occasionally undergo point mutations that cause beta-thalassemia, which interrupts red blood cell function. To fix this problem, the correct gene sequence for the red blood cells is introduced and a substitution is made.
Homologous recombination
Homologous recombination can be used to produce specific mutations in an organism. A vector containing a DNA sequence similar to the gene to be modified is introduced into the cell, and by a process of recombination it replaces the target gene in the chromosome. This method can be used to introduce a mutation or to knock out a gene, for example as used in the production of knockout mice.
CRISPR
Since 2013, the development of CRISPR-Cas9 technology has allowed for the efficient introduction of different types of mutations into the genome of a wide variety of organisms. The method does not require a transposon insertion site, leaves no marker, and its efficiency and simplicity has made it the preferred method for genome editing.
Gene synthesis
As the cost of DNA oligonucleotide synthesis falls, artificial synthesis of a complete gene is now a viable method for introducing mutations into a gene. This method allows for extensive mutation at multiple sites, including the complete redesign of the codon usage of a gene to optimise it for a particular organism.
See also
Genetic engineering
Oncomouse
Saturated mutagenesis
Directed evolution
References
External links
Genetically modified organisms
Molecular biology techniques | Mutagenesis (molecular biology technique) | [
"Chemistry",
"Engineering",
"Biology"
] | 2,203 | [
"Molecular biology techniques",
"Genetic engineering",
"Genetically modified organisms",
"Molecular biology"
] |
39,516,039 | https://en.wikipedia.org/wiki/StockTwits | Stocktwits is a social media platform designed for sharing ideas between investors, traders, and entrepreneurs. Founded in 2008 by Howard Lindzon and Soren McBeth, it introduced the use of the cashtag, a way to group discussions around a stock symbol preceded by a dollar sign. Stocktwits eventually became a standalone network where users share market sentiment, ideas, and strategies in real-time.
History
Founding and Early Years (2008-2013)
The idea for the company came from a 2008 blog post where Lindzon suggested that Twitter would be great for stocks and markets even though he had once passed on the opportunity to invest in the company. Stocktwits was launched in 2008 by Howard Lindzon and Soren McBeth as a Twitter application that organized financial discussions using cashtags, and the two formed the company in 2009. The company utilized Twitter's application programming interface (API) to integrate StockTwits as its own "highly graphical platform of market news, sentiment and stock-picking tools." StockTwits utilized "cashtags" with the stock ticker symbol, similar to the Twitter hashtag, as a way of indexing people's thoughts and ideas about a company and the stock.
Cashtags were eventually adopted by Twitter itself in 2012.
StockTwits received the first Shorty Award in the 2008 finance category.
StockTwits began offering a service in 2011 that allows companies to manage and monitor information within the service. Lindzon says it also allows a company to "monitor discussion about the company."
Time magazine listed StockTwits as one of its 2010 "50 best websites." StockTwits was named one of the "top 10 most innovative companies in finance" in 2012 by FastCompany.
Expansion and Product Innovation (2013-2023)
As of June 2013, StockTwits had raised $8.6 million in venture capital but had not yet made a profit. Fifty percent of the company's revenue came from financial data sold to clients including Bloomberg L.P. and Google. Lindzon believed a "cultural change" was needed for the large financial institutions to embrace this technology and that the change might take as long as five years.
In 2016, Lindzon decided to step down as CEO, although he remained actively involved in the company as its executive chairman.
In June 2016, StockTwits Inc. announced Ian Rosen, a co-founder of Even Financial and former general manager at MarketWatch, as chief executive officer.
In January 2017, StockTwits Inc. acquired Investing Discovery Platform SparkFin Inc.
In December 2017, Stocktwits announced its redesigned web and mobile sites with a new feature called Discover that provided users with important stock information, curated content and earnings calendars.
In August 2018, Stocktwits announced its launch of Rooms, a product that enables users to create new communities based on shared interests, specific stocks or trends affecting the markets.
In October 2018, Stocktwits announced the launch of its Premium Rooms product at Stocktoberfest West, the company's premier event held in Coronado, California. The new feature, part of Rooms, gives users access to exclusive content, concepts, and analysis from top investors on a subscription basis.
In December 2018, Brian Norgard, former head of product at dating app Tinder, joined the board of directors of StockTwits to help the company expand into new areas of financial technology and media.
In April 2019, Stocktwits announced it would launch a Stocktwits Trade App, a zero-commission brokerage service that allowed users to place unlimited free equity trades and featured fractional investments. The app was offered by Stocktwits subsidiary ST Invest LLC, a registered broker-dealer and member of the Financial Industry Regulatory Authority.
The platform also explored various monetization strategies, including premium subscriptions and B2B data offerings. During this time, the company announced its entry into crypto trading and expanded its presence in international markets, including India.
In December 2021, Stocktwits raised $30 million in its Series B funding round and announced its next big moves, including a boost in its crypto coverage and plans to launch in India by Q2 2022. The Series B funding set Stocktwits at a valuation of $210 million.
In February 2022, Stocktwits launched its crypto trading platform.
In July 2022, Stocktwits launched equities trading on its platforms for individual investors.
In July 2024, Stocktwits partnered with Quatr to offer financial content via an API, including live and recorded earnings calls from public companies, annual reports, quarterly filings, and more.
Howard Lindzon's Return as CEO (2024-Present)
In 2024, Howard Lindzon returned as CEO of Stocktwits. Under Lindzon's leadership, Stocktwits decided to exit the brokerage business by selling TradeApp to Public.com in 2024.
As a part of Howard's return, Stocktwits' leadership team included Shiv Sharma, President and COO.
Use
In 2013, StockTwits had over 230,000 active members; by 2020 that number had increased to 3 million, and by 2021 the homepage dashboard shows 5 million members. Lindzon encourages new traders to spend time on StockTwits learning the language. He says that "You don’t learn Spanish in one day, and you’re not going to learn how to invest in stocks in one day. Treat it like a language." Lindzon coauthored the book The StockTwits Edge: 40 Actionable Trade Set-Ups from Real Market Pros with Philip Pearlman and Ivaylo Ivanoff to assist new users with getting the most out of the service.
Platform features
StockTwits allows users to communicate to Ticker Streams in real time with the use of cashtags. Users are also able to communicate directly using the "@" symbol before a username, a feature seen on Twitter.
Content featured on StockTwits can also be shared to the StockTwits extended network which includes sites such as Yahoo Finance and CNN Money. Users also have the ability to share content to their personal Twitter, LinkedIn and Facebook accounts.
As of 2012 StockTwits has an open API allowing other sites such as TradingView, LikeAssets and HootSuite to integrate their users with StockTwits.
Cashtags
The cashtag, a concept introduced by Stocktwits, allows users to track conversations related to specific stocks by preceding the ticker symbol with a dollar sign. This feature allows users to follow discussions and insights related to their investments.
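As an illustration of the convention (the regular expression below is the editor's own approximation, not Stocktwits' actual implementation), cashtags can be pulled out of a message with a simple pattern match:

```python
import re

# Editor's sketch: extract cashtags -- ticker symbols prefixed with a dollar
# sign -- from a message.  The pattern is an approximation of the convention.
CASHTAG = re.compile(r"\$([A-Za-z]{1,6}(?:\.[A-Za-z]{1,2})?)\b")

def extract_cashtags(message):
    """Return the ticker symbols mentioned as cashtags in a message."""
    return [match.upper() for match in CASHTAG.findall(message)]

print(extract_cashtags("Watching $AAPL and $brk.b into earnings"))
# ['AAPL', 'BRK.B']
```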
TradeApp (2019-2024)
TradeApp provided users with commission-free trading directly through the platform. The app was designed to compete with other retail brokerage services and to integrate trading with the social aspects of Stocktwits. However, in 2024, the brokerage business was sold to Public.com.
Data and Analytics
Stocktwits has amassed user-generated content over the years, which it has leveraged to provide data and analytics. The platform's data on retail investor sentiment has been packaged into products for investors. The company has explored various monetization strategies, including the sale of its data through APIs and premium subscriptions for advanced analytics.
Stocktwits India
In 2021, Stocktwits expanded into India through a partnership with Times Bridge, the VC arm of the Times Group.
Stocktwits India has also formed partnerships with top financial publishers, including Economic Times, CNBC TV18, and Business Insider India.
Controversy
Twitter incorporated the cashtags into their platform in 2012 effectively "hijacking" the StockTwits idea. In response to this announcement, Lindzon blogged that "It's interesting that Twitter has hijacked our creation of $TICKER i.e. $AAPL". He went on to note that "You can hijack a plane but it does not mean you know how to fly it." Lindzon sold all of his Twitter stock in 2012 as a result of this controversy.
See also
X (Formerly Twitter)
Public.com
Robinhood Markets
WallStreetBets
References
Further reading
External links
StockTwits website
Android (operating system) software
Companies based in San Diego
Social media companies of the United States
IOS software
Real-time web
Text messaging
Twitter services and applications
Stock traders
Shorty Award winners | StockTwits | [
"Technology"
] | 1,761 | [
"Real-time web",
"Real-time computing"
] |
39,516,536 | https://en.wikipedia.org/wiki/Ton-force | A ton-force is one of various units of force defined as the weight of one ton due to standard gravity. The precise definition depends on the definition of ton used.
Tonne-force
The tonne-force (tf or tf) is equal to the weight of one tonne.
one tonne-force
= 1,000 kilograms-force (kgf)
= 9.80665 kilonewtons (kN)
≈ 2,204.6 pounds-force (lbf)
≈ 0.984 long tons-force
≈ 1.102 short tons-force
≈ 70,932 poundals (pdl)
Long ton-force
The long ton-force is equal to the weight of one long ton.
one long ton-force
= 2,240 pounds-force (lbf)
= 1,016.05 kilograms-force (kgf)
= 9.96402 kilonewtons (kN)
= 1.12 short tons-force
≈ 72,070 poundals (pdl)
Short ton-force
The short ton-force is equal to the weight of one short ton.
one short ton-force
= 2,000 pounds-force (lbf)
= 907.185 kilograms-force (kgf)
= 8.89644 kilonewtons (kN)
≈ 0.893 long tons-force
≈ 64,348 poundals (pdl)
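The conversions above follow directly from the definitions (weight equals mass times standard gravity); the short sketch below, written here purely for illustration, derives them from the base definitions of the tonne, the long ton, the short ton, and the avoirdupois pound:

```python
# Editor's illustration: derive the ton-force values from base definitions.
G0 = 9.80665          # standard gravity, m/s^2
LB = 0.45359237       # avoirdupois pound, kg

def ton_force_newtons(kind):
    """Weight of one ton of the given kind under standard gravity, in newtons."""
    mass_kg = {
        "tonne": 1000.0,        # metric ton
        "long": 2240 * LB,      # long (imperial) ton
        "short": 2000 * LB,     # short (US) ton
    }[kind]
    return mass_kg * G0

for kind in ("tonne", "long", "short"):
    n = ton_force_newtons(kind)
    print(f"1 {kind} ton-force = {n / 1000:.5f} kN"
          f" = {n / G0:.2f} kgf"
          f" = {n / (LB * G0):.1f} lbf")
```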
Notes
Units of force | Ton-force | [
"Physics",
"Mathematics"
] | 292 | [
"Force",
"Physical quantities",
"Quantity",
"Units of force",
"Units of measurement"
] |
39,516,879 | https://en.wikipedia.org/wiki/SiC%E2%80%93SiC%20matrix%20composite | SiC–SiC matrix composite is a particular type of ceramic matrix composite (CMC) which has been accumulating interest mainly as a high-temperature material for use in applications such as gas turbines, as an alternative to metallic alloys. CMCs are generally a system of materials that are made up of ceramic fibers or particles that lie in a ceramic matrix phase. In this case, a SiC/SiC composite is made by having a SiC (silicon carbide) matrix phase and a fiber phase incorporated together by different processing methods. Outstanding properties of SiC/SiC composites include high thermal, mechanical, and chemical stability while also providing a high strength-to-weight ratio.
Processing
SiC/SiC composites are mainly processed through three different methods. However, these processing methods are often subjected to variations in order to create the desired structure or property:
Chemical Vapor Infiltration (CVI) – The CVI method uses a gas phase SiC precursor to first grow SiC whiskers or nanowires in a preform, using conventional techniques developed with CVD. Following the growth of the fibers, the gas is again infiltrated into the preform to densify and create the matrix phase. Generally, the densification rate is slow during CVI, thus this process creates relatively high residual porosity (10–15%).
Polymer Impregnation and Pyrolysis (PIP) – The PIP method uses preceramic polymers (polymeric SiC precursors) to infiltrate a fibrous preform to create a SiC matrix. This method yields low stoichiometry as well as crystallinity due to the polymer-to-ceramic conversion process (ceramization). Additionally, shrinkage also occurs during this conversion process, resulting in 10–20% residual porosity. Multiple infiltrations can be performed to compensate for the shrinkage.
Melt Infiltration (MI) – The MI method has several variations, including using a dispersion of SiC particulate slurry to infiltrate into the fiberous preform, or using CVI to coat carbon on the SiC fibers, followed with infiltrating liquid Si to react with the carbon to form SiC. With these methods, chemical reactivity, melt viscosity, and wetting between the two components should be considered carefully. Some issues with infiltrating melted Si is that the free Si can lower the composite's resistance to oxidation and creep. However, this technique usually yields lower residual porosity (~5%) compared to the other two techniques due to higher densification rates.
Properties
Mechanical
Mechanical properties of CMCs, including SiC–SiC composites can vary depending on the properties of their various components, namely, the fiber, matrix, and interphases. For example, the size, composition, crystallinity, or alignment of the fibers will dictate the properties of the composite. The interplay between matrix microcracking and fiber-matrix debonding often dominates the failure mechanism of SiC/SiC composites. This results in SiC/SiC composites having non-brittle behavior despite being fully ceramic. Additionally, creep rates at high temperatures are also extremely low, but still dependent on its various constituents.
Thermal
SiC–SiC composites have a relatively high thermal conductivity and can operate at very high temperatures due to their inherently high creep and oxidation resistance. The residual porosity and stoichiometry of the material affect its thermal conductivity, with increasing porosity and the presence of a Si–O–C phase both lowering it. In general, a typical well-processed SiC–SiC composite can achieve a thermal conductivity of around 30 W/(m·K).
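As a rough illustration of the porosity effect (the model and the numerical values below are the editor's assumptions, not taken from this article), a Maxwell–Eucken-type correction for non-conducting pores shows how residual porosity lowers the effective conductivity:

```python
# Editor's illustration: Maxwell-Eucken estimate of effective thermal
# conductivity for a solid containing non-conducting pores.  The dense-matrix
# conductivity is an assumed placeholder value.
def maxwell_eucken(k_dense, porosity):
    """Effective conductivity of a solid with a given pore volume fraction."""
    return k_dense * 2.0 * (1.0 - porosity) / (2.0 + porosity)

K_DENSE = 40.0  # W/(m*K), assumed fully dense value
for p in (0.05, 0.10, 0.15, 0.20):   # typical residual porosities quoted for MI, CVI, PIP
    print(f"porosity {p:.0%}: ~{maxwell_eucken(K_DENSE, p):.1f} W/(m*K)")
```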
Chemical
Since SiC–SiC composites are generally sought for high-temperature applications, their oxidation resistance is of high importance. The oxidation mechanism of SiC–SiC composites varies depending on the temperature range, with operation in the higher temperature range (>1000 °C) being more beneficial than at lower temperatures (<1000 °C). In the former case, passive oxidation generates a protective oxide layer, whereas in the latter case, oxidation degrades the fiber-matrix interface. Nonetheless, oxidation remains an issue, and environmental barrier coatings are being investigated to address it.
Applications
Aerospace
Silicon carbide (SiC) ceramic matrix composites (CMCs) are engineering ceramic materials used to enhance aerospace components such as turbine engine parts and thermal protection systems. Because they exhibit high-temperature capability, low density, and resistance to oxidation and corrosion, SiC/SiC CMCs are widely used in aerospace applications. The use of SiC/SiC CMCs in rotating engine components reduces design complexity and engine structural weight, providing improved performance and reduced emissions. The implementation of SiC/SiC ceramic matrix components will improve aircraft and space vehicle performance and fuel efficiency, reducing additional harm to the environment in a cost-effective manner.
Additional applications of SiC/SiC CMCs include combustion and turbine section components of aero-propulsion and land-based gas turbine engines, thermal protection systems, thruster nozzles, reusable rocket nozzles, and turbopump components for space vehicles.
With the development and implementation of future SiC/SiC CMCs, the SiC fiber creep and rupture properties must be examined. Defects such as grain size, impurities, porosity, and surface roughness all contribute to SiC fiber creep and rupture. Due to relatively low toughness, low damage tolerance, and large variability in mechanical properties, CMCs have so far been limited to less critical components. Broader implementation of SiC/SiC CMCs in aerospace applications is hindered by an incomplete understanding of the ceramic material characteristics, degradation mechanisms, and interactions needed to predict component life and broaden component design.
References
Materials science
Ceramic materials
Composite materials
Turbines | SiC–SiC matrix composite | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,182 | [
"Applied and interdisciplinary physics",
"Turbomachinery",
"Composite materials",
"Materials science",
"Turbines",
"Materials",
"Ceramic materials",
"nan",
"Ceramic engineering",
"Matter"
] |
43,696,853 | https://en.wikipedia.org/wiki/Mechanical%20load | Mechanical load is the physical stress on a mechanical system or component leading to strain. Loads can be static or dynamic. Some loads are specified as part of the design criteria of a mechanical system. Depending on the usage, some mechanical loads can be measured by an appropriate test method in a laboratory or in the field.
Vehicle
It can be the external mechanical resistance against which a machine (such as a motor or engine) acts. The load can often be expressed as a curve of force versus speed.
For instance, a given car traveling on a road of a given slope presents a load which the engine must act against. Because air resistance increases with speed, the motor must put out more torque at a higher speed in order to maintain the speed. By shifting to a higher gear, one may be able to meet the requirement with a higher torque and a lower engine speed, whereas shifting to a lower gear has the opposite effect. Accelerating increases the load, whereas decelerating decreases the load.
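As a rough numerical illustration of such a load curve (the simple model and every parameter value below are the editor's assumptions, not from this article), the resistive force on a car can be sketched as the sum of grade, rolling-resistance, and aerodynamic terms:

```python
# Editor's sketch: road-load force versus speed for a car on a slope.
# All parameter values are illustrative assumptions.
def road_load_force(v, mass=1500.0, grade=0.05, c_rr=0.012,
                    rho=1.225, cd=0.30, area=2.2, g=9.81):
    """Total resistive force [N] at speed v [m/s] on a road of the given slope."""
    grade_force = mass * g * grade           # climbing the slope
    rolling = c_rr * mass * g                # rolling resistance
    drag = 0.5 * rho * cd * area * v ** 2    # aerodynamic drag, grows with v^2
    return grade_force + rolling + drag

for kmh in (30, 60, 90, 120):
    v = kmh / 3.6
    f = road_load_force(v)
    print(f"{kmh:4d} km/h: load {f:6.0f} N, power {f * v / 1000:5.1f} kW")
```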
Pump
Similarly, the load on a pump depends on the head against which the pump is pumping, and on the size of the pump.
Fan
Similar considerations apply to a fan. See Affinity laws.
See also
Structural load - mechanical load applied to structural elements (in civil and mechanical engineering)
Physical test
References
Mechanical engineering
Physical quantities
Structural analysis | Mechanical load | [
"Physics",
"Mathematics",
"Engineering"
] | 265 | [
"Structural engineering",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Structural analysis",
"Mechanical engineering",
"Aerospace engineering",
"Physical properties"
] |
43,699,175 | https://en.wikipedia.org/wiki/Light-front%20quantization%20applications | The light-front quantization of quantum field theories provides a useful alternative to ordinary equal-time quantization. In particular, it can lead to a relativistic description of bound systems in terms of quantum-mechanical wave functions. The quantization is based on the choice of light-front coordinates, where x⁺ = ct + z plays the role of time and the corresponding spatial coordinate is x⁻ = ct − z. Here, t is the ordinary time, z is a Cartesian coordinate, and c is the speed of light. The other two Cartesian coordinates, x and y, are untouched and often called transverse or perpendicular, denoted by symbols of the type x⊥ = (x, y). The choice of the frame of reference where the time and z-axis are defined can be left unspecified in an exactly soluble relativistic theory, but in practical calculations some choices may be more suitable than others. The basic formalism is discussed elsewhere.
There are many applications of this technique, some of which are discussed below. Essentially, the analysis of any relativistic quantum system can benefit from the use of light-front coordinates and the associated quantization of the theory that governs the system.
Nuclear reactions
The light-front technique was brought into nuclear physics by the pioneering papers of Frankfurt and Strikman. The emphasis was on using the correct kinematic variables (and the corresponding simplifications achieved) in making correct treatments of high-energy nuclear reactions. This sub-section focuses on only a few examples.
Calculations of deep inelastic scattering from nuclei require knowledge of nucleon distribution functions within the nucleus. These functions give the probability that a nucleon carries a given fraction of the plus component of the nuclear momentum.
Nuclear wave functions have been best determined using the equal-time framework. It therefore seems reasonable to see if one could re-calculate nuclear wave functions using the light front formalism. There are several basic nuclear structure problems which must be handled to establish that any given method works. It is necessary to compute the deuteron wave function, solve mean-field theory (basic nuclear shell model) for infinite nuclear matter and for finite-sized nuclei, and improve the mean-field theory by including the effects of nucleon-nucleon correlations. Much of nuclear physics is based on rotational invariance, but manifest rotational invariance is lost in the light front treatment. Thus recovering rotational invariance is very important for nuclear applications.
The simplest version of each problem has been handled. A light-front treatment of the deuteron was accomplished by Cooke and Miller, which stressed recovering rotational invariance. Mean-field theory for finite nuclei was handled by Blunden et al. Infinite nuclear matter was handled within mean-field theory and also with correlations included. Applications to deep inelastic scattering were made by Miller and Smith. The principal physics conclusion is that the EMC effect (nuclear modification of quark distribution functions) cannot be explained within the framework of conventional nuclear physics. Quark effects are needed. Most of these developments are discussed in a review by Miller.
There is a new appreciation that initial and final-state interaction physics, which is not intrinsic to the hadron or nuclear light-front wave functions, must be addressed in order to understand phenomena such as single-spin asymmetries, diffractive processes, and nuclear shadowing. This motivates extending LFQCD to the theory of reactions and to investigate high-energy collisions of hadrons. Standard scattering theory in Hamiltonian frameworks can provide valuable guidance for developing a LFQCD-based analysis of high-energy reactions.
Exclusive processes
One of the most important areas of application of the light-front formalism is exclusive hadronic processes. "Exclusive processes" are scattering reactions in which the kinematics of the initial-state and final-state particles are measured and thus completely specified; this is in contrast to "inclusive" reactions where one or more particles in the final state are not directly observed. Prime examples are the elastic and inelastic form factors measured in exclusive lepton-hadron scattering processes. In inelastic exclusive processes, the initial and final hadrons can be different. Other examples of exclusive reactions are Compton scattering, pion photoproduction, and elastic hadron scattering. "Hard exclusive processes" refer to reactions in which at least one hadron scatters to large angles with a significant change in its transverse momentum.
Exclusive processes provide a window into the bound-state structure of hadrons in QCD as well as the fundamental processes which control hadron dynamics at the amplitude level. The natural calculus for describing the bound-state structure of relativistic composite systems, needed for describing exclusive amplitudes, is the light-front Fock expansion which encodes the multi-quark, gluonic, and color correlations of a hadron in terms of frame-independent wave functions. In hard exclusive processes, in which hadrons receive a large momentum transfer, perturbative QCD leads to factorization theorems which separate the physics of hadronic bound-state structure from that of the relevant quark and gluonic hard-scattering reactions which underlie these reactions. At leading twist, the bound-state physics is encoded in terms of universal "distribution amplitudes", the fundamental theoretical quantities which describe the valence quark substructure of hadrons as well as nuclei. Nonperturbative methods, such as AdS/QCD, Bethe–Salpeter methods, discretized light-cone quantization, and transverse lattice methods, are now providing nonperturbative predictions for the pion distribution amplitude. A basic feature of the gauge theory formalism is color transparency", the absence of initial and final-state interactions of rapidly moving compact color-singlet states. Other applications of the exclusive factorization analysis include semileptonic meson decays and deeply virtual Compton scattering, as well as dynamical higher-twist effects in inclusive reactions. Exclusive processes place important constraints on the light-front wave functions of hadrons in terms of their quark and gluon degrees of freedom as well as the composition of nuclei in terms of their nucleon and mesonic degrees of freedom.
The form factors measured in the exclusive reaction encode the deviations from unity of the scattering amplitude due to the hadron's compositeness. Hadronic form factors fall monotonically with spacelike momentum transfer, since the amplitude for the hadron to remain intact continually decreases. One can also distinguish experimentally whether the spin orientation (helicity) of a hadron such as the spin-1/2 proton changes during the scattering or remains the same, as in the Pauli (spin-flip) and Dirac (spin-conserving) form factors.
The electromagnetic form factors of hadrons are given by matrix elements of the electromagnetic current such as where is the momentum four-vector of the exchanged virtual photon and is the eigenstate for hadron with four momentum . It is convenient to choose the light-front frame where with The elastic and inelastic form factors can then be expressed as integrated overlaps of the light-front Fock eigenstate wave functions and of the initial and final-state hadrons, respectively. The of the struck quark is unchanged, and . The unstruck (spectator) quarks have . The result of the convolution gives the form factor exactly for all momentum transfer when one sums over all Fock states of the hadron. The frame choice is chosen since it eliminates off-diagonal contributions where the number of initial and final state particles differ; it was originally discovered by Drell and Yan and by West. The rigorous formulation in terms of light-front wave functions is given by Brodsky and Drell.
Light-front wave functions are frame-independent, in contrast to ordinary instant form wave functions which need to be boosted from to , a difficult dynamical problem, as emphasized by Dirac. Worse, one must include contributions to the current matrix element where the external photon interacts with connected currents arising from vacuum fluctuations in order to obtain the correct frame-independent result. Such vacuum contributions do not arise in the light-front formalism, because all physical lines have positive ; the vacuum has only , and momentum is conserved.
At large momentum transfers, the elastic helicity-conserving form factors fall off as the nominal power F(Q²) ∼ (1/Q²)^(n−1), where n is the minimum number of constituents. For example, n = 3 for the three-quark Fock state of the proton, so its leading form factor falls as 1/Q⁴. This "quark counting rule" or "dimensional counting rule" holds for theories such as QCD in which the interactions in the Lagrangian are scale invariant (conformal). This result is a consequence of the fact that form factors at large momentum transfer are controlled by the short distance behavior of the hadron's wave function which in turn is controlled by the "twist" (dimension - spin) of the leading interpolating operator which can create the hadron at zero separation of the constituents. The rule can be generalized to give the power-law fall-off of inelastic form factors and form factors in which the hadron spin changes between the initial and final states. It can be derived nonperturbatively using gauge/string theory duality and with logarithmic corrections from perturbative QCD.
In the case of elastic scattering amplitudes, such as , the dominant physical mechanism at large momentum transfer is the exchange of the quark between the kaon and the proton . This amplitude can be written as a convolution of the four initial and final state light-front valence Fock-state wave functions. It is convenient to express the amplitude in terms of Mandelstam variables, where, for a reaction with momenta , the variables are . The resulting "quark interchange" amplitude has the leading form which agrees well with the angular dependence and power law fall-off of the amplitude with momentum transfer at fixed CM angle . The behavior of the amplitude, at fixed but large momentum transfer squared , shows that the intercept of Regge amplitudes at large negative . The nominal power-law fall-off of the resulting hard exclusive scattering cross section for at fixed CM angle is consistent with the dimensional counting rule for hard elastic scattering , where is the minimum number of constituents.
More generally, the amplitude for a hard exclusive reaction in QCD can be factorized at leading power as a product of the hard-scattering subprocess quark scattering amplitude , where the hadrons are each replaced with their constituent valence quarks or gluons, with their respective light-front momenta , convoluted with the "distribution amplitude" for each initial and final hadron. The hard-scattering amplitude can then be computed systematically in perturbative QCD from the fundamental quark and gluon interactions of QCD. This factorization procedure can be carried out systematically since the effective QCD running coupling becomes small at high momentum transfer, because of the asymptotic freedom property of QCD.
The physics of each hadron enters through its distribution amplitudes , which specifies the partitioning of the light-front momenta of the valence constituents . It is given in light-cone gauge as , the integral of the valence light-front wave function over the internal transverse momentum squared ; the upper limit is the characteristic transverse momentum in the exclusive reaction. The logarithmic evolution of the distribution amplitude in is given rigorously in perturbative QCD by the ERBL evolution equation. The results are also consistent with general principles such as the renormalization group. The asymptotic behavior of the distribution such as where is the decay constant measured in pion decay can also be determined from first principles. The nonperturbative form of the hadron light-front wave function and distribution amplitude can be determined from AdS/QCD using light-front holography. The deuteron distribution amplitude has five components corresponding to the five different color-singlet combinations of six color triplet quarks, only one of which is the standard nuclear physics product of two color singlets. It obeys a evolution equation leading to equal weighting of the five components of the deuteron's light-front wave function components at The new degrees of freedom are called "hidden color". Each hadron emitted from a hard exclusive reaction emerges with high momentum and small transverse size. A fundamental feature of gauge theory is that soft gluons decouple from the small color-dipole moment of the compact fast-moving color-singlet wave function configurations of the incident and final-state hadrons. The transversely compact color-singlet configurations can persist over a distance of order , the Ioffe coherence length. Thus, if we study hard quasi elastic processes in a nuclear target, the outgoing and ingoing hadrons will have minimal absorption - a novel phenomenon called "color transparency". This implies that quasi-elastic hadron-nucleon scattering at large momentum transfer can occur additively on all of the nucleons in a nucleus with minimal attenuation due to elastic or inelastic final state interactions in the nucleus, i.e. the nucleus becomes transparent. In contrast, in conventional Glauber scattering, one predicts nearly energy-independent initial and final-state attenuation. Color transparency has been verified in many hard-scattering exclusive experiments, particularly in the diffractive dijet experiment at Fermilab. This experiment also provides a measurement of the pion's light-front valence wave function from the observed and transverse momentum dependence of the produced dijets.
Light-front holography
One of the most interesting recent advances in hadron physics has been the application to QCD of a branch of string theory, Anti-de Sitter/Conformal Field Theory (AdS/CFT). Although QCD is not a conformally invariant field theory, one can use the mathematical representation of the conformal group in five-dimensional anti-de Sitter space to construct an analytic first approximation to the theory. The resulting model, called AdS/QCD, gives accurate predictions for hadron spectroscopy and a description of the quark structure of mesons and baryons which has scale invariance and dimensional counting at short distances, together with color confinement at large distances.
"Light-Front Holography" refers to the remarkable fact that dynamics in AdS space in five dimensions is dual to a semiclassical approximation to Hamiltonian theory in physical space-time quantized at fixed light-front time. Remarkably, there is an exact correspondence between the fifth-dimension coordinate of AdS space and a specific impact variable which measures the physical separation of the quark constituents within the hadron at fixed light-cone time and is conjugate to the invariant mass squared . This connection allows one to compute the analytic form of the frame-independent simplified light-front wave functions for mesons and baryons that encode hadron properties and allow for the computation of exclusive scattering amplitudes.
In the case of mesons, the valence Fock-state wave functions for zero quark mass satisfy a single-variable relativistic equation of motion in an invariant impact variable, which is conjugate to the invariant mass squared. The effective confining potential in this frame-independent "light-front Schrödinger equation" systematically incorporates the effects of higher quark and gluon Fock states. Remarkably, the potential takes the unique form of a harmonic oscillator potential if one requires that the chiral QCD action remain conformally invariant. The result is a nonperturbative relativistic light-front quantum-mechanical wave equation which incorporates color confinement and other essential spectroscopic and dynamical features of hadron physics.
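For orientation, a commonly quoted form of this light-front Schrödinger equation in the light-front holography literature is reproduced below; the notation (ζ for the invariant impact variable, L and J for orbital and total angular momentum, κ for the confinement scale) is supplied here by the editor and is not taken from this article's own formulas:

```latex
% Standard soft-wall form of the light-front holographic wave equation for mesons
% (notation supplied by the editor, not from the article text).
\[
  \left( -\frac{d^{2}}{d\zeta^{2}} + \frac{4L^{2}-1}{4\zeta^{2}} + U(\zeta) \right)
  \phi(\zeta) \;=\; M^{2}\,\phi(\zeta),
  \qquad
  U(\zeta) \;=\; \kappa^{4}\zeta^{2} + 2\kappa^{2}\,(J-1).
\]
```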
These recent developments concerning AdS/CFT duality provide new insights about light-front wave functions which may form first approximations to the full solutions that one seeks in LFQCD, and be considered as a step in building a physically motivated Fock-space basis set to diagonalize the LFQCD Hamiltonian, as in the basis light-front quantization (BLFQ) method.
Prediction of the cosmological constant
A major outstanding problem in theoretical physics is that most quantum field theories predict a huge value for the energy of the quantum vacuum. Such arguments are usually based on dimensional analysis and effective field theory. If the universe is described by an effective local quantum field theory down to the Planck scale, then we would expect a cosmological constant of the order of the fourth power of the Planck mass. As noted above, the measured cosmological constant is smaller than this by a factor of about 10¹²⁰. This discrepancy has been called "the worst theoretical prediction in the history of physics".
A possible solution is offered by light-front quantization, a rigorous alternative to the usual second quantization method. Vacuum fluctuations do not appear in the light-front vacuum state. This absence means that there is no contribution from QED, the weak interactions, or QCD to the cosmological constant, which is thus predicted to be zero in a flat space-time. The measured small non-zero value of the cosmological constant could originate, for example, from a slight curvature of the shape of the universe (which is not excluded within 0.4% (as of 2017)), since a curved space could modify the Higgs field zero-mode, thereby possibly producing a non-zero contribution to the cosmological constant.
Intense lasers
High-intensity laser facilities offer prospects for directly measuring previously unobserved processes in QED, such as vacuum birefringence, photon-photon scattering and, still some way in the future, Schwinger pair production. Furthermore, `light-shining-through-walls' experiments can probe the low energy frontier of particle physics and search for beyond-standard-model particles. These possibilities have led to great interest in the properties of quantum field theories, in particular QED, in background fields describing intense light sources, and some of the fundamental predictions of the theory have been experimentally verified.
Despite the basic theory behind `strong-field QED' having been developed over 40 years ago, there have remained until recent years several theoretical ambiguities that can in part be attributed to the use of the instant-form in a theory which, because of the laser background, naturally singles out light-like directions. Thus, light-front quantization is a natural approach to physics in intense laser fields. The use of the front-form in strong-field QED has provided answers to several long standing questions, such as the nature of the effective mass in a laser pulse, the pole structure of the background-dressed propagator, and the origins of classical radiation reaction within QED.
Combined with nonperturbative approaches such as `time dependent basis light-front quantization', which is specifically targeted at time-dependent problems in field theory, the front-form promises to provide a better understanding of QED in external fields. Such investigations will also provide groundwork for understanding QCD physics in strong magnetic fields at, for example, RHIC.
Nonperturbative quantum field theory
Quantum Chromodynamics (QCD), the theory of strong interactions, is a part of the Standard Model of elementary particles that also includes, besides QCD, the theory of electro-weak (EW) interactions. In view of the difference in strength of these interactions, one may treat the EW interactions as a perturbation in systems consisting of hadrons, the composite particles that respond to the strong interactions. Perturbation theory has its place in QCD also, but only at large values of the transferred energy or momentum where it exhibits the property of asymptotic freedom. The field of perturbative QCD is well developed and many phenomena have been described using it, such as factorization, parton distributions, single-spin asymmetries, and jets. However, at low values of the energy and momentum transfer, the strong interaction must be treated in a nonperturbative manner, since the interaction strength becomes large and the confinement of quarks and gluons, as the partonic components of the hadrons, cannot be ignored. There is a wealth of data in this strong interaction regime that is waiting for explanation in terms of calculations proceeding directly from the underlying theory. As one prominent application of an ab initio approach to QCD, many extensive experimental programs either measure directly, or depend upon the knowledge of, the probability distributions of the quark and gluon components of the hadrons.
Three approaches have produced considerable success in the strong-coupling area up to the present. First, hadronic models have been formulated and applied successfully. This success comes sometimes at the price of introducing parameters that need to be identified quantitatively. For example, the Relativistic String Hamiltonian depends on the current quark masses, the string tension, and a parameter corresponding to . The second method, lattice QCD, is an ab initio approach directly linked to the Lagrangian of QCD. Based on a Euclidean formulation, lattice QCD provides an estimate of the QCD path integral and opens access to low-energy hadronic properties such as masses. Although lattice QCD can estimate some observables directly, it does not provide the wave functions that are needed for the description of the structure and dynamics of hadrons. Third is the Dyson—Schwinger approach. It is also formulated in Euclidean space-time and employs models for vertex functions.
The light-front Hamiltonian approach is a fourth approach, which, in contrast to the lattice and Dyson–Schwinger approaches, is developed in Minkowski space and deals directly with wave functions - the main objects of quantum theory. Unlike the modeling approach, it is rooted in the fundamental Lagrangian of QCD.
Any field-theoretical Hamiltonian does not conserve the number of particles. Therefore, in a basis corresponding to a fixed number of particles, it is a non-diagonal matrix. Its eigenvector—the state vector of a physical system—is an infinite superposition (Fock decomposition) of states with different numbers of particles, schematically |p⟩ = Σ_n ∫ dτ_n ψ_n(k_1, …, k_n) |n; k_1, …, k_n⟩, where ψ_n is the n-body wave function (Fock component) and dτ_n is an integration measure. In light-front quantization, the Hamiltonian and the state vector here are defined on the light-front plane.
In many cases, though not always, one can expect that a finite number of degrees of freedom dominates, that is, the decomposition in the Fock components converges enough quickly. In these cases the decomposition can be truncated, so that the infinite sum can be approximately replaced by a finite one. Then, substituting the truncated state vector in the eigenvector equation
one obtains a finite system of integral equations for the Fock wave functions which can be solved numerically. Smallness of the coupling constant is not required. Therefore, the truncated solution is nonperturbative. This is the basis of a nonperturbative approach to the field theory which was developed and, for the present, applied to QED and to the Yukawa model.
The main difficulty in this approach is to ensure cancellation of infinities after renormalization. In the perturbative approach, for a renormalizable field theory, in any fixed order of the coupling constant, this cancellation is obtained as a by-product of the renormalization procedure. However, to ensure the cancellation, it is important to take into account the full set of graphs at a given order. Omitting some of these graphs destroys the cancellation and the infinities survive after renormalization. This is what happens after truncation of the Fock space; though the truncated solution can be decomposed into an infinite series in terms of the coupling constant, at any given order the series does not contain the full set of perturbative graphs. Therefore, the standard renormalization scheme does not eliminate infinities.
In the approach of Brodsky et al. the infinities remain uncanceled, though it is expected that as soon as the number of sectors kept after truncation increases, the domain of stability of the results relative to the cutoff also increases. The value on this plateau of stability is just an approximation to the exact solution which is taken as the physical value.
The sector-dependent approach is constructed so as to restore cancellation of infinities for any given truncation. The values of the counterterms are constructed from sector to sector according to unambiguously formulated rules. The numerical results for the anomalous magnetic moment of fermion in the truncation keeping three Fock sectors are stable relative to increase of the cutoff. However, the interpretation of the wave functions, due to negative norm of the Pauli-Villars states introduced for regularization, becomes problematic. When the number of sectors increases, the results in both schemes should tend to each other and approach to the exact nonperturbative solution.
The light-front coupled-cluster approach (see Light-front computational methods#Light-front coupled-cluster method), avoids making a Fock-space truncation. Applications of this approach are just beginning.
Structure of hadrons
Experiments that need a conceptually and mathematically precise theoretical description of hadrons at the amplitude level include investigations of: the structure of nucleons and mesons, heavy quark systems and exotics, hard processes involving quark and gluon distributions in hadrons, heavy ion collisions, and many more. For example, LFQCD will offer the opportunity for an ab initio understanding of the microscopic origins of the spin content of the proton and how the intrinsic and spatial angular momenta are distributed among the partonic components in terms of the wave functions. This is an outstanding unsolved problem as experiments to date have not yet found the largest components of the proton spin. The components previously thought to be the leading carriers, the quarks, have been found to carry a small amount of the total spin. Generalized parton distributions (GPDs) were introduced to quantify each component of the spin content and have been used to analyze the experimental measurements of deeply virtual Compton scattering (DVCS). As another example, LFQCD will predict the masses, quantum numbers and widths of yet-to-be observed exotics such as glueballs and hybrids.
QCD at high temperature and density
There are major programs at accelerator facilities such as GSI-SIS, CERN-LHC, and BNL-RHIC to investigate the properties of a new state of matter, the quark–gluon plasma, and other features of the QCD phase diagram. In the early universe, temperatures were high, while net baryon densities were low. In contrast, in compact stellar objects, temperatures are low, and the baryon density is high. QCD describes both extremes. However, reliable perturbative calculations can only be performed at asymptotically large temperatures and densities, where the running coupling constant of QCD is small due to asymptotic freedom, and lattice QCD provides information only at very low chemical potential (baryon density). Thus, many frontier questions remain to be answered. What is the nature of the phase transitions? How does the matter behave in the vicinity of the phase boundaries? What are the observable signatures of the transition in transient heavy-ion collisions? LFQCD opens a new avenue for addressing these issues.
In recent years a general formalism to directly compute the partition function in light-front quantization has been developed and numerical methods are under development for evaluating this partition function in LFQCD. Light-front quantization leads to new definitions of the partition function and temperature which can provide a frame-independent description of thermal and statistical systems. The goal is to establish a tool comparable in power to lattice QCD but extending the partition function to finite chemical potentials where experimental data are available.
See also
Light front quantization
Light-front computational methods
Quantum field theories
Quantum chromodynamics
Quantum electrodynamics
Light-front holography
References
External links
ILCAC, Inc., the International Light-Cone Advisory Committee.
Publications on light-front dynamics, maintained by A. Harindranath.
Quantum field theory | Light-front quantization applications | [
"Physics"
] | 5,847 | [
"Quantum field theory",
"Quantum mechanics"
] |
43,700,988 | https://en.wikipedia.org/wiki/Kaldo%20converter | A Kaldo converter (using the Kaldo process or Stora-Kaldo process) is a rotary vessel oxygen based metal refining method. Originally applied to the refining of iron into steel, with most installations in the 1960s, the process is now (as of 2014) used primarily to refine non-ferrous metals, typically copper. In that field, it is often named TBRC, or Top-Blown Rotary Converter.
History and description
Steel production
The name "Kaldo" is derived from Prof. Bo Kalling, and from the Domnarvets Jernverk (Stora Kopparbergs Bergslag subsidiary), both key in the development of the process. Research into the use of stirring to promote mixing, and therefore the rate of conversion, was under way from the 1940s, and investigations into the use of oxygen began c.1948. The feedstock at the Domnarvet works had a phosphorus content of 1.8-2.0%, and so the process was developed with dephosphorisation as one aim. The first production unit was installed in 1954 at Domnarvets Jernverk.
The converter was a top-blown oxygen converter, similar to the Linz-Donawitz (LD) type, using a cylindrical vessel; the vessel was tilted and rotated whilst conversion took place, with typical rotation speeds of around 30 revolutions per minute; the oxygen was injected via a lance, with slag-forming materials added separately.
Kaldo converters were relatively common in the 1960s in the United Kingdom, during the transition from predominantly open hearth process steelmaking to oxygen based steelmaking techniques. Converters were installed at Consett steelworks; Park Gate, Rotherham; Shelton works, Stoke-on-Trent; and Stanton Iron Works. Before the advent of the basic-LD process the Kaldo method was a preferred one in the UK for converting high-phosphorus iron. The first unit in the UK was at Park Gate Works, Rotherham.
In the USA, the process was installed at the Sharon Steel Corporation (c.1962). A plant in Japan was installed for Sanyo Special Steel Co. (Himeji) c.1965. A combined type of converter (LD-Kaldo), using elements of the Linz-Donawitz (LD) and Kaldo processes, was installed in 1965 in Belgium at Cockerill-Ougrée-Providence's plant in Marchienne-au-Pont as a multicompany research venture. In France, one Kaldo furnace was also installed (one 160 t unit, 1960) at Sollac's . It was followed in 1969 by two huge 240 t units, the biggest Kaldo converters ever built (twice as big as the previous largest ones: 1,000 t rotating at 30 r.p.m.), at Wendel-Sidelor's (later Usinor-Sacilor) (Lorraine, France); these two converters did not meet expectations and the third planned Kaldo unit was not installed; instead, two OLP (oxygène-lance-poudre) 240 t units were used.
Disadvantages of the process, compared to non-rotating oxygen furnaces (e.g. the LD type), were the higher capital cost, the greater difficulty of scaling up to higher outputs, and the additional complexity (i.e. rotating parts and the loading thereof). Advantages included the ability to use a high proportion of scrap metal, and good controllability of the final steel specification. At the Park Gate works the conversion time was 90 minutes, with up to 45% scrap loading and a capacity of 75 t in a 500 t total, diameter converter, with a rotation speed of 40 revolutions per minute.
Due to high maintenance costs the Kaldo converter did not gain widespread usage in the steel industry, with non-rotating converters being preferred.
Non-ferrous production
Nickel matte was converted by Inco (Canada) in a pilot Kaldo converter in 1959, and Metallo-Chimique (Belgium) developed secondary copper smelting using Kaldo type converters in the late 1960s. The Kaldo-type converter is commonly known as a Top-Blown Rotary Converter (TBRC) in non-ferrous metal smelting terminology.
By the 1970s, the Kaldo furnace was in common use for copper and nickel smelting. A Kaldo converter for the smelting of lead was constructed by Boliden AB in Sweden in 1976.
Kaldo secondary copper units were still in use worldwide at the beginning of the 21st century, but as of 2011 no new units had been commissioned for around 10 years, suggesting that the process had been superseded.
See also
AJAX furnace
References
Sources
External links
Steelmaking
Copper
Smelting
Swedish inventions | Kaldo converter | [
"Chemistry"
] | 989 | [
"Copper processes",
"Metallurgical processes",
"Steelmaking",
"Smelting"
] |
53,774,381 | https://en.wikipedia.org/wiki/Remnant%20cholesterol | Remnant cholesterol, also known as remnant lipoprotein and triglyceride-rich lipoprotein cholesterol is an atherogenic lipoprotein composed primarily of very low-density lipoprotein (VLDL) and intermediate-density lipoprotein (IDL) with chylomicron remnants. Elevated remnant cholesterol is associated with increased risk of atherosclerotic cardiovascular disease and stroke.
Definition
Remnant cholesterol is the cholesterol content of triglyceride-rich lipoproteins, which consist of very low-density lipoproteins and intermediate-density lipoproteins with chylomicron remnants. Remnant cholesterol is primarily chylomicron and VLDL, and each remnant particle contains about 40 times more cholesterol than LDL.
Remnant cholesterol corresponds to all cholesterol not found in high-density lipoprotein (HDL-C) and low-density lipoprotein (LDL-C). It is calculated as total cholesterol minus HDL-C and LDL-C.
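As a minimal illustration of that arithmetic (the function name and the example lipid-panel values below are hypothetical and for illustration only, not from a clinical source):

```python
def remnant_cholesterol(total_c, hdl_c, ldl_c):
    """Remnant cholesterol estimated as total cholesterol minus HDL-C minus LDL-C.

    All inputs and the result must share the same units (e.g. mg/dL or mmol/L).
    """
    return total_c - hdl_c - ldl_c

# Hypothetical lipid panel in mg/dL, chosen only to show the calculation.
print(remnant_cholesterol(total_c=200, hdl_c=50, ldl_c=120))  # prints 30
```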
Health effects
Elevated remnant cholesterol is associated with an increased risk of atherosclerotic cardiovascular disease, chronic inflammation, myocardial infarction and stroke. Remnant cholesterol is especially predictive of coronary artery disease in patients with normal total cholesterol.
High plasma remnant cholesterol is associated with increased plasma triglyceride levels. Hypertriglyceridemia is characteristic of high plasma remnant cholesterol, but persons with high plasma triglycerides without high remnant cholesterol rarely have coronary artery disease.
Remnant cholesterol has about twice the association with ischemic heart disease as LDL cholesterol. Although remnant cholesterol tends to be higher in people who are overweight (high body mass index), normal-weight persons with high remnant cholesterol tend to have a higher risk of myocardial infarction.
Lowering remnant cholesterol
Vupanorsen, an ANGPTL3 inhibitor has been shown to lower remnant cholesterol up to 59%.
See also
Chylomicron remnant
Lipid profile
References
External links
Study Suggests 'Remnant Cholesterol' As Stand-alone Risk for Heart Attack and Stroke
Cardiology
Lipid disorders
Lipoproteins | Remnant cholesterol | [
"Chemistry"
] | 489 | [
"Lipid biochemistry",
"Lipoproteins"
] |
53,776,011 | https://en.wikipedia.org/wiki/Kaisa%20Matom%C3%A4ki | Kaisa Sofia Matomäki (born April 30, 1985) is a Finnish mathematician specializing in number theory. Since April 2023, she is a full professor in the Department of Mathematics and Statistics, University of Turku, Turku, Finland. Her research includes results on the distribution of multiplicative functions over short intervals of numbers; for instance, she showed that the values of the Möbius function are evenly divided between +1 and −1 over short intervals. These results, in turn, were among the tools used by Terence Tao to prove the Erdős discrepancy problem.
Awards and honors
Kaisa Matomäki, along with Maksym Radziwill of McGill University, Canada, was awarded the SASTRA Ramanujan Prize for 2016. The Prize was established in 2005 and is awarded annually for outstanding contributions by young mathematicians to areas influenced by Srinivasa Ramanujan.
The citation for the 2016 SASTRA Ramanujan Prize is as follows: "Kaisa Matomäki and Maksym Radziwill are jointly awarded the 2016 SASTRA Ramanujan Prize for their deep and far reaching contributions to several important problems in diverse areas of number theory and especially for their spectacular collaboration which is revolutionizing the subject. The prize recognizes that in making significant improvements over the works of earlier stalwarts on long standing problems, they have introduced a number of innovative techniques. The prize especially recognizes their collaboration starting with their 2015 joint paper in Geometric and Functional Analysis which led to their 2016 paper in the Annals of Mathematics in which they obtain amazing results on multiplicative functions in short intervals, and in particular a stunning result on the parity of the Liouville lambda function on almost all short intervals - a paper that is expected to change the subject of multiplicative functions in a major way. The prize notes also the very recent joint paper of Matomäki, Radziwill and Tao announcing a significant advance in the case k = 3 towards a conjecture of Chowla on the values of the lambda function on sets of k consecutive integers. Finally the prize notes, that Matomäki and Radziwill, through their impressive array of deep results and the powerful new techniques they have introduced, will strongly influence the development of analytic number theory in the future."
With Radziwill, she is one of five winners of the 2019 New Horizons Prize for Early-Career Achievement in Mathematics, associated with the Breakthrough Prize in Mathematics. She is one of the 2020 winners of the EMS Prize. She was awarded the 2021 Ruth Lyttle Satter Prize by the American Mathematical Society "for her work (much of it joint with Maksym Radziwiłł) opening up the field of multiplicative functions in short intervals in a completely unexpected and very fruitful way, and in particular in their breakthrough paper, 'Multiplicative Functions in Short Intervals' (Annals of Mathematics 183 2016, 1015–1056)." For 2023 she received the Cole Prize in Number Theory of the AMS.
She was elected to the Academia Europaea in 2021.
Education and career
Kaisa Matomäki was born in Nakkila, Finland, on 30 April 1985. She attended high school in Valkeakoski, Finland, and won the First Prize in the national mathematics competition for Finnish high school students. She did her master's degree at the University of Turku and received the Ernst Lindelöf Award for the best master's thesis in mathematics in Finland in 2005. After completing her PhD at Royal Holloway, University of London, in 2009 under the direction of Professor Glyn Harman, she returned to Turku, where she worked as an associate professor and as an Academy Research Fellow. She was made a full professor in April 2023.
Personal life
Kaisa Matomäki is married to Pekka Matomäki, who is also a mathematician specializing in applied mathematics. They have three children. Currently they live in Lieto, close to Turku.
References
External links
Homepage of Kaisa Matomäki
Living people
1985 births
People from Nakkila
Number theorists
Recipients of the SASTRA Ramanujan Prize
Finnish mathematicians
Women mathematicians
International Mathematical Olympiad participants
Members of Academia Europaea | Kaisa Matomäki | [
"Mathematics"
] | 856 | [
"Number theorists",
"Number theory"
] |
50,935,039 | https://en.wikipedia.org/wiki/Width%20across%20flats | Width across flats is the distance between two parallel surfaces on the head of a screw, bolt or nut.
The width across flats will define the size of the spanner or wrench needed.
Spanner size
The width across flats indicates the nominal "size" of the spanner. The size is imprinted on the spanner as a millimeter value, or as an inch size with intermediate sizes given as fractions (older British and current US spanners).
The two systems are in general not compatible, which can result in rounding of nuts and bolts (i.e. using a spanner in place of a ). A few sizes are close enough to interchange for most purposes, such as
19 mm (close to ),
8 mm (close to ) and
4 mm (close to ).
In reality, a wrench with a width across the flats of exactly 15 mm would fit too tightly to use on a bolt with a width across the flats of 15 mm. The tolerances necessary to make the tools usable are listed in documents such as ASME/ANSI B18.2.2 for U.S. standards. For instance, a bolt for a 1-inch nominal diameter thread might have flats that are 1.5 inches apart. The wrench for this bolt should have flats that are between 1.508 and 1.520 inches apart to allow for a little extra space.
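A rough sketch of that clearance check is below; the clearance bounds simply restate the 1-inch example above (1.500 in flats, 1.508–1.520 in wrench opening), and the function itself is an illustrative assumption rather than a transcription of ASME/ANSI B18.2.2.

```python
def wrench_fits(bolt_across_flats_in, wrench_opening_in,
                min_clearance_in=0.008, max_clearance_in=0.020):
    """Return True if the wrench opening leaves a workable clearance over the
    bolt's width across flats. The default bounds mirror the 1-inch example
    above; real tolerances depend on size and come from standards such as
    ASME/ANSI B18.2.2."""
    clearance = wrench_opening_in - bolt_across_flats_in
    return min_clearance_in <= clearance <= max_clearance_in

print(wrench_fits(1.500, 1.512))  # True: inside the illustrative clearance band
print(wrench_fits(1.500, 1.500))  # False: zero clearance, too tight to use
print(wrench_fits(1.500, 1.530))  # False: too loose, risks rounding the flats
```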
Width across flats
The width across flats of the fastener (for example screws, nuts, clamps) is nominally the same as that on the tool. The table below shows dimensions of metric spanners for selected sizes of metric threads. Note that with ISO 272:1982 the widths across flats for M10, M12, M14 and M22 were changed from 17, 19, 22 and 32 mm respectively to the current standard.
Widths for bicycles
In addition to general industry standards, there are special thread standards, such as bicycle threads according to DIN 79012.
See also
ISO metric screw thread
References
Screws
Wrenches
Measurement | Width across flats | [
"Physics",
"Mathematics"
] | 407 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
50,937,470 | https://en.wikipedia.org/wiki/Eosinophilic%20myocarditis | Eosinophilic myocarditis is inflammation in the heart muscle that is caused by the infiltration and destructive activity of a type of white blood cell, the eosinophil. Typically, the disorder is associated with hypereosinophilia, i.e. an eosinophil blood cell count greater than 1,500 per microliter (normal 100 to 400 per microliter). It is distinguished from non-eosinophilic myocarditis, which is heart inflammation caused by other types of white blood cells, i.e. lymphocytes and monocytes, as well as the respective descendants of these cells, NK cells and macrophages. This distinction is important because the eosinophil-based disorder is due to a particular set of underlying diseases and its preferred treatments differ from those for non-eosinophilic myocarditis.
Eosinophilic myocarditis is often viewed as a disorder that has three progressive stages. The first stage of eosinophilic myocarditis involves acute inflammation and cardiac cell necrosis (i.e. areas of dead cells); it is dominated by symptoms characterized as the acute coronary syndrome such as angina, heart attack and/or congestive heart failure. The second stage is a thrombotic stage wherein the endocardium (i.e. interior wall) of the diseased heart forms blood clots which break off, travel in the circulation, and block blood flow through systemic or pulmonary arteries; this stage may dominate the initial presentation in some individuals. The third stage is a fibrotic stage wherein scarring replaces damaged heart muscle tissue to cause a clinical presentation dominated by a poorly contracting heart and cardiac valve disease. Perhaps less commonly, eosinophilic myocarditis, eosinophilic thrombotic myocarditis, and eosinophilic fibrotic myocarditis are viewed as three separate but sequentially linked disorders in a spectrum of disorders termed eosinophilic cardiac diseases. The focus here is on eosinophilic myocarditis as a distinct disorder separate from its thrombotic and fibrotic sequelae.
Eosinophilic myocarditis is a rare disorder. It is usually associated with, and considered secondary to, an underlying cause for the pathological behavior of the eosinophils, such as a toxic reaction to a drug (one of its more common causes in developed nations), the consequence of certain types of parasite and protozoan infections (a more common cause of the disorder in areas with these infestations), or the result of excessively high levels of activated blood eosinophils due to a wide range of other causes. The specific treatment (i.e. treatment other than measures to support the cardiovascular system) of eosinophilic myocarditis differs from the specific treatment of other forms of myocarditis in that it is focused on relieving the underlying reason for the excessively high numbers and hyperactivity of eosinophils as well as on inhibiting the pathological actions of these cells.
Signs and symptoms
Symptoms in eosinophilic myocarditis are highly variable. They tend to reflect the many underlying disorders causing eosinophil dysfunction as well as the widely differing progression rates of cardiac damage. Before cardiac symptoms are detected, some 66% of cases have symptoms of a common cold and 33% have symptoms of asthma, rhinitis, urticaria, or another allergic disorder. Cardiac manifestations of eosinophilic myocarditis range from none to life-threatening conditions such as cardiogenic shock or sudden death due to abnormal heart rhythms. More commonly the presenting cardiac symptoms of the disorder are the same as those seen in other forms of heart disease: chest pain, shortness of breath, fatigue, chest palpitations, light-headedness, and syncope. In its most extreme form, however, eosinophilic myocarditis can present as acute necrotizing eosinophilic myocarditis, i.e. with symptoms of chaotic and potentially lethal heart failure and heart arrhythmias. This rarest form of the disorder reflects a rapidly progressive and extensive eosinophilic infiltration of the heart that is accompanied by massive myocardial cell necrosis.
Hypereosinophilia (i.e. blood eosinophil counts at or above 1,500 per microliter) or, less commonly, eosinophilia (counts above 500 but below 1,500 per microliter) are found in the vast majority of cases of eosinophilic myocarditis and are valuable clues that point to this rather than other types of myocarditis or myocardial injuries. However, elevated blood eosinophil counts may not occur during the early phase of the disorder. Other, less specific laboratory findings implicate a cardiac disorder but not necessarily eosinophilic myocarditis. These include elevations in blood markers for systemic inflammation (e.g. C-reactive protein, erythrocyte sedimentation rate), elevations in blood markers for cardiac injury (e.g. creatine kinase, troponins), and abnormal electrocardiograms (mostly ST segment-T wave abnormalities).
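A minimal sketch of those count thresholds follows; the function name is hypothetical and the cut-offs simply restate the figures quoted in this article, so it is illustrative rather than a clinical tool.

```python
def classify_eosinophil_count(cells_per_microliter):
    """Classify a blood eosinophil count using the thresholds quoted above:
    eosinophilia for counts above 500 but below 1,500 per microliter,
    hypereosinophilia at or above 1,500 per microliter. Illustrative only."""
    if cells_per_microliter >= 1500:
        return "hypereosinophilia"
    if cells_per_microliter > 500:
        return "eosinophilia"
    return "within or near the normal range (about 100-400 per microliter)"

print(classify_eosinophil_count(300))   # within or near the normal range
print(classify_eosinophil_count(900))   # eosinophilia
print(classify_eosinophil_count(2500))  # hypereosinophilia
```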
Cause
There are many causes of eosinophilia that may underlie eosinophilic myocarditis. These causes are classified as primary (i.e. a defect intrinsic to the eosinophil cell line), secondary (induced by an underlying disorder that stimulates the proliferation and activation of eosinophils), or idiopathic (i.e. unknown cause). Non-idiopathic causes of the disorder are sub-classified into various forms of allergic, autoimmune, infectious, or malignant diseases and hypersensitivity reactions to drugs, vaccines, or transplanted hearts. While virtually any cause for the elevation and activation of blood eosinophils must be considered as a potential cause for eosinophilic myocarditis, the following list gives the principal types of eosinophilia known or thought to underlie the disorder.
Primary conditions that may lead to eosinophilic myocarditis are:
Clonal hypereosinophilia.
Chronic eosinophilic leukemia.
The idiopathic hypereosinophilic syndrome.
Secondary conditions that may lead to eosinophilic myocarditis are:
Infections agents:
Parasitic worms: various Ascaris, Strongyloides, Schistosoma, filaria, Trematoda, and Nematode species. Parasitic infestations often cause significant heart valve disease along with myocarditis and the disorder in this setting is sometimes termed Tropical endomyocardial fibrosis. While commonly considered to be due to the cited parasites, this particular form of eosinophilic myocarditis may more often develop in individuals with other disorders, e.g. malnutrition, dietary toxins, and genetic predisposition, in addition to or in place of roundworm infestation.
Infections by protozoa: various Toxoplasma gondii, Trypanosoma cruzi, trichinella spiralis, Entamoeba, and Echinococcus species.
Viruses: While some viral infections (e.g. HIV) have been considered causes of eosinophilic endocarditis, a study of 20 hearts taken during cardiac transplantation concluded that viral myocarditis lacks the characteristic eosinophil-induced damage.
Allergic and autoimmune diseases such as severe asthma, rhinitis, urticaria, chronic sinusitis, aspirin-exacerbated respiratory disease, allergic bronchopulmonary aspergillosis, chronic eosinophilic pneumonia, Kimura's disease, polyarteritis nodosa, eosinophilic granulomatosis with polyangiitis, and rejection of transplanted hearts.
Malignancies and/or premalignant hematologic conditions not due to a primary disorder in eosinophils such as Gleich's syndrome, lymphocyte-variant hypereosinophilia, Hodgkin disease, certain T-cell lymphomas, acute myeloid leukemia, the myelodysplastic syndromes, systemic mastocytosis, chronic myeloid leukemia, polycythemia vera, essential thrombocythemia, myelofibrosis, chronic myelomonocytic leukemia, and T-lymphoblastic leukemia/lymphoma-associated or myelodysplastic–myeloproliferative syndrome-associated eosinophilias; IgG4-related disease and angiolymphoid hyperplasia with eosinophilia; as well as non-hematologic cancers such as solid tumors of the lung, gastrointestinal tract, and genitourinary tract.
Hypersensitivity reactions to agents include
Antibiotics/anti-viral agents: various penicillins (e.g. penicillin, ampicillin), cephalosporins (e.g. cephalosporin), tetracyclines (e.g. tetracycline), sulfonamides (e.g. sulfadiazine, sulfafurazole), sulfonylureas, antitubercular drugs (e.g. isoniazid, 4-aminosalicylic acid), linezolid, amphotericin B, chloramphenicol, streptomycin, dapsone, nitrofurantoin, metronidazole, nevirapine, efavirenz, and abacavir.
Anticonvulsants/antipsychotics/antidepressants: phenindione, phenytoin, phenobarbital, lamotrigine, clozapine, valproic acid, carbamazepine, desipramine, fluoxetine, amitriptyline, olanzapine.
Anti-inflammatory agents: ibuprofen, indomethacin, phenylbutazone, oxyphenbutazone, acetazolamide, piroxicam, diclofenac.
Diuretics: hydrochlorothiazide, spironolactone, chlortalidone.
ACE inhibitors: captopril, enalapril.
Other drugs: digoxin, ranitidine, lenalidomide, methyldopa, interleukin 2, dobutamine, acetazolamide.
Contaminants: Unidentified contaminants in rapeseed oil cause the toxic oil syndrome and in commercial batches of the amino acid, L-tryptophan, cause the eosinophilia–myalgia syndrome.
Vaccinations: Tetanus toxoid, smallpox, and diphtheria/pertussis/tetanus vaccinations.
DRESS syndrome
The DRESS syndrome (Drug Reaction with Eosinophilia and Systemic Symptoms) is a severe immunological drug reaction. It differs from other drug reactions in that it: a) is caused by a particular set of drugs; b) typically occurs after a delay of 2 to 8 weeks following intake of an offending drug; c) presents with a specific set of signs and symptoms (i.e. modest or extreme elevations in blood eosinophil and atypical lymphocyte counts; acute onset of a skin rash; lymphadenopathy; fever; neuralgia; and involvement of at least one internal organ such as the liver, lung, or heart; d) develops in individuals with particular genetic predispositions; and e) involves reactivation of latent viruses, most commonly human herpesvirus 6 or more rarely human herpes virus 5 (i.e. human cytomegalovirus), human herpesvirus 7, and human herpesvirus 4 (i.e. Epstein–Barr virus). These viruses usually become dormant after infecting humans but under special circumstances, such as drug intake, are reactivated and may contribute to serious diseases such as the DRESS syndrome.
Pathophysiology
Eosinophils normally function to neutralize invading microbes, primarily parasites but also certain types of fungi and viruses. In conducting these functions, eosinophils normally occupy the gastrointestinal tract, respiratory tract, and skin where they produce and release on demand a range of toxic reactive oxygen species (e.g. hypobromite, hypobromous acid, superoxide, and peroxide) and also release on demand a preformed armamentarium of chemical signals including cytokines, chemokines, growth factors, lipid mediators (e.g. leukotrienes, prostaglandins, platelet activating factor, 5-oxo-eicosatetraenoic acid), and toxic proteins (e.g. metalloproteinases, major basic protein, eosinophil cationic protein, eosinophil peroxidase, and eosinophil-derived neurotoxin). These agents serve to orchestrate robust inflammatory responses that destroy invading microorganisms. Eosinophils also participate in transplant rejection, Graft-versus-host disease, the destruction or walling off of foreign objects, and the killing of cancer cells. In conducting these functions, eosinophils enter tissues that they do not normally occupy.
When overproduced and over-activated, such as in cases of eosinophilic myocarditis, eosinophils behave as though they were attacking a foreign or malignant tissue: they enter a seemingly normal organ such as the heart, misdirect their reactive oxygen species and armamentarium of preformed molecules toward seemingly normal tissue such as heart muscle, and thereby produce serious damage such as heart failure. Animal model studies suggest reasons why eosinophils are directed to and injure the heart muscle. Mice made hypereosinophilic by the forced overexpression of an interleukin-5 transgene (interleukin 5 stimulates eosinophil proliferation, activation, and migration) develop eosinophilic myocarditis. A similar eosinophilic endocarditis occurs in mice immunized with the cardiac muscle protein, mouse myosin. In the latter model, endocarditis is reduced by inhibiting the cytokine interleukin-4 or eosinophils and is exacerbated by concurrently blocking two cytokines, interferon gamma and interleukin-17A. Finally, certain eosinophil-attracting agents, viz., eotaxins, are elevated in the cardiac tissue of myosin-immunized mice that are concurrently depleted of interferon-gamma and interleukin-17A. Eotaxins are also elevated in the cardiac muscle biopsy specimens of individuals with eosinophilic myocarditis compared to their levels in non-eosinophilic myocarditis. These findings suggest that eosinophilic myocarditis is caused by the abnormal proliferation and activation of eosinophils and that their directional migration into the heart is evoked by a set of cytokines and chemoattractants in mice and possibly humans.
Diagnosis
In eosinophilic myocarditis, echocardiography typically gives non-specific and only occasional findings of endocardium thickening, left ventricular hypertrophy, left ventricle dilation, and involvement of the mitral and/or tricuspid valves. However, in acute necrotizing eosinophilic myocarditis, echocardiography usually gives diagnostically helpful evidence of a non-enlarged heart with a thickened and poorly contracting left ventricle. Gadolinium-based cardiac magnetic resonance imaging is the most useful non-invasive procedure for diagnosing eosinophilic myocarditis. It supports this diagnosis if it shows at least two of the following abnormalities: a) an increased signal in T2-weighted images; b) an increased global myocardial early enhancement ratio between myocardial and skeletal muscle in enhanced T1 images and c) one or more focal enhancements distributed in a non-vascular pattern in late enhanced T1-weighted images. Additionally, and unlike in other forms of myocarditis, eosinophilic myocarditis may also show enhanced gadolinium uptake in the sub-endocardium. However, the only definitive test for eosinophilic myocarditis is cardiac muscle biopsy showing the presence of eosinophilic infiltration. Since the disorder may be patchy, multiple tissue samples taken during the procedure improve the chances of uncovering the pathology but in any case, negative results do not exclude the diagnosis.
Eosinophilic coronary periarteritis
Eosinophilic coronary periarteritis is an extremely rare heart disorder caused by extensive eosinophilic infiltration of the adventitia and periadventitia, i.e. the soft tissues, surrounding the coronary arteries. The intima, tunica media, and tunica intima layers of these arteries remain intact and are generally unaffected. Thus, this disorder is characterized by episodes of angina, particularly Prinzmetal's angina, and chaotic heart arrhythmias which may lead to sudden death. The disorder is considered distinct from eosinophilic myocarditis as well as other forms of inflammatory arterial disorders in that it is limited to the coronary artery system.
Treatment
Due to its rarity, no comprehensive treatment studies on eosinophilic myocarditis have been conducted. Small studies and case reports have directed efforts towards: a) supporting cardiac function by relieving heart failure and suppressing life-threatening abnormal heart rhythms; b) suppressing eosinophil-based cardiac inflammation; and c) treating the underlying disorder. In all cases of symptomatic eosinophilic myocarditis that lack specific treatment regimens for the underlying disorder, available studies recommend treating the inflammatory component of this disorder with non-specific immunosuppressive drugs, principally a high-dosage corticosteroid regimen that is slowly tapered to a low-dosage maintenance regimen. It is recommended that affected individuals who fail this regimen or present with cardiogenic shock be treated with other non-specific immunosuppressive drugs, viz. azathioprine or cyclophosphamide, as adjuncts to, or replacements for, corticosteroids. However, individuals with an underlying therapeutically accessible disease should be treated for this disease; in seriously symptomatic cases, such individuals may be treated concurrently with a corticosteroid regimen. Examples of diseases underlying eosinophilic myocarditis that are recommended for treatments directed at the underlying disease include
Infectious agents: specific drug treatment of helminth and protozoan infections typically take precedence over non-specific immunosuppressive therapy, which, if used without specific treatment, could worsen the infection. In moderate-to-severe cases, non-specific immunosuppression is used in combination with specific drug treatment.
Toxic reactions to ingested agents: discontinuance of the ingested agent plus corticosteroids or other non-specific immunosuppressive regimens.
Clonal eosinophilia caused by mutations in genes that are highly susceptible to tyrosine kinase inhibitors such as PDGFRA, PDGFRB, or possibly FGFR1: first-generation tyrosine kinase inhibitors (e.g. imatinib) are recommended for the former two mutations; a later generation tyrosine kinase inhibitors, ponatinib, alone or combined with bone marrow transplantation, may be useful for treating the FGFR1 mutations.
Clonal hypereosinophilia due to mutations in other genes or primary malignancies: specific treatment regimens used for these pre-malignant or malignant diseases may be more useful and necessary than non-specific immunosuppression.
Allergic and autoimmune diseases: non-specific treatment regimens used for these diseases may be useful in place of a simple corticosteroid regimen. For example, eosinophilic granulomatosis with polyangiitis can be successfully treated with mepolizumab.
Idiopathic hypereosinophilic syndrome and lymphocyte-variant hypereosinophilia: corticosteroids; for individuals with these hypereosinophilias that are refractory to, or break through, corticosteroid therapy, and individuals requiring corticosteroid-sparing therapy, recommended alternative drug therapies include hydroxyurea, pegylated interferon-α, the tyrosine kinase inhibitor imatinib, and mepolizumab.
Prognosis
The prognosis of eosinophilic myocarditis is anywhere from rapidly fatal to extremely chronic or non-fatal. Progression at a moderate rate over many months to years is the most common prognosis. In addition to the speed of inflammation-based heart muscle injury, the prognosis of eosinophilic myocarditis may be dominated by that of its underlying cause. For example, an underlying malignant cause for eosinophilia may be survival-limiting.
History
In 1936, the famed Swiss physician Wilhelm Löffler first described heart damage that appeared due to massive cardiac eosinophil infiltrations and was associated with excessively high levels of blood eosinophils. Subsequent cases of this disorder, termed Loeffler endocarditis, were found to occur in about 20% of individuals diagnosed with the hypereosinophilic syndrome. Loeffler's and the latter cases had pathological features of eosinophil infiltrations not only into the heart's myocardium but also its endocardium (i.e. the lining of the heart chambers). Although eosinophilic myocarditis due to other underlying causes may show little or no eosinophil infiltration into the endocardium, Loeffler endocarditis is considered an important form of the disorder.
References
Immune system disorders
Hypersensitivity
Allergology
Parasitism
Drug-induced diseases
Monocyte and granulocyte disorders
Heart diseases | Eosinophilic myocarditis | [
"Chemistry",
"Biology"
] | 4,815 | [
"Drug-induced diseases",
"Parasitism",
"Symbiosis",
"Drug safety"
] |
50,937,678 | https://en.wikipedia.org/wiki/Glossary%20of%20electrical%20and%20electronics%20engineering | This glossary of electrical and electronics engineering is a list of definitions of terms and concepts related specifically to electrical engineering and electronics engineering. For terms related to engineering in general, see Glossary of engineering.
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
X
Y
Z
See also
Glossary of engineering
Glossary of civil engineering
Glossary of mechanical engineering
Glossary of structural engineering
References
Electrical-engineering-related lists
Electronic engineering
electrical and electronics engineering
Electrical and electronics engineering
Wikipedia glossaries using description lists | Glossary of electrical and electronics engineering | [
"Technology",
"Engineering"
] | 112 | [
"Electrical engineering",
"Electronic engineering",
"Electrical-engineering-related lists",
"Computer engineering"
] |
50,943,589 | https://en.wikipedia.org/wiki/Mayer%27s%20relation | In the 19th century, German chemist and physicist Julius von Mayer derived a relation between the molar heat capacity at constant pressure and the molar heat capacity at constant volume for an ideal gas. Mayer's relation states that
$C_{P,m} - C_{V,m} = R,$
where $C_{P,m}$ is the molar heat capacity at constant pressure, $C_{V,m}$ is the molar heat capacity at constant volume and $R$ is the gas constant.
For more general homogeneous substances, not just ideal gases, the difference takes the form
$C_{P,m} - C_{V,m} = \frac{V_{m} T \alpha_{V}^{2}}{\beta_{T}}$
(see relations between heat capacities), where $V_{m}$ is the molar volume, $T$ is the temperature, $\alpha_{V}$ is the thermal expansion coefficient and $\beta_{T}$ is the isothermal compressibility.
From this latter relation, several inferences can be made:
Since the isothermal compressibility $\beta_{T}$ is positive for nearly all phases, and the square of the thermal expansion coefficient $\alpha_{V}^{2}$ is always either a positive quantity or zero, the specific heat at constant pressure is nearly always greater than or equal to the specific heat at constant volume: $C_{P,m} \geq C_{V,m}$. There are no known exceptions to this principle for gases or liquids, but certain solids are known to exhibit negative compressibilities and presumably these would be (unusual) cases where $C_{P,m} < C_{V,m}$.
For incompressible substances, $C_{P,m}$ and $C_{V,m}$ are identical. Also for substances that are nearly incompressible, such as solids and liquids, the difference between the two specific heats is negligible.
As the absolute temperature of the system approaches zero, since both heat capacities must generally approach zero in accordance with the Third Law of Thermodynamics, the difference between $C_{P,m}$ and $C_{V,m}$ also approaches zero. Exceptions to this rule might be found in systems exhibiting residual entropy due to disorder within the crystal.
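A small numerical check of the two relations above is sketched below; the liquid-water property values are rough textbook figures used only to illustrate how small the difference becomes for a nearly incompressible substance.

```python
# Ideal gas: C_P,m - C_V,m should equal the gas constant R.
R = 8.314           # gas constant, J/(mol*K)

# General relation: C_P,m - C_V,m = V_m * T * alpha_V**2 / beta_T.
# Rough values for liquid water near 25 degrees C (illustrative only):
V_m = 1.807e-5      # molar volume, m^3/mol
T = 298.15          # temperature, K
alpha_V = 2.57e-4   # thermal expansion coefficient, 1/K
beta_T = 4.52e-10   # isothermal compressibility, 1/Pa

difference = V_m * T * alpha_V**2 / beta_T
print(f"C_P,m - C_V,m for liquid water: {difference:.2f} J/(mol*K)")  # roughly 0.8
print(f"Compare with R = {R} J/(mol*K) for an ideal gas")
```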
References
Thermodynamic equations | Mayer's relation | [
"Physics",
"Chemistry"
] | 324 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics"
] |
50,946,947 | https://en.wikipedia.org/wiki/First%20law%20of%20thermodynamics%20%28fluid%20mechanics%29 | In physics, the first law of thermodynamics is an expression of the conservation of total energy of a system. The increase of the energy of a system is equal to the sum of work done on the system and the heat added to that system:
$\Delta E_{\mathrm{tot}} = W + Q$
where
$E_{\mathrm{tot}}$ is the total energy of a system.
$W$ is the work done on it.
$Q$ is the heat added to that system.
In fluid mechanics, the first law of thermodynamics takes the following form:
$\frac{\partial E}{\partial t} + \nabla \cdot \left( E\, \mathbf{v} \right) = \nabla \cdot \left( \boldsymbol{\sigma} \cdot \mathbf{v} \right) - \nabla \cdot \mathbf{q}$
where $E$ is the total energy per unit volume (defined below),
$\boldsymbol{\sigma}$ is the Cauchy stress tensor,
$\mathbf{v}$ is the flow velocity,
and $\mathbf{q}$ is the heat flux vector.
Because it expresses conservation of total energy, this is sometimes referred to as the energy balance equation of continuous media. The first law is used to derive the non-conservation form of the Navier–Stokes equations.
Note
The Cauchy stress tensor can be decomposed as
$\boldsymbol{\sigma} = -p\, \mathbf{I} + \boldsymbol{\tau}$
Where
$p$ is the pressure
$\mathbf{I}$ is the identity matrix
$\boldsymbol{\tau}$ is the deviatoric stress tensor
That is, pulling is positive stress and pushing is negative stress.
Compressible fluid
For a compressible fluid the left-hand side of the equation becomes:
because in general
Integral form
That is, the change in the internal energy of the substance within a volume is the negative of the amount carried out of the volume by the flow of material across the boundary plus the work done compressing the material on the boundary minus the flow of heat out through the boundary. More generally, it is possible to incorporate source terms.
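A hedged LaTeX sketch of the balance just described is given below; the symbols follow the definitions used earlier in this article, but the exact notation is an assumption rather than the article's original equation.

```latex
% Energy balance for a fixed control volume V with boundary \partial V:
% storage = -(convective outflow) + (surface work by stress) - (heat outflow)
\frac{\mathrm{d}}{\mathrm{d}t} \int_{V} E \, \mathrm{d}V
  = - \oint_{\partial V} E \, \mathbf{v} \cdot \mathrm{d}\mathbf{S}
    + \oint_{\partial V} \left( \boldsymbol{\sigma} \cdot \mathbf{v} \right) \cdot \mathrm{d}\mathbf{S}
    - \oint_{\partial V} \mathbf{q} \cdot \mathrm{d}\mathbf{S}
```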
Alternative representation
where $h$ is the specific enthalpy, $\Phi$ is the dissipation function and $T$ is the temperature. And where
$E = \rho \left( e + \tfrac{1}{2} |\mathbf{v}|^{2} + gz \right)$
i.e. internal energy per unit volume equals mass density times the sum of: proper energy per unit mass, kinetic energy per unit mass, and gravitational potential energy per unit mass.
$-\nabla \cdot \mathbf{q} = \nabla \cdot \left( k\, \nabla T \right)$
i.e. change in heat per unit volume (negative divergence of heat flow) equals the divergence of heat conductivity times the gradient of the temperature.
$\nabla \cdot \left( \boldsymbol{\sigma} \cdot \mathbf{v} \right) = \mathbf{v} \cdot \left( \nabla \cdot \boldsymbol{\sigma} \right) + \boldsymbol{\sigma} : \nabla \mathbf{v}$
i.e. divergence of work done against stress equals flow of material times divergence of stress plus stress times divergence of material flow.
$\boldsymbol{\sigma} : \nabla \mathbf{v} = \boldsymbol{\tau} : \nabla \mathbf{v} - p\, \nabla \cdot \mathbf{v}$
i.e. stress times divergence of material flow equals deviatoric stress tensor times divergence of material flow minus pressure times material flow.
$h = e + \frac{p}{\rho}$
i.e. enthalpy per unit mass equals proper energy per unit mass plus pressure times volume per unit mass (reciprocal of mass density).
Alternative form data
The left-hand side of the Navier–Stokes equations minus the body force (per unit volume) acting on the fluid.
This relation is derived using a relationship which is an alternative form of the continuity equation.
See also
Clausius–Duhem inequality
Continuum mechanics
First law of thermodynamics
Material derivative
Incompressible flow
References
Thermodynamics
Fluid mechanics | First law of thermodynamics (fluid mechanics) | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 527 | [
"Civil engineering",
"Fluid mechanics",
"Thermodynamics",
"Dynamical systems"
] |
45,552,819 | https://en.wikipedia.org/wiki/Nano%20differential%20scanning%20fluorimetry | NanoDSF is a type of differential scanning fluorimetry (DSF) method used to determine conformational protein stability by employing intrinsic tryptophan or tyrosine fluorescence, as opposed to the use of extrinsic fluorogenic dyes that are typically monitored via a qPCR instrument. A nanoDSF assay is also known as a type of Thermal Shift Assay.
Protein stability is typically addressed by thermal or chemical unfolding experiments. In thermal unfolding experiments, a linear temperature ramp is applied to unfold proteins, whereas chemical unfolding experiments use chemical denaturants in increasing concentrations. The thermal stability of a protein is typically described by the 'melting temperature' or 'Tm', at which 50% of the protein population is unfolded, corresponding to the midpoint of the transition from folded to unfolded.
In contrast to conventional DSF methods, nanoDSF uses tryptophan or tyrosine fluorescence to monitor protein unfolding. Both the fluorescence intensity and the fluorescence maximum strongly depend on the close chemical environment of the tryptophan. Typically, interior tryptophan residues in a more hydrophobic environment exhibit a notable emission red shift from approximately 330 nm to 350 nm upon protein unfolding and exposure to water. Quantification of these fluorescence wavelength shifts at various temperature intervals yields a measurement of Tm. Accepted methods to detect and quantify the fluorescence wavelength shift include measuring the intensity at a single wavelength, computing a ratio of the intensity at two wavelengths (typically 330 nm and 350 nm), or calculating the barycentric mean (BCM) by measuring the center of mass of the fluorescence waveform. The latter BCM method takes advantage of the entire UV-fluorescence spectrum, thus allowing for flexibility when auto-fluorescent small molecules are present.
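A minimal sketch of the two quantification methods just described follows; the wavelength grid, the synthetic spectrum, and the function names are hypothetical placeholders rather than any instrument's actual output.

```python
import numpy as np

def fluorescence_ratio(wavelengths_nm, intensities, w_num=350.0, w_den=330.0):
    """Ratio of emission intensities at two wavelengths (typically 350 nm / 330 nm)."""
    return (np.interp(w_num, wavelengths_nm, intensities) /
            np.interp(w_den, wavelengths_nm, intensities))

def barycentric_mean(wavelengths_nm, intensities):
    """Barycentric mean (BCM): intensity-weighted center of mass of the spectrum."""
    return np.sum(wavelengths_nm * intensities) / np.sum(intensities)

# Hypothetical emission spectrum peaking near 335 nm (a folded-like state).
wl = np.linspace(310.0, 380.0, 71)
spectrum = np.exp(-((wl - 335.0) ** 2) / (2 * 12.0 ** 2))

print(fluorescence_ratio(wl, spectrum))  # below 1 for this folded-like spectrum
print(barycentric_mean(wl, spectrum))    # BCM shifts to longer wavelengths on unfolding
```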
Applications of nanoDSF include protein or antibody engineering, membrane protein research, quality control and formulation development, and ligand binding. NanoDSF has also been utilized to rapidly evaluate the melting points of enzyme libraries for biotechnological applications.
Currently there are at least four instruments on the market that can measure fluorescence wavelength shifts in a high-throughput manner while heating the samples through a defined temperature ramp. These instruments employ either proprietary quartz capillaries, cartridges, or plates or generic high-throughput 384-well plastic plates for sample analysis.
Applications
The nanoDSF technology was used to confirm on-target binding of BI-3231 to HSD17B13 and to elucidate its uncompetitive mode of inhibition with regard to NAD+.
NanoDSF was used to compare the thermal stability of a matched set of anti-CD20 antibodies representing a range of variants. The results revealed a spectrum of thermal stabilities across the variants.
References
Further reading
Biochemistry methods
Protein methods
Biophysics
Molecular biology
Laboratory techniques | Nano differential scanning fluorimetry | [
"Physics",
"Chemistry",
"Biology"
] | 580 | [
"Biochemistry methods",
"Applied and interdisciplinary physics",
"Protein methods",
"Protein biochemistry",
"Biophysics",
"nan",
"Molecular biology",
"Biochemistry"
] |