| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
34,265,825 | https://en.wikipedia.org/wiki/Geroch%27s%20splitting%20theorem | In the theory of causal structure on Lorentzian manifolds, Geroch's theorem or Geroch's splitting theorem (first proved by Robert Geroch) gives a topological characterization of globally hyperbolic spacetimes.
The theorem
A Cauchy surface can possess corners, and thereby need not be a differentiable submanifold of the spacetime; it is however always continuous (and even Lipschitz continuous). By using the flow of a vector field chosen to be complete, smooth, and timelike, it is elementary to prove that if a Cauchy surface $S$ is $C^k$-smooth then the spacetime is $C^k$-diffeomorphic to the product $\mathbb{R} \times S$, and that any two such Cauchy surfaces are $C^k$-diffeomorphic.
Robert Geroch proved in 1970 that every globally hyperbolic spacetime has a Cauchy surface $S$, and that the homeomorphism (as a $C^0$-diffeomorphism) to $\mathbb{R} \times S$ can be selected so that every surface of the form $\{t\} \times S$ is a Cauchy surface and each curve of the form $\mathbb{R} \times \{p\}$ is a continuous timelike curve.
Various foundational textbooks, such as George Ellis and Stephen Hawking's The Large Scale Structure of Space-Time and Robert Wald's General Relativity, asserted that smoothing techniques allow Geroch's result to be strengthened from a topological to a smooth context. However, this was not satisfactorily proved until work of Antonio Bernal and Miguel Sánchez in 2003. As a result of their work, it is known that every globally hyperbolic spacetime has a Cauchy surface which is smoothly embedded and spacelike. As they proved in 2005, the diffeomorphism to $\mathbb{R} \times S$ can be selected so that each surface of the form $\{t\} \times S$ is a smooth spacelike Cauchy surface and each curve of the form $\mathbb{R} \times \{p\}$ is a smooth timelike curve orthogonal to each surface $\{t\} \times S$.
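The Bernal–Sánchez smooth splitting is often summarized by the following metric form; the notation ($\beta$, $\bar g_t$) is a common convention chosen here rather than taken verbatim from the sources above.

```latex
% Smooth splitting of a globally hyperbolic metric (Bernal-Sanchez form):
% the spacetime is isometric to (\mathbb{R} \times S, g) with
g = -\beta \, dt^{2} + \bar{g}_{t},
% where \beta > 0 is a smooth function on \mathbb{R} \times S and \bar{g}_t
% is a smooth Riemannian metric on each slice \{t\} \times S, so every slice
% \{t\} \times S is a smooth spacelike Cauchy surface and the curves
% t \mapsto (t, p) are smooth timelike curves orthogonal to every slice.
```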
References
Sources
Theorems in general relativity
Lorentzian manifolds
Theorems in mathematical physics | Geroch's splitting theorem | [
"Physics",
"Mathematics"
] | 396 | [
"Mathematical theorems",
"Equations of physics",
"Theorems in general relativity",
"Theorems in mathematical physics",
"Relativity stubs",
"Theory of relativity",
"Mathematical problems",
"Physics theorems"
] |
34,270,640 | https://en.wikipedia.org/wiki/MC21-B | MC21-B is an antibiotic isolated from the O-BC30T strain of a marine bacterium, Pseudoalteromonas phenolica. MC21-B is cytotoxic to human leukaemia cells and human normal dermal fibroblasts.
See also
MC21-A
References
Antibiotics
Biphenyls
Dicarboxylic acids
Benzoic acids
Bromoarenes
Halogen-containing natural products | MC21-B | [
"Biology"
] | 91 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
36,842,000 | https://en.wikipedia.org/wiki/Cheng%20cycle | The Cheng cycle is a thermodynamic cycle which uses a combination of two working fluids, one gas and one steam. It can therefore be considered a combination of the Brayton cycle and the Rankine cycle. It was named for Dr. Dah Yu Cheng.
The company founded by Dr. Cheng has developed systems in partnership with both GM and GE turbine manufacturers to take advantage of the Cheng cycle by modification of existing turbine designs before construction. The Cheng cycle involves the heated exhaust gas from the turbine being used to make steam in a heat recovery steam generator (HRSG). The steam so produced is injected into the gas turbine's combustion chamber to increase power output. The process can be thought of as a parallel combination of the gas turbine Brayton cycle and a steam turbine Rankine cycle. The cycle was invented by Prof. Dah Yu Cheng of the University of Santa Clara who patented it in 1976.
A gas/steam two-fluid turbine designed fully around the Cheng cycle can achieve a theoretical thermal efficiency of 60%, matching or even exceeding many traditional combined-cycle gas turbines, which keep the steam Rankine cycle and the gas Brayton cycle as separate loops.
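A minimal first-law sketch of why steam injection raises output and efficiency; all numbers and the 80% heat-to-work conversion factor below are illustrative assumptions, not data for any actual Cheng-cycle machine.

```python
# Toy energy balance comparing a simple gas turbine to one with HRSG steam
# injection (Cheng cycle). All quantities are illustrative assumptions.

def simple_cycle_efficiency(w_turbine, w_compressor, q_fuel):
    """Net thermal efficiency of a plain Brayton cycle."""
    return (w_turbine - w_compressor) / q_fuel

def cheng_cycle_efficiency(w_turbine, w_compressor, q_fuel, q_recovered):
    """Steam raised in the HRSG from exhaust heat (q_recovered) is injected
    into the combustor and expands through the same turbine; treat the
    recovered heat as extra turbine work at an assumed conversion factor."""
    eta_expansion = 0.80  # assumed fraction of recovered heat turned into work
    extra_work = eta_expansion * q_recovered
    return (w_turbine + extra_work - w_compressor) / q_fuel

# Illustrative numbers, per 100 MW of fuel heat input:
q_fuel, w_turbine, w_compressor = 100.0, 65.0, 30.0
q_exhaust_recovered = 30.0  # assumed HRSG heat recovery from the exhaust

print(f"simple cycle: {simple_cycle_efficiency(w_turbine, w_compressor, q_fuel):.0%}")
print(f"Cheng cycle:  {cheng_cycle_efficiency(w_turbine, w_compressor, q_fuel, q_exhaust_recovered):.0%}")
```

With these toy numbers the net efficiency rises from 35% to 59%, illustrating how recovering exhaust heat into injected steam approaches the combined-cycle figure quoted above.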
See also
Combined cycle
Brayton cycle
Rankine cycle
Cogeneration
References
Power engineering
Thermodynamic cycles | Cheng cycle | [
"Physics",
"Chemistry",
"Engineering"
] | 255 | [
"Thermodynamics stubs",
"Energy engineering",
"Thermodynamics",
"Power engineering",
"Electrical engineering",
"Physical chemistry stubs"
] |
36,842,201 | https://en.wikipedia.org/wiki/Antiproton%20Accumulator | The Antiproton Accumulator (AA) was an infrastructure connected to the Proton–Antiproton Collider (SpS) – a modification of the Super Proton Synchrotron (SPS) – at CERN. The AA was built in 1979 and 1980, for the production and accumulation of antiprotons. In the SpS the antiprotons were made to collide with protons, achieving collisions at a center of mass energy of app. 540 GeV (later raised to 630 GeV and finally, in a pulsed mode, to 900 GeV). Several experiments recorded data from the collisions, most notably the UA1 and UA2 experiment, where the W and Z bosons were discovered in 1983.
The concept of the project was developed and promoted by C. Rubbia, for which he received the Nobel prize in 1984. He shared the prize with Simon van der Meer, whose invention of the method of stochastic cooling made large scale production of antiprotons possible for the first time.
Operation
Antiprotons were produced by directing an intense proton beam at a momentum of 26 GeV/c from the Proton Synchrotron (PS) onto a production target. The emerging burst of antiprotons had a momentum of 3.5 GeV/c, was selected via a spectrometer, and was injected into the AA. The produced antiprotons had a substantial momentum spread, which was decreased during a 2 s orbit around the AA using Simon van der Meer's method of stochastic cooling. The antiprotons were then trapped using a radiofrequency system and moved inwards in the orbit to a stacking region. The next burst of antiprotons arrived 2.4 s (the PS cycle time) after the preceding one. This process was repeated during the whole accumulation period, which took about a day. The most intense stack, obtained after many days, would typically contain 5.2×10¹¹ antiprotons.
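A back-of-the-envelope check on these accumulation figures; the average yield per burst is derived here from the quoted cycle time and the typical pre-upgrade daily rate mentioned later in this article, not taken directly from the sources.

```python
# Rough arithmetic on AA antiproton accumulation rates.
ps_cycle_s = 2.4                    # PS cycle time between antiproton bursts
day_s = 24 * 3600
bursts_per_day = day_s / ps_cycle_s            # = 36,000 bursts per day

daily_stack = 1.0e11                # typical pre-upgrade accumulation per day
avg_yield_per_burst = daily_stack / bursts_per_day

print(f"bursts per day:        {bursts_per_day:,.0f}")
print(f"avg antiprotons/burst: {avg_yield_per_burst:.1e}")   # ~2.8e6
```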
The dense core of antiprotons was then ejected from the AA and accelerated to 26 GeV/c using the PS. Three antiproton bunches were consecutively transferred to the Sp̄pS, one every 2.4 s. Just before the antiproton transfer, the PS would already have accelerated and transferred three proton bunches circulating in the opposite direction to the antiprotons. When three bunches of antiprotons and three bunches of protons had filled the Sp̄pS, the bunches were accelerated to 315 GeV, and the beams were circulated for hours. During this time the AA continued to accumulate, to be ready for the next day's transfer.
Antimatter experiments
From the beginning of the project, the potential of physics with low-energy antiprotons was recognized. A Low Energy Antiproton Ring (LEAR) was built and received antiprotons from the AA from 1983 on, for deceleration to as low as 100 MeV/c. The first artificially created antimatter, in the form of antihydrogen, was produced in a trapping experiment at LEAR in 1995. However, the first client for antiprotons from the AA had been the Intersecting Storage Rings (ISR), where proton–antiproton collisions were achieved early in 1981.
Upgrade of the Antiproton Accumulation system
To satisfy the need for more antiprotons, the ACOL (Antiproton COLlector) project was conceived in 1983 and implemented in 1986 and 1987. The antiproton production (target and target area) was upgraded; the Antiproton Collector (AC), with an acceptance in transverse and longitudinal phase space much larger than that of the AA, was built tightly around the AA; and the AA was modified accordingly. The AA accumulation rate, previously typically 10¹¹ antiprotons per day, was thus raised by an order of magnitude, to typically 10¹².
AC and AA together were referred to as the Antiproton Accumulation Complex (AAC). The AAC was one of the most highly automated complexes of accelerators of its time.
After the last run of the Sp̄pS, in 1991, LEAR remained the sole client of the AAC, and a simpler way to serve low-energy physics was sought. LEAR was converted to become the Low Energy Ion Ring (LEIR), the AA was dismantled, and the AC was converted to become the Antiproton Decelerator (AD).
See also
UA1 experiment
UA2 experiment
Stochastic cooling
W and Z bosons
Antiproton Collector
Super Proton–Antiproton Synchrotron
References
CERN accelerators
Particle accelerators
Accelerator physics
Particle physics facilities
CERN facilities
CERN | Antiproton Accumulator | [
"Physics"
] | 973 | [
"Accelerator physics",
"Applied and interdisciplinary physics",
"Experimental physics"
] |
36,842,500 | https://en.wikipedia.org/wiki/Brauer%E2%80%93Wall%20group | In mathematics, the Brauer–Wall group or super Brauer group or graded Brauer group for a field F is a group BW(F) classifying finite-dimensional graded central division algebras over the field. It was first defined by as a generalization of the Brauer group.
The Brauer group of a field F is the set of the similarity classes of finite-dimensional central simple algebras over F under the operation of tensor product, where two algebras are called similar if the commutants of their simple modules are isomorphic. Every similarity class contains a unique division algebra, so the elements of the Brauer group can also be identified with isomorphism classes of finite-dimensional central division algebras. The analogous construction for Z/2Z-graded algebras defines the Brauer–Wall group BW(F).
Properties
The Brauer group B(F) injects into BW(F) by mapping a CSA A to the graded algebra which is A in grade zero.
Wall (1964) showed that there is an exact sequence
0 → B(F) → BW(F) → Q(F) → 0
where Q(F) is the group of graded quadratic extensions of F, defined as an extension of Z/2 by F*/F*² with multiplication (e,x)(f,y) = (e + f, (−1)^{ef}xy). The map from BW(F) to Q(F) is the Clifford invariant, defined by mapping an algebra to the pair consisting of its grade and determinant.
There is a map from the additive group of the Witt–Grothendieck ring to the Brauer–Wall group, obtained by sending a quadratic space to its Clifford algebra. The map factors through the Witt group W(F), on which it has kernel I³, where I is the fundamental ideal of W(F).
Examples
BW(C) is isomorphic to Z/2Z. This is an algebraic aspect of Bott periodicity of period 2 for the unitary group. The two super division algebras are C and C[γ], where γ is an odd element of square 1 commuting with C.
BW(R) is isomorphic to Z/8Z. This is an algebraic aspect of Bott periodicity of period 8 for the orthogonal group. The eight super division algebras are R, R[ε], C[ε], H[δ], H, H[ε], C[δ], R[δ], where δ and ε are odd elements of square −1 and 1 respectively, such that conjugation by them on complex numbers is complex conjugation.
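The real case can be read off from Wall's exact sequence; the following computation is a standard consistency check, spelled out here rather than taken from the article's sources.

```latex
% For F = \mathbb{R}: B(\mathbb{R}) \cong \mathbb{Z}/2 (classes of \mathbb{R}
% and \mathbb{H}), and \mathbb{R}^*/\mathbb{R}^{*2} \cong \mathbb{Z}/2, so
% Q(\mathbb{R}) is an extension of \mathbb{Z}/2 by \mathbb{Z}/2. The twisted
% multiplication (e,x)(f,y) = (e+f,(-1)^{ef}xy) gives (1,1)^2 = (0,-1) \ne 1,
% so the extension is non-split and Q(\mathbb{R}) \cong \mathbb{Z}/4. The sequence
0 \to \mathbb{Z}/2 \to BW(\mathbb{R}) \to \mathbb{Z}/4 \to 0
% is again non-split, consistent with BW(\mathbb{R}) \cong \mathbb{Z}/8.
```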
Notes
References
Field (mathematics)
Quadratic forms
Super linear algebra | Brauer–Wall group | [
"Physics",
"Mathematics"
] | 565 | [
"Symmetry",
"Super linear algebra",
"Quadratic forms",
"Supersymmetry",
"Number theory"
] |
41,033,059 | https://en.wikipedia.org/wiki/Explicit%20algebraic%20stress%20model | The algebraic stress model arises in computational fluid dynamics. Two main approaches can be undertaken. In the first, the transport of the turbulent stresses is assumed proportional to the turbulent kinetic energy; while in the second, convective and diffusive effects are assumed to be negligible. Algebraic stress models can only be used where convective and diffusive fluxes are negligible, i.e. source dominated flows. In order to simplify the existing EASM and to achieve an efficient numerical implementation the underlying tensor basis plays an important role. The five-term tensor basis that is introduced here tries to combine an optimum of accuracy of the complete basis with the advantages of a pure 2d concept. Therefore a suitable five-term basis is identified. Based on that the new model is designed and validated in combination with different eddy-viscosity type background models.
Integrity basis
Within the framework of single-point closures, Reynolds-stress transport models (RSTM) still provide the best representation of the flow physics. Due to numerical requirements, an explicit formulation based on a small number of tensors is desirable. Originally, most explicit algebraic stress models were formulated using a 10-term tensor basis.
The reduction of the tensor basis, however, requires an enormous mathematical effort to transform the algebraic stress formulation of a given linear algebraic RSTM into a given tensor basis while keeping all important properties of the underlying model. This transformation can be applied to an arbitrary tensor basis. In the present investigations an optimal set of basis tensors and the corresponding coefficients is to be found.
Projection method
The projection method was introduced to enable an approximate solution of the algebraic transport equation of the Reynolds stresses. In contrast to the direct-insertion approach, the tensor basis is not inserted into the algebraic equation; instead, the algebraic equation is projected onto the basis tensors. The chosen basis tensors therefore do not need to form a complete integrity basis. However, the projection will fail if the basis tensors are linearly dependent. In the case of a complete basis, the projection leads to the same solution as direct insertion; otherwise an approximate solution in a least-squares sense is obtained.
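A minimal sketch of such a Galerkin-type projection onto a tensor basis; the basis tensors and the "target" right-hand side below are toy stand-ins, not the RSTM of the references.

```python
import numpy as np

# Toy Galerkin projection: approximate a target tensor b (standing in for the
# right-hand side of an algebraic stress relation) as sum_i g_i T_i.
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3)); S = 0.5 * (S + S.T)   # symmetric "strain" tensor
W = rng.standard_normal((3, 3)); W = 0.5 * (W - W.T)   # antisymmetric "rotation" tensor

# A small (incomplete) tensor basis, e.g. S, SW - WS, S^2 - tr(S^2)/3 I.
I = np.eye(3)
T = [S, S @ W - W @ S, S @ S - np.trace(S @ S) / 3.0 * I]

b = S @ S @ W - W @ S @ S          # toy symmetric target tensor

def inner(a, c):
    """Frobenius inner product <A, C> = A_ij C_ij used for the projection."""
    return np.tensordot(a, c)

# Project: solve the Gram system  sum_j <T_i, T_j> g_j = <T_i, b>.
G = np.array([[inner(Ti, Tj) for Tj in T] for Ti in T])
rhs = np.array([inner(Ti, b) for Ti in T])
g = np.linalg.solve(G, rhs)        # fails if the T_i are linearly dependent

approx = sum(gi * Ti for gi, Ti in zip(g, T))
print("coefficients:", g)
print("residual norm:", np.linalg.norm(b - approx))  # zero only if b is in the span
```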
An example
In order to show that the projection method leads to the same solution as direct insertion, the EASM for two-dimensional flows is derived. In two-dimensional flows only the first three basis tensors, T^(1), T^(2) and T^(3), are independent.
The projection then leads to the same coefficients. This two-dimensional EASM is used as the starting point for an optimized EASM that includes three-dimensional effects. For example, the shear stress variation in a rotating pipe cannot be predicted with quadratic tensors; hence the EASM was extended with a cubic tensor. In order not to affect the performance in 2D flows, a cubic tensor was chosen that vanishes in two-dimensional flows. This allows the coefficient determination to be concentrated on 3D flows.
The projection with tensors T^(1), T^(2), T^(3) and T^(5) then yields the coefficients of the EASM.
Limitation of Cμ
A direct result of the EASM derivation is a variable formulation of C_μ. As the generators of the extended EASM were chosen to preserve the existing 2D formulation, the expression for C_μ remains unchanged.
The A_i are the constants of the underlying pressure–strain model.
Since η₁ is always positive, C_μ may become singular. Therefore, in the first EASM derivation a regularization was introduced, which prevents the singularity by cutting off the range of η₁. However, Wallin et al. pointed out that the regularization deteriorates the performance of the EASM. In their model the methodology was refined to account for the coefficient g.
This leads to a weakly nonlinear conditional equation for the EASM coefficients, and an additional equation for g must be solved. In 3D the equation for g is of 6th order, so a closed solution is only possible in 2D flows, where the equation reduces to 3rd order. In order to circumvent root finding for a polynomial equation, a quasi-self-consistent approach was proposed: it can be shown that by using the C_μ expression of a realizable linear model instead of the EASM C_μ expression in the equation for g, the same properties of g follow. For a wide range of the invariants the quasi-self-consistent approach is almost identical to the fully self-consistent solution. Thus the quality of the EASM is not affected, with the advantage that no additional nonlinear equation must be solved. In the projections used to determine the EASM coefficients, the complexity is further reduced by neglecting higher-order invariants.
References
Gatski, T.B. and Speziale, C.G., "On explicit algebraic stress models for complex turbulent flows". J. Fluid Mech.
Rung, T., "Entwicklung anisotroper Wirbelzähigkeitsbeziehungen mit Hilfe von Projektionstechniken", PhD thesis, Technische Universität Berlin, 2000
Taulbee, D.B., "An improved algebraic Reynolds stress model and corresponding nonlinear stress model", Phys. Fluids, 28, pp 2555–2561, 1992
Lübcke, H., Rung, T. and Thiele, F. "Prediction of the Spreading Mechanism of 3D Turbulent Wall Jets with Explicit Reynolds-Stress Closures", Eng. Turbulence Modelling and Experiments 5, Mallorca, 2002
Wallin, S. and Johansson, A.V., "A new explicit algebraic Reynolds stress turbulence model including an improved near-wall treatment", Flow Modelling and Turbulence Measurements IV
Jongen, T. and Gatski, T.B., "General explicit algebraic stress relations and best approximations for three-dimensional flows", Int. J. Engineering Science
Fluid mechanics
Computational fluid dynamics
Numerical analysis
Applied mathematics
Functional analysis | Explicit algebraic stress model | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,225 | [
"Functions and mappings",
"Functional analysis",
"Computational fluid dynamics",
"Applied mathematics",
"Mathematical objects",
"Computational mathematics",
"Fluid mechanics",
"Computational physics",
"Civil engineering",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Flu... |
41,034,291 | https://en.wikipedia.org/wiki/PISO%20algorithm | PISO algorithm (Pressure-Implicit with Splitting of Operators) was proposed by Issa in 1986 without iterations and with large time steps and a lesser computing effort. It is an extension of the SIMPLE algorithm used in computational fluid dynamics to solve the Navier-Stokes equations. PISO is a pressure-velocity calculation procedure for the Navier-Stokes equations developed originally for non-iterative computation of unsteady compressible flow, but it has been adapted successfully to steady-state problems.
PISO involves one predictor step and two corrector steps and is designed to satisfy mass conservation using predictor-corrector steps.
Algorithm steps
The algorithm can be summed up as follows:
1. Set the boundary conditions.
2. Solve the discretized momentum equation to compute an intermediate velocity field.
3. Compute the mass fluxes at the cell faces.
4. Solve the pressure equation.
5. Correct the mass fluxes at the cell faces.
6. Correct the velocities on the basis of the new pressure field.
7. Update the boundary conditions.
8. Repeat from step 3 for the prescribed number of times.
9. Increase the time step and repeat from step 1.
Steps 4 and 5 can be repeated for a prescribed number of times to correct for non-orthogonality; a sketch of the overall loop is given below.
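A structural sketch of the loop above; every "solve" here is a toy 1-D stand-in so the skeleton runs end to end, and none of the helper functions come from any real CFD library.

```python
from dataclasses import dataclass
import numpy as np

# Toy 1-D stand-ins for the discretized operators. Each helper is a
# placeholder assumption, not a real momentum or pressure solver.

@dataclass
class Fields:
    u: np.ndarray   # cell-centred velocity
    p: np.ndarray   # cell-centred pressure

def apply_boundary_conditions(f):            # step 1
    f.u[0] = f.u[-1] = 0.0

def solve_momentum(f, dt):                   # step 2: predictor u*
    return f.u - dt * np.gradient(f.p)

def face_mass_fluxes(u):                     # step 3
    return 0.5 * (u[:-1] + u[1:])

def solve_pressure(flux):                    # step 4: correction ~ continuity defect
    return np.concatenate(([0.0], np.diff(flux), [0.0]))

def correct_fluxes(flux, p_prime):           # step 5
    return flux - np.diff(p_prime)

def correct_velocity(u, p_prime):            # step 6
    return u - np.gradient(p_prime)

def piso_step(f, dt, n_correctors=2):
    apply_boundary_conditions(f)
    u_star = solve_momentum(f, dt)
    flux = face_mass_fluxes(u_star)
    for _ in range(n_correctors):            # PISO: predictor + >= 2 correctors
        p_prime = solve_pressure(flux)
        flux = correct_fluxes(flux, p_prime)
        u_star = correct_velocity(u_star, p_prime)
        f.p = f.p + p_prime
    f.u = u_star                             # step 7: fields ready for next step

f = Fields(u=np.linspace(0.0, 1.0, 8), p=np.zeros(8))
for _ in range(3):                           # outer time loop (steps 8-9)
    piso_step(f, dt=0.1)
print(f.u.round(3), f.p.round(3))
```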
Predictor step
Guess the pressure field $p^*$ and obtain the velocity components $u^*$ and $v^*$ from the discretized momentum equation. The initial guess for the pressure may or may not be correct.

Corrector step 1

The velocity field obtained from the predictor step may not satisfy the continuity equation, so corrections $p'$, $u'$, $v'$ to the pressure and velocity fields are defined such that

$$p^{**} = p^* + p', \qquad u^{**} = u^* + u', \qquad v^{**} = v^* + v'$$

where $p^{**}$, $u^{**}$, $v^{**}$ are the corrected pressure field and velocity components, $p'$, $u'$, $v'$ are the corrections, and $p^*$, $u^*$, $v^*$ are the guessed (predictor) fields. By putting the corrected pressure field $p^{**}$ into the discretized momentum equation we get the corrected velocity components $u^{**}$ and $v^{**}$. Once the pressure correction is known, the velocity corrections $u'$ and $v'$ can be found from it.
Corrector step 2
In PISO a second corrector step is applied:

$$p^{***} = p^{**} + p'', \qquad u^{***} = u^{**} + u'', \qquad v^{***} = v^{**} + v''$$

where $p^{***}$, $u^{***}$, $v^{***}$ are the twice-corrected pressure field and velocity components, and $p''$, $u''$, $v''$ are the second corrections to the pressure and velocity fields. One then sets

$$p = p^{***}, \qquad u = u^{***}, \qquad v = v^{***}$$

as the correct pressure and velocity fields for the current time step.
Advantages and disadvantages
PISO generally gives more stable results and takes less CPU time than SIMPLE, but it is not suitable for all processes.
It requires suitable numerical schemes for solving the pressure–velocity-linked equations.
For a laminar backward-facing step, PISO is faster than SIMPLE, but it is slower for flow through a heated fin.
If the momentum and scalar equations are weakly coupled or uncoupled, PISO performs better than SIMPLEC.
Among these variants, PISO is often the most time-effective method.
See also
Fluid mechanics
Computational fluid dynamics
Algorithm
SIMPLE algorithm
SIMPLER algorithm
SIMPLEC algorithm
References
Versteeg, H. K. and Malalasekera, W., An Introduction to Computational Fluid Dynamics: The Finite Volume Method, 2nd ed.
Computational Fluid Dynamics for Engineers by Bengt Andersson, Ronnie Andersson, Love Håkansson, Mikael Mortensen, Rahman Sudiyo, Berend van Wachem
Computational Fluid Dynamics in Fire Engineering: Theory, Modelling and Practice by Guan Heng Yeoh, Kwok Kit Yuen
http://openfoamwiki.net/index.php/OpenFOAM_guide/The_PISO_algorithm_in_OpenFOAM
Computational fluid dynamics by T. J. Chung, University of Alabama in Huntsville
Ferziger, Joel H. and Perić, Milovan, Computational Methods for Fluid Dynamics
Issa, R., "Solution of the implicitly discretized fluid flow equations by operator-splitting", Journal of Computational Physics, 62 (1986)
Computational fluid dynamics | PISO algorithm | [
"Physics",
"Chemistry"
] | 734 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
41,036,105 | https://en.wikipedia.org/wiki/Great%20Annihilator | 1E1740.7-2942, or the Great Annihilator, is a Milky Way microquasar, located near the Galactic Center on the sky. It likely consists of a black hole and a companion star. It is one of the brightest X-ray sources in the region around the Galactic Center.
The object was first detected in soft X-rays by the Einstein Observatory, and later detected in hard X-rays by the Soviet Granat space observatory. Follow-up observations by the SIGMA detector on board Granat showed that the object was a variable emitter of large numbers of photon pairs at 511 keV, which usually indicates the annihilation of electron–positron pairs. This led to the nickname "Great Annihilator". Early observations also showed a spectrum similar to that of Cygnus X-1, a black hole with a stellar companion, which suggested that the Great Annihilator is also a stellar-mass black hole.
The object also has a radio counterpart that emits jets approximately 1.5 pc (5 ly) long. These jets are probably synchrotron emission from electron–positron pairs streaming out at high velocities from the source of antimatter. Modeling of the observed precession of these jets gives a distance of approximately 5 kpc (16,000 ly). This means that while the object likely lies along our line of sight towards the center of the Milky Way, it may be closer to us than Sagittarius A*, the black hole at the center of our galaxy.
References
Ophiuchus
Stellar black holes
Microquasars | Great Annihilator | [
"Physics",
"Astronomy"
] | 344 | [
"Black holes",
"Stellar black holes",
"Unsolved problems in physics",
"Constellations",
"Ophiuchus"
] |
41,038,479 | https://en.wikipedia.org/wiki/High-definition%20fiber%20tracking | High definition fiber tracking (HDFT) is a tractography technique where data from MRI scanners is processed through computer algorithms to reveal the detailed wiring of the brain and to pinpoint fiber tracts. Each tract contains millions of neuronal connections. HDFT is based on data acquired from diffusion spectrum imaging and processed by generalized q-sampling imaging. The technique makes it possible to virtually dissect 40 major fiber tracts in the brain. The HDFT scan is consistent with brain anatomy unlike diffusion tensor imaging (DTI). Thus, the use of HDFT is essential in pinpointing damaged neural connections.
History
Traditional DTI uses six diffusivity characteristics to model how water molecules diffuse in brain tissue, which makes axonal fiber tracking possible. However, DTI had a major limitation in resolving axons from different tracts that intersect and cross en route to their targets. In 2009, the Learning Research & Development Center (LRDC) at the University of Pittsburgh launched the 2009 Pittsburgh Brain Competition to invite the best research teams to work on this problem. A prize of $10,000 was offered to the team that could track the optic radiations, and teams from 168 countries took part in the competition. A winning team from Taiwan revealed Meyer's loop, which no other team had successfully tracked. The key to the method was multiple observations of water molecules and improved algorithms to better capture how axons connect brain regions. The technique was further developed as HDFT in a collaboration between the University of Pittsburgh and Carnegie Mellon University.
HDFT is currently used by UPMC neurosurgery department to provide neurosurgical planning, neuro-structural damage assessment, intraoperative navigation, and evaluation of changes and responses to rehabilitation therapy after brain surgery.
Applications
HDFT has been applied to traumatic brain injury (TBI) to identify which brain connections have been broken and which are still intact. HDFT allows neurosurgeons to localize fiber breaks caused by traumatic brain injuries to provide better diagnoses and prognoses. It could also provide an objective way of identifying brain injury, predicting outcome and planning rehabilitation. HDFT can also be used to determine the optimal surgical approach for difficult-to-reach tumors and vascular malformations.
See also
Diffusion MRI (DTI), uses the magnetic properties of water
Functional magnetic resonance imaging (fMRI), measures blood flow to infer neural activity
References
External links
Fiber Tractography Lab
pitt.edu: Concept HDFT
thejns.org: High-definition fiber tracking for assessment of neurological deficit in a case of traumatic brain injury: finding, visualizing, and interpreting small sites of damage Case report (2012-04-30)
upmc.com: New High Definition Fiber Tracking Reveals Damage Caused by Traumatic Brain Injury, Pitt Team Reports (2012-03-02)
hdft.info: HDFT for Connection Disorders
pitt.edu: Featured in this "60 Minutes" feature: Apps for Autism (2011-10-23)
Inventions
Magnetic resonance imaging | High-definition fiber tracking | [
"Chemistry"
] | 607 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
41,038,739 | https://en.wikipedia.org/wiki/Spherical%20category | In category theory, a branch of mathematics, a spherical category is a pivotal category (a monoidal category with traces) in which left and right traces coincide.
Spherical fusion categories give rise to a family of three-dimensional topological state sum models (a particular formulation of a topological quantum field theory), the Turaev–Viro model, or more precisely the Turaev–Viro–Barrett–Westbury model.
References
Category theory | Spherical category | [
"Mathematics"
] | 89 | [
"Functions and mappings",
"Mathematical structures",
"Category theory stubs",
"Mathematical objects",
"Fields of abstract algebra",
"Mathematical relations",
"Category theory"
] |
41,038,878 | https://en.wikipedia.org/wiki/Ross%20Street | Ross Howard Street (born 29 September 1945, Sydney) is an Australian mathematician specialising in category theory.
Biography
Street completed his undergraduate and postgraduate study at the University of Sydney, where his dissertation advisor was Max Kelly. He is an emeritus professor of mathematics at Macquarie University, a fellow of the Australian Mathematical Society (1995), and was elected Fellow of the Australian Academy of Science in 1989. He was awarded the Edgeworth David Medal of the Royal Society of New South Wales in 1977, and the Australian Mathematical Society's George Szekeres Medal in 2012.
References
External links
Personal webpage, maths.mq.edu.au
Living people
Australian mathematicians
Category theorists
Academic staff of Macquarie University
Fellows of the Australian Academy of Science
1945 births | Ross Street | [
"Mathematics"
] | 150 | [
"Category theorists",
"Mathematical structures",
"Category theory"
] |
41,040,671 | https://en.wikipedia.org/wiki/Fiber%20functor | Fiber functors in category theory, topology and algebraic geometry refer to several loosely related functors that generalise the functors taking a covering space to the fiber over a point .
Definition
A fiber functor (or fibre functor) is a loose concept which has multiple definitions depending on the formalism considered. One of the main initial motivations for fiber functors comes from topos theory. Recall that a topos is the category of sheaves over a site. If a site consists of just a single object, as with a point, then the topos of the point is equivalent to the category of sets, $\mathbf{Set}$. If we have the topos of sheaves on a topological space $X$, denoted $\mathrm{Sh}(X)$, then giving a point $x$ in $X$ is equivalent to defining a pair of adjoint functors $x^* : \mathrm{Sh}(X) \rightleftarrows \mathbf{Set} : x_*$. The functor $x^*$ sends a sheaf $\mathcal{F}$ on $X$ to its fiber over the point $x$; that is, its stalk $\mathcal{F}_x$.
From covering spaces
Consider the category of covering spaces over a topological space $X$, denoted $\mathrm{Cov}(X)$. Then, from a point $x \in X$ there is a fiber functor

$$\mathrm{Fib}_x : \mathrm{Cov}(X) \to \mathbf{Set}$$

sending a covering space $\pi : \tilde{X} \to X$ to the fiber $\pi^{-1}(x)$. This functor has automorphisms coming from $\pi_1(X,x)$, since the fundamental group acts on the covering spaces of a topological space $X$. In particular, it acts on the set $\pi^{-1}(x)$. In fact, the only automorphisms of $\mathrm{Fib}_x$ come from $\pi_1(X,x)$.
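A concrete instance, spelled out here under standard conventions (the notation $\mathrm{Fib}_x$ follows the paragraph above):

```latex
% For X = S^1 with base point x, \pi_1(S^1, x) \cong \mathbb{Z}.
% The connected n-fold cover p_n : S^1 \to S^1, z \mapsto z^n, has fiber
\mathrm{Fib}_x(p_n) = p_n^{-1}(x) \cong \mathbb{Z}/n\mathbb{Z},
% on which a generator of \pi_1(S^1, x) acts by the cyclic shift k \mapsto k+1,
% while the universal cover \mathbb{R} \to S^1 has fiber \mathbb{Z} with the
% translation action. The fiber functor thus realizes the equivalence between
% covering spaces of S^1 and sets with a \mathbb{Z}-action.
```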
With étale topologies
There is an algebraic analogue of covering spaces coming from the étale topology on a connected scheme $S$. The underlying site consists of finite étale covers, which are finite flat surjective morphisms $X \to S$ such that the fiber over every geometric point is the spectrum of a finite étale algebra over the residue field. For a fixed geometric point $\bar{s} \to S$, consider the geometric fiber $X \times_S \bar{s}$ and let $\mathrm{Fib}_{\bar{s}}(X)$ be its underlying set of points. Then

$$\mathrm{Fib}_{\bar{s}} : \mathrm{F\text{É}t}_S \to \mathbf{Set}$$

is a fiber functor, where $\mathrm{F\text{É}t}_S$ is the site of finite étale covers of $S$. In fact, it is a theorem of Grothendieck that the automorphisms of $\mathrm{Fib}_{\bar{s}}$ form a profinite group, denoted $\pi_1^{\text{ét}}(S, \bar{s})$, and induce a continuous group action on these finite fiber sets, giving an equivalence between covers and finite sets with such actions.
From Tannakian categories
Another class of fiber functors comes from cohomological realizations of motives in algebraic geometry. For example, the de Rham cohomology functor sends a motive $M(X)$ to its underlying de Rham cohomology groups $H^*_{dR}(X)$.
See also
Topos
Étale topology
Motive (algebraic geometry)
Anabelian geometry
References
External links
SGA 4 and SGA 4 IV
Motivic Galois group - https://web.archive.org/web/20200408142431/https://www.him.uni-bonn.de/fileadmin/him/Lecture_Notes/motivic_Galois_group.pdf
Category theory
Monoidal categories | Fiber functor | [
"Mathematics"
] | 555 | [
"Functions and mappings",
"Mathematical structures",
"Monoidal categories",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
41,040,854 | https://en.wikipedia.org/wiki/Upwind%20differencing%20scheme%20for%20convection | The upwind differencing scheme is a method used in numerical methods in computational fluid dynamics for convection–diffusion problems. This scheme is specific for Peclet number greater than 2 or less than −2
Description
By taking into account the direction of the flow, the upwind differencing scheme overcomes the inability of the central differencing scheme to handle strongly convective flows. The scheme is developed for strong convective flows with suppressed diffusion effects. Also known as the 'donor cell' differencing scheme, it takes the convected value of the property $\phi$ at a cell face from the upstream node.
The scheme is described by the steady convection–diffusion partial differential equation

$$\nabla \cdot (\rho \mathbf{u} \phi) = \nabla \cdot (\Gamma \nabla \phi) + S_\phi$$

together with the continuity equation

$$\nabla \cdot (\rho \mathbf{u}) = 0$$

where $\rho$ is density, $\Gamma$ is the diffusion coefficient, $\mathbf{u}$ is the velocity vector, $\phi$ is the property to be computed, and $S_\phi$ is the source term; the subscripts $e$ and $w$ refer to the "east" and "west" faces of a cell. After discretization, applying the continuity equation, and taking the source term equal to zero, we get the central difference discretized equation

$$F_e \phi_e - F_w \phi_w = D_e (\phi_E - \phi_P) - D_w (\phi_P - \phi_W)$$

Lower-case subscripts denote faces and upper-case subscripts denote nodes; $E$, $W$ and $P$ refer to the "East", "West" and "Central" cells.
Defining the variable $F$ as the convective mass flux and the variable $D$ as the diffusion conductance,

$$F = \rho u \quad \text{and} \quad D = \frac{\Gamma}{\delta x},$$

the Peclet number ($Pe$) is a non-dimensional parameter determining the comparative strengths of convection and diffusion:

$$Pe = \frac{F}{D} = \frac{\rho u \, \delta x}{\Gamma}$$
For low Peclet numbers (|Pe| < 2) diffusion is dominant, and the central difference scheme is used. For convection-dominated flows with |Pe| > 2, the upwind scheme is used.
For positive flow direction ($F_w > 0$, $F_e > 0$), strong convection and suppressed diffusion give $\phi_w = \phi_W$ and $\phi_e = \phi_P$, so the corresponding upwind scheme equation is

$$F_e \phi_P - F_w \phi_W = D_e (\phi_E - \phi_P) - D_w (\phi_P - \phi_W). \qquad (3)$$

Rearranging equation (3) gives

$$a_P \phi_P = a_W \phi_W + a_E \phi_E,$$

and identifying the coefficients,

$$a_W = D_w + F_w, \qquad a_E = D_e.$$

For negative flow direction ($F_w < 0$, $F_e < 0$), the scheme sets $\phi_w = \phi_P$ and $\phi_e = \phi_E$, and the corresponding upwind scheme equation is

$$F_e \phi_E - F_w \phi_P = D_e (\phi_E - \phi_P) - D_w (\phi_P - \phi_W). \qquad (4)$$

Rearranging equation (4) gives the same form with coefficients

$$a_W = D_w, \qquad a_E = D_e - F_e.$$

The coefficients can be generalized for both flow directions as

$$a_W = D_w + \max(F_w, 0), \qquad a_E = D_e + \max(0, -F_e), \qquad a_P = a_W + a_E + (F_e - F_w).$$
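A small, self-contained sketch of these coefficient formulas on a uniform 1-D grid; the grid size, boundary values, and physical constants are arbitrary choices for illustration, and the boundary treatment is deliberately simplified.

```python
import numpy as np

# 1-D steady convection-diffusion on a uniform grid with the upwind scheme:
# a_W = D + max(F, 0), a_E = D + max(0, -F), a_P = a_W + a_E + (F_e - F_w).
n, length = 20, 1.0
dx = length / n
rho, u, gamma = 1.0, 10.0, 0.1         # arbitrary illustrative values
F = rho * u                            # convective mass flux (uniform)
D = gamma / dx                         # diffusion conductance
print("cell Peclet number:", F / D)    # = 5, so upwind is appropriate (|Pe| > 2)

aW = D + max(F, 0.0)
aE = D + max(0.0, -F)
aP = aW + aE                           # F_e - F_w = 0 for a uniform flux

# Assemble the tridiagonal system a_P*phi_P = a_W*phi_W + a_E*phi_E.
A = np.zeros((n, n))
b = np.zeros(n)
phi0, phiL = 1.0, 0.0                  # Dirichlet boundary values
for i in range(n):
    A[i, i] = aP
    if i > 0:
        A[i, i - 1] = -aW
    else:
        b[i] += aW * phi0              # fold the left boundary value into the RHS
    if i < n - 1:
        A[i, i + 1] = -aE
    else:
        b[i] += aE * phiL              # fold the right boundary value into the RHS

phi = np.linalg.solve(A, b)
print(phi.round(3))                    # monotone profile, no oscillations
```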
Use
The solution in the central difference scheme fails to converge for Peclet numbers greater than 2; this can be overcome by using the upwind scheme, which gives a reasonable result.
Therefore, the upwind differencing scheme is applicable for Pe > 2 for positive flow and Pe < −2 for negative flow. For other values of Pe, this scheme does not give an effective solution.
Assessment
Conservativeness
The upwind differencing scheme formulation is conservative.
Boundedness
The coefficients of the discretised equation are always positive, satisfying the requirements for boundedness; the coefficient matrix is also diagonally dominant, so no irregularities occur in the solution.
Transportiveness
Transportiveness is built into the formulation as the scheme already accounts for the flow direction.
Accuracy
Based on the backward differencing formula, the accuracy is only first order in terms of the Taylor series truncation error. The scheme produces errors when the flow is not aligned with the grid lines: the distribution of the transported property becomes smeared, giving a diffusion-like appearance known as false diffusion. Grid refinement serves to overcome false diffusion: as the grid size decreases, false diffusion decreases, increasing the accuracy.
References
See also
Central differencing scheme
Finite difference
Upwind scheme
Computational fluid dynamics
Numerical differential equations | Upwind differencing scheme for convection | [
"Physics",
"Chemistry"
] | 667 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
41,042,161 | https://en.wikipedia.org/wiki/Calculation%20of%20buoyancy%20flows%20and%20flows%20inside%20buildings | Buoyancy force is the defined as the force exerted on the body or an object when inserted in a fluid. Buoyancy force is based on the basic principle of pressure variation with depth, since pressure increases with depth. Hence buoyancy force arises as pressure on the bottom surface of the immersed object is greater than that at the top.
Flow problems in buildings have been studied since around 700 BC. Recent advancements in CFD and CAE have enabled comprehensive calculation of buoyancy flows and flows in buildings.
Calculation of buoyant flows and flow inside buildings
Flows inside buildings fall into the buoyancy-driven category, since naturally driven ventilation results from temperature differences inside the building. To capture the resulting buoyant forces, the momentum equation in the direction of gravity must include a buoyancy term. The momentum equation is given by
$$\frac{\partial (\rho v)}{\partial t} + \nabla \cdot (\rho v \mathbf{V}) = -g(\rho - \rho_0) - \nabla P + \mu \nabla^2 v + S_v$$

In the above equation $-g(\rho - \rho_0)$ is the buoyancy term, where $\rho_0$ is the reference density.
Discretizing the above equation can produce instabilities during the solution process. Hence a transient approach is used, as several relaxations are often required to obtain a steady-state solution.
When applied to turbulent flows, some additional modifications are required to calculate buoyant flows. As recommended by Rodi (1978), an additional term is added to the k equation of the k–ε model when modelling turbulent buoyant flows. The k equation then takes the form
$$\frac{\partial (\rho k)}{\partial t} + \nabla \cdot (\rho k \mathbf{u}) = \nabla \cdot \left( \frac{\mu_t}{\sigma_k} \nabla k \right) + G + B - \rho \varepsilon$$

where

$G$ = the usual production (generation) term $= 2 \mu_t E_{ij} E_{ij}$,

$B$ = the generation term related to buoyancy,

$$B = \beta g_i \frac{\mu_t}{\sigma_t} \frac{\partial T}{\partial x_i}$$

where

$T$ = temperature,

$g_i$ = gravitational acceleration in the $x_i$-direction,

$\beta$ = volumetric expansion coefficient $= -\dfrac{1}{\rho} \dfrac{\partial \rho}{\partial T}$.
The modeled transport equation for the dissipation rate ε is given as

$$\frac{\partial (\rho \varepsilon)}{\partial t} + \nabla \cdot (\rho \varepsilon \mathbf{u}) = \nabla \cdot \left( \frac{\mu_t}{\sigma_\varepsilon} \nabla \varepsilon \right) + C_{1\varepsilon} \frac{\varepsilon}{k} (G + B)(1 + C_3 R_f) - C_{2\varepsilon} \rho \frac{\varepsilon^2}{k}$$
where

$R_f$ = flux Richardson number,

$C_3$ = additional model constant.

The flux Richardson number as defined by Hossain and Rodi (1976) is $R_f = -B/G$.

Since $C_3$ is close to unity in vertical buoyant shear layers and close to zero in horizontal shear layers, a single constant value of $C_3$ cannot be used; $R_f$ is instead taken as

$$R_f = -\frac{G_l}{2(G+B)}$$

where

$G_l$ = buoyancy production of the lateral energy component.

If we consider a horizontal shear layer, where the lateral velocity component is in the direction of gravity, the buoyancy production is $G_l = 2B$. If we consider a vertical shear layer, the direction of gravity and the lateral component are normal to each other, hence $G_l = 0$. Therefore we obtain

$$R_f = -\frac{B}{B+G} \quad \text{for horizontal layers}, \qquad R_f = 0 \quad \text{for vertical layers}.$$

Finally, in a given flow where vertical shear stresses are dominant, $R_f$ can be set equal to zero and $C_3 = 0.8$ taken.
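A small helper expressing the layer-dependent flux Richardson number above; the function name and the layer flag are illustrative conventions introduced here, not taken from the cited works.

```python
def flux_richardson(G, B, layer="horizontal"):
    """Flux Richardson number R_f = -G_l / (2 (G + B)), with lateral buoyancy
    production G_l = 2B for horizontal shear layers (lateral velocity aligned
    with gravity) and G_l = 0 for vertical shear layers."""
    G_l = 2.0 * B if layer == "horizontal" else 0.0
    return -G_l / (2.0 * (G + B))

# Example with toy values: shear production G = 4.0, buoyancy production B = 1.0
print(flux_richardson(4.0, 1.0, "horizontal"))  # -0.2, i.e. -B/(B+G)
print(flux_richardson(4.0, 1.0, "vertical"))    #  0.0
```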
Uses
Buoyancy flow and force calculations are used to predict the effect of various natural calamities on buildings, ships, aircraft and other commercial and non-commercial vehicles. They are also used to find a suitable location and shape for the exhaust chimneys of large-scale industrial plants. They are further used in the planning of buildings in coastal areas, so that the structures can sustain the floods and strong currents that arise at the coast.
See also
References
Computational fluid dynamics
Buoyancy | Calculation of buoyancy flows and flows inside buildings | [
"Physics",
"Chemistry"
] | 855 | [
"Computational fluid dynamics",
"Fluid dynamics",
"Computational physics"
] |
42,477,276 | https://en.wikipedia.org/wiki/Monin%E2%80%93Obukhov%20similarity%20theory | Monin–Obukhov (M–O) similarity theory describes the non-dimensionalized mean flow and mean temperature in the surface layer under non-neutral conditions as a function of the dimensionless height parameter, named after Russian scientists A. S. Monin and A. M. Obukhov. Similarity theory is an empirical method that describes universal relationships between non-dimensionalized variables of fluids based on the Buckingham π theorem. Similarity theory is extensively used in boundary layer meteorology since relations in turbulent processes are not always resolvable from first principles.
An idealized vertical profile of the mean flow for a neutral boundary layer is the logarithmic wind profile derived from Prandtl's mixing length theory, which states that the horizontal component of mean flow is proportional to the logarithm of height. M–O similarity theory further generalizes the mixing length theory in non-neutral conditions by using so-called "universal functions" of dimensionless height to characterize vertical distributions of mean flow and temperature. The Obukhov length (), a characteristic length scale of surface layer turbulence derived by Obukhov in 1946, is used for non-dimensional scaling of the actual height. M–O similarity theory marked a significant landmark of modern micrometeorology, providing a theoretical basis for micrometeorological experiments and measurement techniques.
The Obukhov length
The Obukhov length is a length parameter for the surface layer in the boundary layer, which characterizes the relative contributions to turbulent kinetic energy from buoyant production and shear production. The Obukhov length was formulated using Richardson's criterion for dynamic stability. It was derived as

$$L = -\frac{u_*^3 \bar{\theta}}{k g \, q_v / (\rho c_p)}$$

where $k$ is the von Kármán constant, $u_*$ the friction velocity, $q_v$ the turbulent heat flux, and $c_p$ the heat capacity. The virtual potential temperature $\theta_v$ is often used instead of the temperature to correct for the effects of pressure and water vapor. The heat flux $q_v$ can be written as a vertical eddy flux,

$$q_v = \rho c_p \overline{w'\theta_v'},$$

with $w'$ and $\theta_v'$ the perturbations of vertical velocity and virtual potential temperature, respectively. Therefore, the Obukhov length can also be defined as

$$L = -\frac{u_*^3 \overline{\theta_v}}{k g \, \overline{w'\theta_v'}}.$$

The Obukhov length also acts as a criterion for the static stability of the surface layer. When $L < 0$ the surface layer is statically unstable, and when $L > 0$ it is statically stable. The absolute magnitude $|L|$ indicates the deviation from the statically neutral state, with smaller values corresponding to larger deviations from neutral conditions. When $|L|$ is small and $L < 0$, buoyant processes dominate the production of turbulent kinetic energy compared with shear production. By definition, under neutral conditions $L \to \pm\infty$.
The Obukhov length is used for non-dimensionalization of height in similarity theory.
Governing formulae for similarity relations
M–O similarity theory parameterizes fluxes in the surface layer as functions of the dimensionless length parameter $\zeta = z/L$. From the Buckingham π theorem of dimensional analysis, two dimensionless groups can be formed from the basic parameter set,

$$\frac{k z}{u_*} \frac{\partial \bar{u}}{\partial z} \quad \text{and} \quad \zeta = \frac{z}{L}.$$

From there, a universal function $\varphi_m(\zeta)$ can be determined to empirically describe the relationship between the two dimensionless quantities. Similarly, a universal function $\varphi_h(\zeta)$ can be defined for the dimensionless group of the mean temperature profile. Mean wind and temperature profiles therefore satisfy the following relations,

$$\frac{k z}{u_*} \frac{\partial \bar{u}}{\partial z} = \varphi_m\!\left(\frac{z}{L}\right), \qquad \frac{k z}{\theta_*} \frac{\partial \bar{\theta}}{\partial z} = \varphi_h\!\left(\frac{z}{L}\right),$$

where $\theta_* = -\overline{w'\theta_v'}/u_*$ is the characteristic dynamical temperature, and $\varphi_m$ and $\varphi_h$ are the universal functions of momentum and heat. The eddy diffusivity coefficients for momentum and heat fluxes are defined as follows,

$$K_m = \frac{k u_* z}{\varphi_m}, \qquad K_h = \frac{k u_* z}{\varphi_h},$$

and can be related through the turbulent Prandtl number $Pr_t$,

$$Pr_t = \frac{K_m}{K_h} = \frac{\varphi_h}{\varphi_m}.$$
In reality, the universal functions need to be determined using experimental data when applying M–O similarity theory. Although the choice of universal functions is not unique, certain functional forms have been proposed and are widely accepted for fitting experimental data.
Universal functions of the Monin–Obukhov similarity theory
Several functional forms have been proposed to represent the universal functions of similarity theory. Since the Obukhov length is tied to Richardson's stability criterion, any universal function chosen must reduce to the neutral logarithmic law in the limit $\zeta \to 0$, i.e. $\varphi_m(0) = 1$.

A first-order approximation of the universal function for the momentum flux is

$$\varphi_m = 1 + \beta \zeta,$$

where $\beta \approx 4.7$. However, this is only applicable for $\zeta \geq 0$ (stable conditions). For conditions where $\zeta < 0$, the relation is

$$\varphi_m^4 - \gamma \zeta \varphi_m^3 = 1,$$

where $\gamma$ is a coefficient to be determined from experimental data. This equation can be further approximated by $\varphi_m = (1 - \gamma \zeta)^{-1/4}$ when $\zeta \ll 0$.

Based on the results of the 1968 Kansas experiment, the following universal functions were determined for the horizontal mean flow and the mean virtual potential temperature:

$$\varphi_m = \left(1 - 15\frac{z}{L}\right)^{-1/4}, \qquad \varphi_h = 0.74\left(1 - 9\frac{z}{L}\right)^{-1/2} \qquad \text{for } \frac{z}{L} < 0,$$

$$\varphi_m = 1 + 4.7\frac{z}{L}, \qquad \varphi_h = 0.74 + 4.7\frac{z}{L} \qquad \text{for } \frac{z}{L} \geq 0.$$
Other methods, which determine the universal functions using the relation between the Richardson number and $\zeta$, are also used.
For sublayers with significant roughness, e.g. vegetated surfaces or urban areas, the universal functions must be modified to include the effects of surface roughness.
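A sketch evaluating the Kansas-style (Businger–Dyer-type) universal functions quoted above; the numerical constants follow the reconstruction in this section and should be checked against the original papers before use.

```python
def phi_m(zeta):
    """Universal function for momentum (Kansas-type form quoted above)."""
    if zeta < 0:                                # unstable side
        return (1.0 - 15.0 * zeta) ** -0.25
    return 1.0 + 4.7 * zeta                     # stable side

def phi_h(zeta):
    """Universal function for heat."""
    if zeta < 0:
        return 0.74 * (1.0 - 9.0 * zeta) ** -0.5
    return 0.74 + 4.7 * zeta

# Dimensionless gradients at a few stabilities (zeta = z/L):
for zeta in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"zeta={zeta:+.1f}  phi_m={phi_m(zeta):.3f}  phi_h={phi_h(zeta):.3f}")

# Recover a stable-case wind gradient: du/dz = (u*/(k z)) * phi_m(z/L).
u_star, k, z, L = 0.3, 0.4, 10.0, 50.0          # illustrative values
dudz = u_star / (k * z) * phi_m(z / L)
print(f"du/dz at z=10 m: {dudz:.4f} s^-1")
```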
Validations
Numerous experimental efforts have been devoted to the validation of M–O similarity theory. Field observations and computer simulations have generally demonstrated that the theory is well satisfied.
In field measurements
The 1968 Kansas experiment found great consistency between measurements and predictions from similarity relations for the entire range of stability values. A flat wheat field in Kansas served as the experiment site, with winds measured by anemometers mounted at different heights on a 32 m tower. The temperature profile was also measured in a similar manner. Results from the Kansas field study indicated that the ratio of eddy diffusivities of heat and momentum was approximately 1.35 under neutral conditions. A similar experiment was conducted in a flat field in northwestern Minnesota in 1973. This experiment used both ground and balloon-based observations of the surface layer and further validated the theoretical predictions from similarity.
In large eddy simulations
In addition to field experiments, analysis of M–O similarity theory can be conducted using high-resolution large eddy simulations. Such simulations indicate that the temperature field agrees well with M–O similarity, whereas the velocity field shows significant anomalies from it.
Limitations
M–O similarity theory, albeit successful in experimental validations for surface layers, is essentially a diagnostic empirical theory based upon local first-order turbulence closure. Typically, errors of 10–20% are associated with the universal functions. When applied to vegetated areas or complex terrain, it can result in large discrepancies. Because universal functions are often determined under dry conditions, the applicability of M–O similarity theory under moist conditions has not been well studied.
The basic parameter set of M–O similarity theory includes the buoyancy production term. It is argued that with such a parameter set, the scaling applies to the integral features of the flow, whereas an eddy-specific similarity relationship prefers the use of the energy dissipation rate ε. This scheme is able to explain anomalies of M–O similarity theory, but introduces non-locality into modeling and experiments.
See also
Obukhov length
Surface layer
Logarithmic wind profile
Mixing length model
References
Dimensional analysis
Microscale meteorology | Monin–Obukhov similarity theory | [
"Engineering"
] | 1,356 | [
"Dimensional analysis",
"Mechanical engineering"
] |
42,477,554 | https://en.wikipedia.org/wiki/Solid%20sorbents%20for%20carbon%20capture | Solid sorbents for carbon capture include a diverse range of porous, solid-phase materials, including mesoporous silicas, zeolites, and metal-organic frameworks. These have the potential to function as more efficient alternatives to amine gas treating processes for selectively removing CO2 from large, stationary sources including power stations. While the technology readiness level of solid adsorbents for carbon capture varies between the research and demonstration levels, solid adsorbents have been demonstrated to be commercially viable for life-support and cryogenic distillation applications. While solid adsorbents suitable for carbon capture and storage are an active area of research within materials science, significant technological and policy obstacles limit the availability of such technologies.
Overview
The combustion of fossil fuels generates over 13 gigatons of CO2 per year. Concern over the effects of CO2 with respect to climate change and ocean acidification led governments and industries to investigate the feasibility of technologies that capture the resultant CO2 before it enters the carbon cycle. For new power plants, technologies such as pre-combustion capture and oxy-fuel combustion may simplify the gas separation process.
However, existing power plants require the post-combustion separation of CO2 from the flue gas with a scrubber. In such a system, fossil fuels are combusted with air and CO2 is selectively removed from a gas mixture also containing N2, H2O, O2 and trace sulphur, nitrogen and metal impurities. While exact separation conditions are fuel- and technology-dependent, in general CO2 is present at low concentrations (4–15% v/v) in gas mixtures near atmospheric pressure and at temperatures of approximately 40–60 °C. Sorbents for carbon capture are regenerated using temperature, pressure or vacuum swings, so that CO2 can be collected for sequestration or utilization and the sorbent can be reused.
The most significant impediment to carbon capture is the large amount of electricity required. Without policy or tax incentives, the production of electricity from such plants is not competitive with other energy sources. The largest operating cost for power plants with carbon capture is the reduction in the amount of electricity produced, because energy in the form of steam is diverted from making electricity in the turbines to regenerating the sorbent. Thus, minimizing the amount of energy required for sorbent regeneration is the primary goal behind much carbon capture research.
Metrics
Significant uncertainty exists around the total cost of post-combustion CO2 capture because full-scale demonstrations of the technology have yet to come online. Thus, individual performance metrics are generally relied upon when comparisons are made between different adsorbents.
Regeneration energy—Generally expressed in energy consumed per weight of CO2 captured (e.g. 3,000 kJ/kg). These values, if calculated directly from the latent and sensible heat components of regeneration, measure the total amount of energy required for regeneration.
Parasitic energy—Similar to regeneration energy, but measures how much usable energy is lost. Owing to the imperfect thermal efficiency of power plants, not all of the heat required to regenerate the sorbent would actually have produced electricity.
Adsorption capacity—The amount of CO2 adsorbed onto the material under the relevant adsorption conditions.
Working capacity—The amount of CO2 that can be expected to be captured by a specified amount of adsorbent during one adsorption–desorption cycle. This value is generally more relevant than the total adsorption capacity; a toy calculation follows this list.
Selectivity—The calculated ability of an adsorbent to preferentially adsorb one gas over another gas. Multiple methods of reporting selectivity have been reported and in general values from one method are not comparable to values from another method. Similarly, values are highly correlated to temperature and pressure.
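To make the capacity metrics concrete, here is a toy calculation of working capacity from a single-site Langmuir isotherm under a temperature swing; the isotherm parameters and process conditions are invented for illustration and describe no particular sorbent.

```python
import math

def langmuir_uptake(p_co2, q_sat, b0, dH, T):
    """CO2 uptake (mmol/g) from a single-site Langmuir isotherm with a
    van 't Hoff temperature dependence of the affinity constant b."""
    R = 8.314  # J/(mol K)
    b = b0 * math.exp(-dH / (R * T))     # dH < 0 for exothermic adsorption
    return q_sat * b * p_co2 / (1.0 + b * p_co2)

# Invented sorbent parameters and a temperature-swing process:
q_sat, b0, dH = 3.0, 1.0e-6, -40_000.0   # mmol/g, 1/bar, J/mol
p_ads, T_ads = 0.15, 313.0               # adsorption: 0.15 bar CO2, 40 C
p_des, T_des = 1.00, 393.0               # desorption: 1 bar CO2, 120 C

q_ads = langmuir_uptake(p_ads, q_sat, b0, dH, T_ads)
q_des = langmuir_uptake(p_des, q_sat, b0, dH, T_des)
print(f"uptake at adsorption: {q_ads:.2f} mmol/g")
print(f"uptake at desorption: {q_des:.2f} mmol/g")
print(f"working capacity:     {q_ads - q_des:.2f} mmol/g")
```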
Comparison to aqueous amine absorbents
Aqueous amine solutions absorb CO2 via the reversible formation of ammonium carbamate, ammonium carbonate and ammonium bicarbonate. The formation of these species and their relative concentration in solution is dependent upon the specific amine or amines as well as the temperature and pressure of the gas mixture. At low temperatures, CO2 is preferentially absorbed by the amines and at high temperatures CO2 is desorbed. While liquid amine solutions have been used industrially to remove acid gases for nearly a century, amine scrubber technology is still under development at the scale required for carbon capture.
Advantages
Multiple advantages of solid sorbents have been reported. Unlike amines, solid sorbents can selectively adsorb CO2 without the formation of chemical bonds (physisorption). The significantly lower heat of adsorption for solids requires less energy for the CO2 to desorb from the material surface. Also, two primary or secondary amine molecules are generally required to absorb a single CO2 molecule in liquids. For solid surfaces, large capacities of CO2 can be adsorbed. For temperature swing adsorption processes, the lower heat capacity of solids has been reported to reduce the sensible energy required for sorbent regeneration. Many environmental concerns over liquid amines can be eliminated by the use of solid adsorbents.
Disadvantages
Manufacturing costs are expected to be significantly greater than the cost of simple amines. Because flue gas contains trace impurities that degrade sorbents, solid sorbents may prove to be prohibitively expensive. Significant engineering challenges must be overcome. Sensible energy required for sorbent regeneration cannot be effectively recovered if solids are used, offsetting their significant heat capacity savings. Additionally, heat transfer through a solid bed is slow and inefficient, making it difficult and expensive to cool the sorbent during adsorption and heat it during desorption. Lastly, many promising solid adsorbents have been measured only under ideal conditions, which ignores the potentially significant effects H2O can have on working capacity and regeneration energy.
Physical adsorbents
Carbon dioxide adsorbs in appreciable quantities onto many porous materials through van der Waals interactions. Compared to N2, CO2 adsorbs more strongly because the molecule is more polarizable and possesses a larger quadrupole moment. However, stronger adsorptives including H2O often interfere with the physical adsorption mechanism. Thus, discovering porous materials that can selectively bind CO2 under flue gas conditions using only a physical adsorption mechanism is an active research area.
Zeolites
Zeolites, a class of porous aluminosilicate solids, are currently used in a wide variety of industrial and commercial applications including CO2 separation. The capacities and selectivities of many zeolites are among the highest for adsorbents that rely upon physisorption. For example, zeolite Ca-A (5A) has been reported to display both a high capacity and selectivity for CO2 over N2 under conditions relevant for carbon capture from coal flue gas, although it has not been tested in the presence of H2O. Industrially, CO2 and H2O can be co-adsorbed on a zeolite, but high temperatures and a dry gas stream are required to regenerate the sorbent.
Metal-organic frameworks
Metal-organic frameworks (MOFs) are promising adsorbents. Sorbents displaying a diverse set of properties have been reported. MOFs with extremely large surface areas are generally not among the best for CO2 capture compared to materials with at least one adsorption site that can polarize CO2. For example, MOFs with open metal coordination sites function as Lewis acids and strongly polarize CO2. Owing to CO2's greater polarizability and quadrupole moment, CO2 is preferentially adsorbed over many flue gas components such as N2. However, flue gas contaminants such as H2O often interfere. MOFs with specific pore sizes, tuned specifically to preferentially adsorb CO2 have been reported.
Chemical adsorbents
Amine impregnated solids
Frequently, porous adsorbents with large surface areas, but only weak adsorption sites, lack sufficient capacity for CO2 under realistic conditions. To increase low pressure CO2 adsorption capacity, adding amine functional groups to highly porous materials has been reported to result in new adsorbents with higher capacities. This strategy has been analyzed for polymers, silicas, activated carbons and metal-organic frameworks. Amine impregnated solids utilize the well-established acid-base chemistry of CO2 with amines, but dilute the amines by containing them within the pores of solids rather than as H2O solutions. Amine impregnated solids are reported to maintain their adsorption capacity and selectivity under humid test conditions better than alternatives. For example, a 2015 study of 15 solid adsorbent candidates for CO2 capture found that under multicomponent equilibrium adsorption conditions simulating humid flue gas, only adsorbents functionalized with alkylamines retained a significant capacity for CO2.
Notable adsorbents
References
Carbon capture and storage | Solid sorbents for carbon capture | [
"Engineering"
] | 1,903 | [
"Geoengineering",
"Carbon capture and storage"
] |
42,478,011 | https://en.wikipedia.org/wiki/Koszul%20duality | In mathematics, Koszul duality, named after the French mathematician Jean-Louis Koszul, is any of various kinds of dualities found in representation theory of Lie algebras, abstract algebras (semisimple algebra) and topology (e.g., equivariant cohomology). The prototypical example of Koszul duality was introduced by Joseph Bernstein, Israel Gelfand, and Sergei Gelfand,. It establishes a duality between the derived category of a symmetric algebra and that of an exterior algebra, as well as the BGG correspondence, which links the stable category of finite-dimensional graded modules over an exterior algebra to the bounded derived category of coherent sheaves on projective space. The importance of the notion rests on the suspicion that Koszul duality seems quite ubiquitous in nature.
Koszul duality for graded modules over Koszul algebras
The simplest, and in a sense prototypical, case of Koszul duality arises as follows: for a 1-dimensional vector space $V$ over a field $k$, with dual vector space $V^*$, the exterior algebra of $V$ has two non-trivial components, namely

$$\Lambda^0 V = k \quad \text{and} \quad \Lambda^1 V = V.$$

This exterior algebra and the symmetric algebra of $V^*$, $\mathrm{Sym}(V^*)$, serve to build a two-step chain complex

$$V \otimes_k \mathrm{Sym}(V^*) \to \mathrm{Sym}(V^*),$$

whose differential is induced by the natural evaluation map

$$V \otimes_k V^* \to k, \quad v \otimes f \mapsto f(v).$$

Choosing a basis of $V$, $\mathrm{Sym}(V^*)$ can be identified with the polynomial ring in one variable, $k[t]$, and the previous chain complex becomes isomorphic to the complex

$$k[t] \xrightarrow{\ t\ } k[t],$$

whose differential is multiplication by $t$. This computation shows that the cohomology of the above complex is 0 at the left-hand term and is $k$ at the right-hand term. In other words, $k$ (regarded as a chain complex concentrated in a single degree) is quasi-isomorphic to the above complex, which provides a close link between the exterior algebra of $V$ and the symmetric algebra of its dual.
Koszul dual of a Koszul algebra
Koszul duality, as treated by Alexander Beilinson, Victor Ginzburg, and Wolfgang Soergel can be formulated using the notion of Koszul algebra. An example of such a Koszul algebra A is the symmetric algebra on a finite-dimensional vector space. More generally, any Koszul algebra can be shown to be a quadratic algebra, i.e., of the form
$$A = T(V) / (R),$$
where $T(V)$ is the tensor algebra on a finite-dimensional vector space $V$, and $R$ is a submodule of $V \otimes V$. The Koszul dual then coincides with the quadratic dual
$$A^! := T(V^*) / (R^\perp),$$
where $V^*$ is the (k-linear) dual and $R^\perp \subset V^* \otimes V^*$ consists of those elements on which the elements of R (i.e., the relations in A) vanish. The Koszul dual of $A = \operatorname{Sym}(V)$ is given by $A^! = \Lambda(V^*)$, the exterior algebra on the dual of V. In general, the dual of a Koszul algebra is again a Koszul algebra. Its opposite ring is given by the graded ring of self-extensions of the underlying field k, thought of as an A-module:
$$(A^!)^{\mathrm{opp}} = \operatorname{Ext}_A(k, k).$$
Koszul duality
If an algebra $A$ is Koszul, there is an equivalence between certain subcategories of the derived categories of graded $A$- and $A^!$-modules. These subcategories are defined by certain boundedness conditions on the grading vs. the cohomological degree of a complex.
Variants
As an alternative to passing to certain subcategories of the derived categories of $A$ and $A^!$ to obtain equivalences, it is possible instead to obtain equivalences between certain quotients of the homotopy categories. Usually these quotients are larger than the derived category, as they are obtained by factoring out some subcategory of the category of acyclic complexes, but they have the advantage that every complex of modules determines some element of the category, without needing to impose boundedness conditions. A different reformulation gives an equivalence between the derived category of $A$ and the 'coderived' category of the Koszul dual coalgebra.
An extension of Koszul duality to D-modules states a similar equivalence of derived categories between dg-modules over the dg-algebra $\Omega_X$ of Kähler differentials on a smooth algebraic variety X and the $\mathcal{D}_X$-modules.
Koszul duality for operads
An extension of the above concept of Koszul duality was formulated by Ginzburg and Kapranov who introduced the notion of a quadratic operad and defined the quadratic dual of such an operad. Very roughly, an operad is an algebraic structure consisting of an object of n-ary operations for all n. An algebra over an operad is an object on which these n-ary operations act. For example, there is an operad called the associative operad whose algebras are associative algebras, i.e., depending on the precise context, non-commutative rings, non-commutative graded rings, or differential graded rings. Algebras over the so-called commutative operad are commutative algebras, i.e., commutative (possibly graded, differential graded) rings. Yet another example is the Lie operad whose algebras are Lie algebras. The quadratic duality mentioned above is such that the associative operad is self-dual, while the commutative and the Lie operad correspond to each other under this duality.
Koszul duality for operads states an equivalence between algebras over dual operads. The special case of associative algebras gives back the functor mentioned above.
See also
Zinbiel algebra
Notes
References
Priddy, Stewart B. Koszul resolutions. Transactions of the American Mathematical Society 152 (1970), 39–60.
External links
http://www.math.harvard.edu/~lurie/282ynotes/LectureXXIII-Koszul.pdf
http://people.mpim-bonn.mpg.de/geordie/Soergel.pdf
http://arxiv.org/pdf/1109.6117v1.pdf
Algebras
Duality theories | Koszul duality | [
"Mathematics"
] | 1,245 | [
"Mathematical structures",
"Algebras",
"Algebraic structures",
"Duality theories",
"Geometry",
"Category theory"
] |
42,478,101 | https://en.wikipedia.org/wiki/Axiality%20%28geometry%29 | In the geometry of the Euclidean plane, axiality is a measure of how much axial symmetry a shape has. It is defined as the ratio of areas of the largest axially symmetric subset of the shape to the whole shape. Equivalently it is the largest fraction of the area of the shape that can be covered by a mirror reflection of the shape (with any orientation).
A shape that is itself axially symmetric, such as an isosceles triangle, will have an axiality of exactly one, whereas an asymmetric shape, such as a scalene triangle, will have axiality less than one.
Upper and lower bounds
It has been shown that every convex set has axiality at least 2/3, improving a previous lower bound of 5/8. The best upper bound known is given by a particular convex quadrilateral, found through a computer search, whose axiality is less than 0.816.
For triangles and for centrally symmetric convex bodies, the axiality is always somewhat higher: every triangle, and every centrally symmetric convex body, has axiality at least 2(√2 − 1) ≈ 0.828. In a family of obtuse triangles that become arbitrarily flat, the axiality approaches this value in the limit, showing that the lower bound is as large as possible. It is also possible to construct a sequence of centrally symmetric parallelograms whose axiality has the same limit, again showing that the lower bound is tight.
Algorithms
The axiality of a given convex shape can be approximated arbitrarily closely in sublinear time, given access to the shape by oracles for finding an extreme point in a given direction and for finding the intersection of the shape with a line.
Barequet and Rogol consider the problem of computing the axiality exactly, for both convex and non-convex polygons. The set of all possible reflection symmetry lines in the plane is (by projective duality) a two-dimensional space, which they partition into cells within which the pattern of crossings of the polygon with its reflection is fixed, causing the axiality to vary smoothly within each cell. They thus reduce the problem to a numerical computation within each cell, which they do not solve explicitly. The partition of the plane into cells has O(n^4) cells in the general case, and O(n^3) cells for convex polygons; it can be constructed in an amount of time which is larger than these bounds by a logarithmic factor. Barequet and Rogol claim that in practice the area maximization problem within a single cell can be solved in O(n) time, giving (non-rigorous) overall time bounds of O(n^4) for the convex case and O(n^5) for the general case.
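As an illustration of the quantity being computed (and not of the Barequet–Rogol algorithm itself), the following sketch estimates a lower bound on the axiality by sampling mirror lines through the centroid and measuring the overlap of the polygon with its reflection. It assumes the shapely geometry library; since the optimal axis need not pass through the centroid, the result is only a heuristic lower bound:

```python
import numpy as np
from shapely.geometry import Polygon

def reflect(points, theta, c):
    """Reflect points across the line through c with direction angle theta."""
    d = np.array([np.cos(theta), np.sin(theta)])   # unit direction of the mirror
    rel = points - c
    # reflection of a vector v across the mirror direction d: 2*(v.d)*d - v
    return c + 2.0 * np.outer(rel @ d, d) - rel

def axiality_estimate(coords, n_angles=360):
    """Heuristic lower bound on axiality: overlap of the shape with its
    mirror image, maximized over sampled mirror lines through the centroid."""
    poly = Polygon(coords)
    pts = np.asarray(coords, dtype=float)
    c = np.array(poly.centroid.coords[0])
    best = 0.0
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        mirrored = Polygon(reflect(pts, theta, c))
        overlap = poly.intersection(mirrored).area
        best = max(best, overlap / poly.area)
    return best

# A scalene triangle: the estimate should be strictly below 1.
print(axiality_estimate([(0, 0), (4, 0), (1, 2)]))
```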
Related concepts
One survey lists 11 different measures of axial symmetry, of which the one described here is number three. It requires each such measure to be invariant under similarity transformations of the given shape, to take the value one for symmetric shapes, and to take a value between zero and one for other shapes. Other symmetry measures with these properties include the ratio of the area of the shape to its smallest enclosing symmetric superset, and the analogous ratios of perimeters.
Other work, as well as studying axiality, studies a restricted version of axiality in which the goal is to find a halfspace whose intersection with a convex shape has large area and lies entirely within the reflection of the shape across the halfspace boundary. This work shows that such an intersection can always be found to have area at least 1/8 that of the whole shape.
In the study of computer vision, it has been proposed to measure the symmetry of a digital image (viewed as a function from points in the plane to grayscale intensity values) by finding the reflection that maximizes the area integral of the product of the image with its reflected copy.
When the image is the indicator function of a given shape, this is the same as the axiality.
References
Symmetry
Euclidean plane geometry | Axiality (geometry) | [
"Physics",
"Mathematics"
] | 768 | [
"Planes (geometry)",
"Euclidean plane geometry",
"Geometry",
"Symmetry"
] |
42,478,623 | https://en.wikipedia.org/wiki/Sparse%20matrix%E2%80%93vector%20multiplication | Sparse matrix–vector multiplication (SpMV) of the form y = Ax is a widely used computational kernel existing in many scientific applications. The input matrix A is sparse. The input vector x and the output vector y are dense. In the case of a repeated operation involving the same input matrix A but possibly changing numerical values of its elements, A can be preprocessed to reduce both the parallel and sequential run time of the SpMV kernel.
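A minimal sketch of the kernel in compressed sparse row (CSR) form, the storage format most commonly used for SpMV, is given below; the loop structure is the point, and no particular preprocessing scheme is assumed:

```python
import numpy as np

def spmv_csr(data, indices, indptr, x):
    """y = A @ x for a matrix A stored in Compressed Sparse Row (CSR) form.

    data    -- nonzero values, row by row
    indices -- column index of each nonzero
    indptr  -- indptr[i]:indptr[i+1] delimits row i's nonzeros
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# The 3x3 matrix [[10, 0, 2], [0, 3, 0], [0, 0, 4]] in CSR form:
data    = np.array([10.0, 2.0, 3.0, 4.0])
indices = np.array([0, 2, 1, 2])
indptr  = np.array([0, 2, 3, 4])
x = np.array([1.0, 1.0, 1.0])

print(spmv_csr(data, indices, indptr, x))   # -> [12.  3.  4.]

# Cross-check against scipy's optimized kernel:
from scipy.sparse import csr_matrix
print(csr_matrix((data, indices, indptr)) @ x)
```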
See also
Matrix–vector multiplication
General-purpose computing on graphics processing units#Kernels
References
Sparse matrices | Sparse matrix–vector multiplication | [
"Mathematics"
] | 106 | [
"Mathematical objects",
"Combinatorics",
"Matrices (mathematics)",
"Sparse matrices",
"Matrix stubs"
] |
46,219,227 | https://en.wikipedia.org/wiki/Evolution%20of%20metal%20ions%20in%20biological%20systems | Evolution of metal ions in biological systems refers to the incorporation of metallic ions into living organisms and how it has changed over time. Metal ions have been associated with biological systems for billions of years, but only in the last century have scientists begun to truly appreciate the scale of their influence. Major (iron, manganese, magnesium and zinc) and minor (copper, cobalt, nickel, molybdenum, tungsten) metal ions have become aligned with living organisms through the interplay of biogeochemical weathering and metabolic pathways involving the products of that weathering. The associated complexes have evolved over time.
Natural development of chemicals and elements challenged organisms to adapt or die. Current organisms require redox reactions to induce metabolism and other life processes. Metals have a tendency to lose electrons and are important for redox reactions. Metals have become so central to cellular function that the collection of metal-binding proteins (referred to as the metallomes) accounts for over 30% of all proteins in the cell. Metals are known to be involved in over 40% of enzymatic reactions, and metal-binding proteins carry out at least one step in almost all biological pathways. Metals are also toxic, so a balance must be maintained to regulate where the metals are in an organism as well as in what quantities. Many organisms have flexible systems in which they can exchange one metal for another if one is scarce. Metals in this discussion are naturally occurring elements that have a tendency to undergo oxidation. Vanadium, molybdenum, cobalt, copper, chromium, iron, manganese, nickel, and zinc are deemed essential because without them biological function is impaired.
Origins
The Earth began as an iron-rich aquatic world with low oxygen. The Great Oxygenation Event occurred approximately 2.4 Ga (billion years ago) as cyanobacteria and photosynthetic life induced the presence of dioxygen in the planet's atmosphere. Iron became insoluble and scarce (as did some other metals), while other metals became soluble. Sulfur was a very important element during this time. Once oxygen was released into the environment, sulfates made metals more soluble and released those metals into the environment, especially into the water. Incorporation of metals perhaps combatted oxidative stress.
The central chemistry of all these cells has to be reductive in order that the synthesis of the required chemicals, especially biopolymers, is possible. The different anaerobic, autocatalysed, reductive, metabolic pathways seen in the earliest known cells developed in separate energised vesicles, protocells, where they were produced cooperatively with certain bases of the nucleic acids.
One hypothesis proposed for how elements became essential is their relative abundance in the environment as life formed. This has produced research on the origin of life; for instance, Orgel and Crick hypothesized that life was extraterrestrial due to the alleged low abundance of molybdenum on early Earth (it is now suspected that there were larger quantities than previously thought). Another example is life forming around thermal vents based on the availability of zinc and sulfur. In conjunction with this theory is the idea that life evolved as chemoautotrophs. Therefore, life occurred around metals and not in response to their presence. Some evidence for this theory is that inorganic matter has self-contained attributes that life adopted, as shown by life's compartmentalization. Other evidence includes the ready binding of metals by artificial proteins without evolutionary history.
Importance of metal ions in evolution
Catalysis
Redox catalysts
The prebiotic chemistry of life had to be reductive in order to obtain, e.g., carbon monoxide (CO) and hydrogen cyanide (HCN) from the existing CO2 and N2 in the atmosphere. CO and HCN were precursor molecules of the essential biomolecules: proteins, lipids, nucleotides and sugars. However, atmospheric oxygen levels increased considerably, and it was then necessary for cells to have control over the reduction and oxidation of such small molecules in order to build and break down cells when necessary, without the inevitable oxidation (breaking down) of everything. Transition metal ions, due to their multiple oxidation states, were the only elements capable of controlling the oxidation states of such molecules, and thus were selected for.
Condensation and hydrolysis
O-donors such as HPO42− were abundant in the prebiotic environment. Metal ion binding to such O-donors was required to build biological polymers; since the metal–oxygen bond is generally weak, the metal can catalyze the required reaction and then dissociate (e.g., Mg2+ in DNA synthesis).
Abundance of metals in seawater
Prebiotic (anaerobic) conditions
Around 4 Ga, the acidic seawater contained high amounts of H2S and thus created a reducing environment with a potential of around −0.2 V. Any element whose reduction potential was strongly negative with respect to that of the environment remained available in its free ionic form and could subsequently be incorporated into cells; e.g., Mg2+ has a reduction potential of −2.372 V and was available in its ionic form at that time.
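The availability criterion described above can be made concrete by comparing standard reduction potentials against the ambient potential. In the sketch below, the −0.2 V environmental value and the Mg2+ potential come from the text; the remaining potentials are approximate textbook values, included only to illustrate the comparison:

```python
# Which metals remain in their free ionic form in a reducing early ocean?
# A cation stays ionic if its reduction potential lies well below the ambient
# potential (here ~ -0.2 V, as estimated above for the H2S-rich sea).
E_environment = -0.2  # volts

# Approximate standard reduction potentials, M^n+ + n e- -> M(s), in volts.
E_reduction = {
    "Mg2+": -2.372,
    "Mn2+": -1.185,
    "Zn2+": -0.762,
    "Fe2+": -0.447,
    "Ni2+": -0.257,
    "Cu2+": +0.337,
}

for ion, E in sorted(E_reduction.items(), key=lambda kv: kv[1]):
    state = "free ionic form" if E < E_environment else "readily reduced/deposited"
    print(f"{ion:5s}  E0 = {E:+.3f} V  ->  {state}")
```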
Aerobic conditions
Around 2 Ga, an increase in atmospheric oxygen levels took place, causing an oxidation of H2S in the surroundings and an increase in the pH of the sea water. The resulting environment had become more oxidizing and thus allowed the later incorporation of the heavier metals such as copper and zinc.
Irving–Williams series
Another factor affecting the availability of metal ions was their solubilities with H2S. Hydrogen sulfide was abundant in the early sea giving rise to H2S in the prebiotic acidic conditions and HS− in the neutral (pH = 7.0) conditions. In the series of metal sulfides, insolubility increases at neutral pH following the Irving–Williams series:
Mn(II) < Fe(II) < Co(II) ≤ Ni(II) < Cu(II) > Zn(II)
So in high amounts of H2S, which was the prebiotic condition, Fe was the most prominently available metal in its ionic form, owing to the comparatively low insolubility of its sulfide. The increasing oxidation of H2S into SO42− led to the later release of Co2+, Ni2+, Cu2+, and Zn2+, since all of their sulfates are soluble.
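A toy encoding of this reasoning, combining the Irving–Williams insolubility ordering with the sulfide-to-sulfate release step described above, might look as follows; the cutoff index is an illustrative assumption, not a measured solubility threshold:

```python
# Irving-Williams ordering of sulfide insolubility at neutral pH, as given
# above (Zn(II) falls just below Cu(II) in the series and is listed separately).
series = ["Mn(II)", "Fe(II)", "Co(II)", "Ni(II)", "Cu(II)"]

def free_ions(sulfide_rich, cutoff=2):
    """Toy availability model: in sulfide-rich (prebiotic) water only ions near
    the soluble end of the series stay free (`cutoff` is illustrative, not a
    measured limit); once sulfide is oxidized to sulfate, every ion is
    released, since all of the sulfates are soluble."""
    return series[:cutoff] if sulfide_rich else series + ["Zn(II)"]

print("prebiotic (H2S-rich) sea:", free_ions(True))    # ['Mn(II)', 'Fe(II)']
print("after oxygenation:      ", free_ions(False))    # the whole series
```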
Metal ions
Magnesium
Magnesium is the eighth most abundant element on earth. It is the fourth most abundant element in vertebrates and the most abundant divalent cation within cells. The most available form of magnesium (Mg2+) for living organisms can be found in the hydrosphere. The concentration of Mg2+ in seawater is around 55 mM. Mg2+ was readily available to cells during early evolution due to its high solubility in water. Other divalent cations such as calcium precipitate from aqueous solutions at much lower concentrations than the corresponding Mg2+ salts.
Since magnesium was readily available in early evolution, it can be found in every type of living cell. Magnesium in anaerobic prokaryotes can be found in MgATP. Magnesium also has many functions in prokaryotes, such as glycolysis, all kinases, NTP reactions, signalling, DNA/RNA structures and light capture. In aerobic eukaryotes, magnesium can be found in the cytoplasm and chloroplasts. The reactions in these cell compartments are glycolysis, photophosphorylation and carbon assimilation.
ATP, the main source of energy in almost all living organisms, must bind with metal ions such as Mg2+ or Ca2+ to function. Examination of cells with a limited magnesium supply has shown that a lack of magnesium can cause a decrease in ATP. Magnesium in ATP hydrolysis acts as a co-factor to stabilize the highly negatively charged transition state. MgATP can be found in both prokaryotic and eukaryotic cells. However, most of the ATP in cells is MgATP. Following the Irving–Williams series, magnesium has a higher binding constant than Ca2+. Therefore, the dominant ATP in living organisms is MgATP. A greater binding constant also gives magnesium the advantage as a better catalyst over other competing transition metals.
Manganese
Evidence suggests that manganese (Mn) was first incorporated into biological systems roughly 3.2–2.8 billion years ago, during the Archean Period. Together with calcium, it formed the manganese-calcium oxide complex (determined by X-ray diffraction) which consisted of a manganese cluster, essentially an inorganic cubane (cubical) structure. The incorporation of a manganese center in photosystem II was highly significant, as it allowed for photosynthetic oxygen evolution of plants. The oxygen-evolving complex (OEC) is a critical component of photosystem II contained in the thylakoid membranes of chloroplasts; it is responsible for terminal photooxidation of water during light reactions.
The incorporation of Mn in proteins allowed the complexes the ability to reduce reactive oxygen species in Mn-superoxide dismutase (MnSOD) and catalase, in electron transfer-dependent catalysis (for instance in certain class I ribonucleotide reductases) and in the oxidation of water by photosystem II (PSII), where the production of thiobarbituric acid-reactive substances is decreased. This is due to manganese's ability to reduce superoxide anion and hydroxyl radicals as well as its chain-breaking capacity.
Iron
Iron (Fe) is the most abundant element in the Earth and the fourth most abundant element in the crust, approximately 5 percent by mass. Due to the abundance of iron and its role in biological systems, the transition and mineralogical stages of iron have played a key role in Earth surface systems. It played a larger role in the geological past in marine geochemistry, as evidenced by the deposits of Precambrian iron-rich sediments. The redox transformation of Fe(II) to Fe(III), or vice versa, is vital to a number of biological and element-cycling processes. The reduction of Fe(III) is seen to oxidize sulfur (from HS− to SO42−), which is a central process in marine sediments. Many of the first metalloproteins consisted of iron–sulfur complexes formed during photosynthesis. Iron is the main redox metal in biological systems. In proteins, it is found in a variety of sites and cofactors, including, for instance, haem groups, Fe–O–Fe sites, and iron–sulfur clusters.
The prevalence of iron is apparently due to the large availability of Fe(II) in the initial evolution of living organisms, before the rise of photosynthesis and the increase in atmospheric oxygen levels which resulted in the precipitation of iron in the environment as Fe(OH)3. It has flexible redox properties because such properties are sensitive to ligand coordination, including geometry. Iron can also be used in enzymes due to its Lewis acid properties, for example in nitrile hydratase. Iron is frequently found in mononuclear sites in the reduced Fe(II) form, and functions in dioxygen activation; this function is used as a major mechanism adopted by living organisms to avoid the kinetic barrier hindering the transformation of organic compounds by O2. Iron can be taken up selectively as iron–sulfur clusters (ferredoxins), Fe–O–Fe centers (hemerythrin and ribonucleotide reductase), and mononuclear Fe sites (many oxidases), apart from iron porphyrin. Variation in the related proteins with any one of these chemical forms of iron has produced a wide range of enzymes. All of these arrangements are modified to function both in the sense of reactivity and the positioning of the protein in the cell. Iron can have various redox and spin states, and it can be held in many stereochemistries.
Nickel and cobalt
Around 4–3 Ga, anaerobic prokaryotes began developing metal and organic cofactors for light absorption. They ultimately ended up making chlorophyll from Mg(II), as is found in cyanobacteria and plants, leading to modern photosynthesis. However, chlorophyll synthesis requires numerous steps. The process starts with uroporphyrin, a primitive precursor to the porphyrin ring that may be biotic or abiotic in origin, which is then modified in cells differently to make Mg, Fe, nickel (Ni), and cobalt (Co) complexes. The centers of these rings are not selective, thus allowing a variety of metal ions to be incorporated. Mg porphyrin gives rise to chlorophyll, Fe porphyrin to heme proteins, Ni porphyrin to factor F-430, and Co porphyrin to coenzyme B12.
Copper
Before the Great Oxygenation Event, copper was not readily available for living organisms. Most early copper existed as Cu+ and metallic Cu, forms that are not very soluble in water. One billion years ago, after the Great Oxygenation Event, the oxygen pressure rose sufficiently to oxidise Cu+ to Cu2+, increasing its solubility in water. As a result, copper became much more available for living organisms.
Most copper-containing proteins and enzymes can be found in eukaryotes. Only a handful of prokaryotes, such as aerobic bacteria and cyanobacteria, contain copper enzymes or proteins. Copper can be found in the superoxide dismutase (SOD) enzymes of both prokaryotes and eukaryotes. There are three distinct types of SOD, containing Mn, Fe and Cu respectively. Mn-SOD and Fe-SOD are found in most prokaryotes and in the mitochondria of eukaryotic cells. Cu-SOD can be found in the cytoplasmic fraction of eukaryotic cells. The three elements, copper, iron and manganese, can all catalyze the conversion of superoxide to ordinary molecular oxygen or hydrogen peroxide. However, Cu-SOD is more efficient than Fe-SOD and Mn-SOD. Most prokaryotes only utilize Fe-SOD or Mn-SOD due to the lack of copper in the environment. Some organisms never developed Cu-SOD due to the lack of a gene pool for its adoption.
Zinc
Zinc (Zn) was incorporated into living cells in two waves. Four to three Ga, anaerobic prokaryotes arose, and the atmosphere was full of H2S and highly reductive. Thus most zinc was in the form of insoluble ZnS. However, because seawater at the time was slightly acidic, some Zn(II) was available in its ionic form and became part of early anaerobic prokaryotes' external proteases, external nucleases, internal synthetases and dehydrogenases.
During the second wave, once the Great Oxygenation Event occurred, more Zn(II) ions were available in the seawater. This allowed its incorporation in the single-cell eukaryotes as they arose at this time. It is believed that the later addition of ions such as zinc and copper allowed them to displace iron and manganese from the enzyme superoxide dismutase (SOD). Fe and Mn complexes dissociate readily (Irving–Williams series) while Zn and Cu do not. This is why eukaryotic SOD contains Cu or Zn and its prokaryotic counterpart contains Fe or Mn.
Zn(II) does not pose an oxidation threat to the cytoplasm. This allowed it to become a major cytoplasmic element in eukaryotes. It became associated with a new group of transcription proteins, the zinc fingers. This could only have occurred due to the long life of eukaryotes, which allowed time for zinc to exchange and hence become an internal messenger coordinating the action of other transcription factors during growth.
Molybdenum
Molybdenum (Mo) is the most abundant transition element in solution in the sea (mostly as the dianionic molybdate ion) and in living organisms, yet its abundance in the Earth's crust is quite low. Therefore, the use of Mo by living organisms seems surprising at first glance. Archaea, bacteria, fungi, plants, and animals, including humans, require molybdenum. It is also found in over 50 different enzymes. Its hydrolysis to water-soluble oxo-anionic species makes Mo readily accessible. Mo is found in the active sites of metalloenzymes that perform key transformations in the metabolism of carbon, nitrogen, arsenic, selenium, sulfur, and chlorine compounds.
The mononuclear Mo enzymes are widely distributed in the biosphere; they catalyze many significant reactions in the metabolism of nitrogen and sulfur-containing compounds as well as various carbonyl compounds (e.g., aldehydes, CO, and CO2). Nitrate reductase enzymes are important for the nitrogen cycle. They belong to a class of enzymes with a mononuclear Mo center, and they catalyze metabolic reactions of C, N, S, etc., in bacteria, plants, animals, and humans. Due to the oxidation of sulfides, the first considerable development was that of aerobic bacteria, which could now utilize Mo. As oxygen began to accumulate in the atmosphere and oceans, the conversion of MoS2 to MoO42− also increased. This reaction made the highly soluble molybdate ion available for incorporation into critical metalloenzymes, and may have thus allowed life to thrive. It allowed organisms to occupy new ecological niches. Mo plays an important role in the reduction of dinitrogen to ammonia, which occurs in one type of nitrogenase. These enzymes are used by bacteria that usually live in a symbiotic relationship with plants; their role is nitrogen fixation, which is vital for sustaining life on earth. Mo enzymes also play important roles in the sulfur metabolism of organisms ranging from bacteria to humans.
Tungsten
Tungsten is one of the oldest metal ions to be incorporated in biological systems, preceding the Great Oxygenation Event. Before the abundance of oxygen in Earth's atmosphere, oceans teemed with sulfur and tungsten, while molybdenum, a metal that is highly similar chemically, was inaccessible in solid form. The abundance of tungsten and lack of free molybdenum likely explains why early marine organisms incorporated the former instead of the latter. However, as cyanobacteria began to fill the atmosphere with oxygen, molybdenum became available (molybdenum becomes soluble when exposed to oxygen) and began to replace tungsten in the majority of metabolic processes. This is seen today, as tungsten is only present in the biological complexes of prokaryotes (methanogens, gram-positive bacteria, gram-negative aerobes and anaerobes), and is obligate only in hyperthermophilic archaea such as P. furiosus.
Although research into the specific enzyme complexes in which tungsten is incorporated is relatively recent (1970s), natural tungstoenzymes are abundantly found in a large number of prokaryotic microorganisms. These include formate dehydrogenase, formylmethanofuran dehydrogenase, acetylene hydratase, and a class of phylogenetically related oxidoreductases that catalyze the reversible oxidation of aldehydes. The first crystal structure of a tungsten- or pterin-containing enzyme, that of aldehyde ferredoxin oxidoreductase from P. furiosus, revealed a catalytic site with one W atom coordinated to two pterin molecules which are themselves bridged by a magnesium ion.
References
Evolutionary biology
Bioinorganic chemistry
Metals | Evolution of metal ions in biological systems | [
"Chemistry",
"Biology"
] | 4,150 | [
"Biochemistry",
"Metals",
"Evolutionary biology",
"Bioinorganic chemistry"
] |
46,222,172 | https://en.wikipedia.org/wiki/Vainshtein%20radius | Inside the Vainshtein radius
$$r_V = \left(\frac{M}{m^4\,M_{\rm Pl}^2}\right)^{1/5},$$
with Planck mass $M_{\rm Pl}$ (and Planck length $\ell_{\rm Pl} = 1/M_{\rm Pl}$ in natural units),
the gravitational field around a body of mass $M$ is the same in a theory where the graviton mass $m$ is zero and in one where it is very small, because the helicity-0 degree of freedom becomes effective only on distance scales larger than $r_V$.
See also
References
Quantum gravity
Radii | Vainshtein radius | [
"Physics"
] | 66 | [
"Quantum gravity",
"Unsolved problems in physics",
"Physics beyond the Standard Model"
] |
46,224,462 | https://en.wikipedia.org/wiki/Velocity%20interferometer%20system%20for%20any%20reflector | Velocity interferometer system for any reflector (VISAR) is a time-resolved velocity measurement system that uses laser interferometry to measure the surface velocity of solids moving at high speeds. For solids experiencing high-velocity impact or explosive conditions, VISAR plots the free-surface velocity against time to show the shock wave profile of a material. VISAR is a useful tool in determining the pressure-density relationship of a material known as the Rankine-Hugoniot conditions or simply the "Hugoniot".
In recent years another time-resolved velocity measurement tool called laser Doppler velocimetry has achieved popularity in the shock physics community as an adjunct or replacement for VISAR. This device is essentially a displacement interferometer of the normal Michelson variety. As such it requires extremely fast data acquisition devices (digital oscilloscopes with bandwidths of 10 GHz or higher) and is limited in the range of velocities it can cover. As the surface moves, the reflected light interferes with itself and sinusoidal 'fringes' in light intensity are produced and recorded. A cycle of light intensity or fringe count indicates a displacement of the surface corresponding to one wavelength of the light. The rate at which these fringes occur is thus proportional to the velocity of the surface. To derive a velocity history the fringe (displacement) data must be differentiated with respect to time, usually by Fourier analysis. This differentiation or FA step inevitably reduces the time resolution and accuracy of the velocity history.
The VISAR on the other hand is configured to 'optically differentiate' so that the light intensity variation due to interference varies sinusoidally with the velocity of the surface not the displacement. Also called a 'Delay-Leg Interferometer', it remains the best and most accurate method for recording the velocity history of fast moving surfaces.
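The velocity-per-fringe relation of such a delay-leg interferometer can be sketched as follows. The standard relation VPF = λ/(2τ) used here neglects the etalon dispersion correction that practical instruments include, and the wavelength, delay time, and fringe record are hypothetical values chosen only for illustration:

```python
import numpy as np

# Each full fringe corresponds to a velocity step VPF = lambda / (2 * tau),
# where tau is the optical delay introduced by the interferometer's delay leg.
wavelength = 532e-9      # m, frequency-doubled Nd:YAG (a common choice)
tau = 1.0e-9             # s, delay-leg time (hypothetical)
vpf = wavelength / (2.0 * tau)   # m/s per fringe (here 266 m/s)

# Hypothetical fringe-count record F(t) extracted from the quadrature signals:
t = np.linspace(0.0, 200e-9, 9)                       # s
fringes = np.array([0, 0, 0.5, 2, 4, 5, 5.5, 5.6, 5.6])

velocity = vpf * fringes   # free-surface velocity history, m/s
for ti, vi in zip(t, velocity):
    print(f"t = {ti*1e9:6.1f} ns   v = {vi:8.1f} m/s")
```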
The original VISARs were built at the National Laboratories and had free-space beams on optical tables with discrete optical components such as beam-splitting pellicles, mirrors, quarter wave delay plates, glass etalons, high voltage photo-multiplier tubes, argon ion lasers and so on.
Modern versions such as the Mark IV-3000 from Martin, Froeschner & Associates (mfaoptics.com) implement the same optical arrangement entirely in single-mode optical fibre with all-solid-state telecom components such as InGaAs photodiodes, Er-doped fibre amplifiers (EDFAs) and extremely high purity (<2 kHz linewidth) lasers. Velocity resolution down to 0.01 m/s has been demonstrated with time resolution <1 ns.
References
Further reading
Foundations of VISAR
Measurement
Interferometry
Doppler effects
Laser applications | Velocity interferometer system for any reflector | [
"Physics",
"Mathematics"
] | 557 | [
"Physical phenomena",
"Physical quantities",
"Quantity",
"Astrophysics",
"Size",
"Measurement",
"Doppler effects"
] |
47,066,174 | https://en.wikipedia.org/wiki/The%20Singular%20Universe%20and%20the%20Reality%20of%20Time | The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy is a book about cosmology, philosophy of time, metaphysics and scientific naturalism by the American theoretical physicist Lee Smolin and the Brazilian philosopher Roberto Mangabeira Unger. The authors argue that the current crisis in cosmology is a result of physicists making the wrong commitments to universalizing local experiments and to a block universe. They suggest instead that new research projects would be revealed if we took seriously the idea of one, and only one, universe as well as the reality of our experience of time. This new paradigm, they say, would also give rise to the revolutionary notion that the laws of nature might not be immutable. The book was initially published by Cambridge University Press on December 8, 2014.
Synopsis
The book discusses a number of philosophical and physical ideas on the true role of time in the Universe. The text is roughly divided into two halves, the first one written by Unger, and the second by Smolin, both developing the same themes in different ways, with Smolin being more focused on the physics.
Reviews
The book was reviewed in The Guardian and by the physicist Peter Woit.
See also
Philosophy of time
References
External links
2014 non-fiction books
American non-fiction books
Books by Lee Smolin
Books by Roberto Mangabeira Unger
Contemporary philosophical literature
Cosmology books
English-language non-fiction books
Philosophy of time
Popular physics books
Cambridge University Press books | The Singular Universe and the Reality of Time | [
"Physics"
] | 290 | [
"Spacetime",
"Philosophy of time",
"Physical quantities",
"Time"
] |
47,069,434 | https://en.wikipedia.org/wiki/Arun%20Kumar%20Basak | Arun Kumar Basak FInstP CPhys (born October 17, 1941) is a Bangladeshi physicist. He is Professor Emeritus in the Department of Physics, University of Rajshahi.
Early life and education
Basak was born in Radhanagor of Pabna town, Bengal Presidency, British India to parents Haripada Basak and Usha Rani Basak. Basak matriculated in 1957 securing First Division from R.M. Academy. He secured the second position in the merit list in the Intermediate Science Examination in 1959 from Govt. Edward College. He was placed in First Class with the first position in the B.Sc. (Hons) examination from Rajshahi College in 1961. In M.Sc. Examination (1963) from the University of Rajshahi, he obtained the first position in first class and was awarded an RU Gold medal.
Career
In December 1963, Basak joined the University of Rajshahi as a lecturer in the Department of Physics. In 1978, Basak was appointed as an associate professor by the University of Dhaka, but he preferred to stay in Rajshahi where he became associate professor in the later part of 1978.
He was awarded a merit scholarship for securing the highest marks in the Faculty of Science and got admission at Imperial College, London. Owing to the 1965 Indo-Pak war, he could not take up the opportunity. In 1972, he went to the University of Birmingham with a Commonwealth Scholarship. He worked with the tensor-polarized deuteron and the polarized 3He beams, the latter being the only one of its kind in the world. He earned his Ph.D. degree in 1975.
Professional membership
Senior associate of the International Centre for Theoretical Physics at Trieste, Italy during 1987–96.
Elected as a fellow of the Bangladesh Academy of Sciences in 2001
Elected as a fellow of the Institute of Physics (London) in 2001.
Was a principal investigator from Bangladesh in a collaborative project, which was funded by the US National Science Foundation.
Fellow of the Bangladesh Physical Society from 1987 (life membership).
A member of the American Physical Society during 2000–03 and from 2013 (life membership).
Others
Was a post-doctoral fellow in nuclear physics at the Ohio State University, United States, during 1981–82.
An associate member of ICTP, Italy during 1988–1995.
Visiting scholar at Southern Illinois University, US in 1997.
Visiting professor at Kent State University, US.
Awards
Bangladesh Academy of Sciences Gold Medal in Physical Sciences (2003)
Star Lifetime Award on Physics (2016)
References
External links
Top Publications of A. K. Basak
1941 births
Living people
Bengali Hindus
Bangladeshi Hindus
Bangladeshi physicists
University of Rajshahi alumni
Academic staff of the University of Rajshahi
Alumni of the University of Birmingham
Bengali physicists
Nuclear physicists
Theoretical physicists
Fellows of Bangladesh Academy of Sciences
Physics educators
People from Pabna District
Rajshahi College alumni
Fellows of the Bangladesh Physical Society
Pabna Edward College alumni | Arun Kumar Basak | [
"Physics"
] | 594 | [
"Nuclear physicists",
"Theoretical physics",
"Theoretical physicists",
"Nuclear physics"
] |
39,710,393 | https://en.wikipedia.org/wiki/C16H16O8 | {{DISPLAYTITLE:C16H16O8}}
The molecular formula C16H16O8 (molar mass: 336.29 g/mol, exact mass = 336.084517 u) may refer to:
4-Caffeoyl-1,5-quinide
Dactylifric acid
Molecular formulas | C16H16O8 | [
"Physics",
"Chemistry"
] | 74 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
38,262,597 | https://en.wikipedia.org/wiki/SRM%20Engine%20Suite | The SRM Engine Suite is an engineering software tool used for simulating fuels, combustion and exhaust gas emissions in internal combustion engine (IC engine) applications. It is used worldwide by leading IC engine development organisations and fuel companies. The software is developed, maintained and supported by CMCL Innovations, Cambridge, U.K.
Applications
The software has been applied to simulate almost all engine applications and all transportation fuel combinations, with many examples published in leading peer-reviewed journals; a brief summary of these articles is presented here.
Spark ignition combustion mode: Sub-models to simulate direct injection spark ignition engines for regular flame propagation events and for PM and NOx exhaust gas emissions. Further analysis of knocking and irregular combustion events is facilitated through the implementation of user-defined fuel models or the chemical kinetic fuel models included with the tool.
CIDI (diesel) combustion mode: Sub-models for direct injection, turbulence and chemical kinetics enable the simulation of diesel combustion and emission analysis. Typical user projects have included combustion, PM and NOx simulation over a load-speed map, virtual engine optimization, comparison with 3D-CFD and injection strategy optimization.
Low temperature combustion mode: In low temperature combustion mode, known as HCCI or premixed CIDI combustion (PCCI, PPCI), ignition and flame propagation are more sensitive to fuel chemistry effects. By accounting for user-defined fuel models or by applying the default chemical kinetic fuel models, users benefit from enhanced predictive performance. Typical projects include identifying the operating and misfire limits for multiple fuel types.
Advanced fuels: To date the model has been applied to conventional diesel, gasoline, blends of gasoline and diesel, bio-fuels, hydrogen, natural gas, and ethanol-blended gasoline fuel applications.
Exhaust gas emissions: Through the implementation of detailed chemical kinetics in both the gas and solid particulate phases, all conventional automotive and non-road exhaust gas emissions are simulated in detail.
The model
The software is based on the stochastic reactor model (SRM), which is stated in terms of a weighted stochastic particle ensemble. The SRM is particularly useful in the context of engine modelling
as the dynamics of the particle ensemble include detailed chemical kinetics whilst accounting for inhomogeneity in composition and temperature space arising from on-going fuel injection, heat transfer and turbulent mixing events. Through this coupling, heat release profiles and in particular the associated exhaust gas emissions (particulates, NOx, carbon monoxide, unburned hydrocarbons etc.) can be predicted more accurately than with the more conventional approaches of standard homogeneous and multi-zone reactor methods.
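One ingredient of such stochastic reactor models can be sketched generically: a Curl-type pairwise mixing step, which relaxes inhomogeneity in the particle ensemble while conserving ensemble means. This is an illustrative sketch of the general technique, not CMCL's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def curl_mixing_step(phi, n_pairs):
    """One Curl-type mixing step for a stochastic (particle) reactor model.

    phi     -- (N, k) array: each row is one notional particle's composition
               and temperature state
    n_pairs -- number of particle pairs mixed this step (set by the mixing
               frequency and time step in a real model)

    Selected pairs are moved to their common mean, which reduces the
    ensemble's inhomogeneity while conserving the ensemble average.
    """
    n = phi.shape[0]
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        mean = 0.5 * (phi[i] + phi[j])
        phi[i] = phi[j] = mean
    return phi

# Toy ensemble: 1000 particles, state = (fuel mass fraction, temperature).
phi = np.column_stack([rng.uniform(0, 0.1, 1000), rng.uniform(800, 1200, 1000)])
print("std before mixing:", phi.std(axis=0))
curl_mixing_step(phi, n_pairs=400)
print("std after mixing: ", phi.std(axis=0))
```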
Coupling with third party software tools
The software can be coupled as a plug-in into 1D engine cycle software tools and is capable of simulating the combustion and emissions during the closed-volume period of the cycle (combustion, TDC and negative valve overlap).
An advanced application programming interface enables the model to be coupled with user-defined codes such as 3D-CFD or control software.
See also
Chemical kinetics
Internal combustion engine
Computational fluid dynamics
Kinetics
References
External links
Engines
Combustion
Computational science
Fluid dynamics | SRM Engine Suite | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 624 | [
"Machines",
"Engines",
"Applied mathematics",
"Chemical engineering",
"Physical systems",
"Computational science",
"Combustion",
"Piping",
"Fluid dynamics"
] |
38,262,758 | https://en.wikipedia.org/wiki/Hes3%20signaling%20axis | The STAT3-Ser/Hes3 signaling axis is a specific type of intracellular signaling pathway that regulates several fundamental properties of cells.
Overview
Cells in tissues need to be able to sense and interpret changes in their environment. For example, cells must be able to detect when they are in physical contact with other cells in order to regulate their growth and avoid the generation of tumors (“carcinogenesis”). In order to do so, cells place receptor molecules on their surface, often with a section of the receptor exposed to the outside of the cell (extracellular environment), and a section inside the cell (intracellular environment). These molecules are exposed to the environment outside of the cell and, therefore, in position to sense it. They are called receptors because when these come into contact with particular molecules (termed ligands), then chemical changes are induced to the receptor. These changes typically involve alterations in the three-dimensional shape of the receptor. These 3D structure changes affect both the extracellular and intracellular parts (domains) of the receptor. As a result, interaction of a receptor with its specific ligand which is located outside of the cell causes changes to the receptor part which is inside the cell. A signal from the extracellular space, therefore, can affect the biochemical state inside the cell.
Following receptor activation by the ligand, several steps can sequentially ensue. For example, the 3D shape changes to the intracellular domain may render it recognizable to catalytic proteins (enzymes) that are located inside the cell and have physical access to it. These enzymes may then induce chemical changes to the intracellular domain of the activated receptor, including the addition of phosphate chemical groups to specific components of the receptor (phosphorylation), or the physical separation (cleavage) of the intracellular domain. Such modifications may enable the intracellular domain to act as an enzyme itself, meaning that it may now catalyze the modification of other proteins in the cell. Enzymes which catalyze phosphorylation modifications are termed kinases. These modified proteins may then also be activated and enabled to induce further modifications to other proteins, and so on. This sequence of catalytic modifications is termed a “signal transduction pathway” or “second messenger cascade”. It is a critical mechanism employed by cells to sense their environment and induce complex changes to their state. Such changes may include, as noted, chemical modifications to other molecules, as well as decisions concerning which genes are activated and which are not (transcriptional regulation).
There are many signal transduction pathways in a cell and each of these involves many different proteins. This provides many opportunities for different signal transduction pathways to interact (cross-talk). As a result, a cell simultaneously processes and interprets many different signals, as would be expected since the extracellular environment contains many different ligands. Cross-talk also allows the cell to integrate these many signals as opposed to processing them independently. For example, mutually opposing signals may be activated at the same time by different ligands, and the cell can interpret these signals as a whole.
Signal transduction pathways are widely studied in biology as they provide mechanistic understanding of how a cell operates and takes critical decisions (e.g. to multiply, move, die, activate genes etc.). These pathways also provide many drug targets and are of great relevance to drug discovery efforts.
Technical overview
The notch/STAT3-Ser/Hes3 signaling axis is a recently identified signal transduction branch of the notch signaling pathway, originally shown to regulate the number of neural stem cells in culture and in the living adult brain. Pharmacological activation of this pathway opposed the progression of neurodegenerative disease in rodent models. More recent efforts have implicated it in carcinogenesis and diabetes. The pathway can be activated by soluble ligands of the notch receptor which induce the sequential activation of intracellular kinases and the subsequent phosphorylation of STAT3 on the serine residue at amino acid position 727 (STAT3-Ser). This modification is followed by an increase in the levels of Hes3, a transcription factor belonging to the Hes/Hey family of genes (see HES1). Hes3 has been used as a biomarker to identify putative endogenous stem cells in tissues. The pathway is an example of non-canonical signaling as it represents a new branch of a previously established signaling pathway (notch). Several efforts are currently aimed at relating this pathway to other signaling pathways and to manipulate it in a therapeutic context.
Discovery
In canonical notch signaling, ligand proteins bind to the extracellular domain of the notch receptor and induce the cleavage and release of the intracellular domain into the cytoplasm. This subsequently interacts with other proteins, enters the nucleus, and regulates gene expression.
In 2006, a non-canonical branch of the notch signaling pathway was discovered. Using cultures of mouse neural stem cells, notch activation was shown to lead to the phosphorylation of several kinases (PI3K, Akt, mTOR) and subsequent phosphorylation of the serine residue of STAT3 in the absence of any detectable phosphorylation of the tyrosine residue of STAT3, a modification that is widely studied in the context of cancer biology. Following this event, Hes3 mRNA was elevated within 30 minutes. Subsequently, the consequences of this pathway were studied.
Activators
Various inputs into this pathway have been identified. Activators include ligands of a number of receptors. Because certain signal transduction pathways oppose the STAT3-Ser/Hes3 signaling axis, blockers (inhibitors) of these signal transduction pathways promote the STAT3-Ser/Hes3 signaling axis and, therefore, also act as activators:
A non-canonical branch of the notch signaling pathway (activated by soluble forms of the notch ligands Delta4 and Jagged1). This has been shown in vitro and in vivo.
Activation of the Tie2 receptor by the ligand Angiopoietin 2. This has been shown in vitro and in vivo.
Activation of the insulin receptor by insulin. This has been shown in vitro and in vivo.
Treatment with an inhibitor of the Janus kinase (JAK). This has been shown in vitro.
Treatment with an inhibitor of the p38 MAP kinase kinase. This has been shown in vitro.
Treatment with cholera toxin. This has been shown in vitro. This particular treatment may bypass the STAT3-Ser stage and act more specifically at the level of Hes3 because it has a powerful effect on inducing the nuclear translocation of Hes3.
Cells in which it operates
The effects of a particular signal transduction pathway can be very different among distinct cell types. For example, the same signal transduction pathway may promote the survival of one cell type but the maturation of another. This depends both on the nature of a cell but also on its particular state which may change over the course of its lifetime. Identifying cell types where a signal transduction pathway is operational is a first step to uncovering potentially new properties of this pathway.
The STAT3-Ser/Hes3 signaling axis has been shown to operate on various cell types. So far, research has mostly focused on stem cells and cancerous tissue and, more recently, in the function of the endocrine pancreas:
Fetal and adult mouse and rat neural stem cells.
Adult monkey (rhesus macaque) neural stem cells.
Human cancer stem cells from glioblastoma multiforme.
In a human prostate cancer cell line, STAT3-Ser was shown to promote tumorigenesis independently of STAT3-Tyr.
Chromaffin progenitor cells of the bovine adrenal medulla.
Mouse insulinoma cells (MIN6 cell line) and mouse pancreatic islet cells.
Mouse embryonic fibroblasts (MEF) during reprogramming to the induced pluripotent stem cell state.
Human embryonic stem cells
Mouse neural stem cells derived from induced pluripotent stem cells.
Biological consequences
An individual signal transduction pathway can regulate several proteins (e.g. kinases) as well as the activation of many genes. The consequences to the properties of the cell can be, therefore, very prominent. Identifying these properties (through theoretical predictions and experimentation) sheds light on the function of the pathway and provides possible new therapeutic targets.
Activation of the notch/STAT3-Ser/Hes3 signaling axis has significant consequences to several cell types; effects have been documented both in vitro and in vivo:
Cultured fetal and adult rodent neural stem cells: Pro-survival effects; increased yield; increased expression of sonic hedgehog protein.
In vivo adult rodent neural stem cells: Increase in cell number; increased expression of Sonic hedgehog (Shh) protein. Delta4 administration in the adult rodent brain has also been shown to augment the effect of basic fibroblast growth factor and epidermal growth factor in promoting the proliferation of neural precursor cells in the subventricular zone and hypothalamus following ischemic stroke.
Cultured adult monkey neural stem cells: Pro-survival effects; increased yield; increased expression of sonic hedgehog protein.
Cultured putative glioblastoma multiforme cancer stem cells: Pro-survival effects (Hes3 knockdown by RNA interference reduces cell number).
Cultured bovine chromaffin progenitor cells: Several activators of the signaling pathway increase cell yield.
Cultured mouse insulinoma cells (MIN6 cell line): These cells can be cultured efficiently under conditions that promote the operation of the signaling pathway; Hes3 RNA interference opposes growth and the release of insulin following standard protocols that evoke insulin release from these cells.
Mice that are engineered to lack the Hes3 gene exhibit increased sensitivity to treatments that damage endocrine pancreas cells.
Recent research implicates Hes3 in direct reprogramming of adult mouse cells to the neural stem cell state; a causative relation remains to be determined.
Hes3 and components of the Signaling Axis are regulated during critical stages of reprogramming (Mouse Embryonic Fibroblast - to - Embryonic Stem Cell reprogramming).
Mice genetically engineered to lack the Hes3 gene fail to upregulate the transcription factor Neurogenin3 during pancreatic regeneration (induced by streptozotocin treatment). This is indicative of a compromised regenerative response.
Role in the adult brain
As stated above, the STAT3-Ser/Hes3 signaling axis regulates the number of neural stem cells (as well as other cell types) in culture. This prompted experiments to determine if the same pathway can also regulate the number of naturally resident (endogenous) neural stem cells in the adult rodent brain. If so, this would generate a new experimental approach to study the effects of increasing the number of endogenous neural stem cells (eNSCs). For example, would this lead to the replacement of lost cells by newly generated cells from eNSCs? Or, could this lead to the rescue of damaged neurons in models of neurodegenerative disease, since eNSCs are known to produce factors that can protect injured neurons?
Various treatments that input into the STAT3-Ser/Hes3 signaling axis (Delta4, Angiopoietin 2, insulin, or a combined treatment consisting of all three factors and an inhibitor of JAK) induce the increase in numbers of endogenous neural stem cells as well as behavioral recovery in models of neurodegenerative disease. Several pieces of evidence suggest that in the adult brain, pharmacological activation of the STAT3-Ser/Hes3 signaling axis protects compromised neurons through increased neurotrophic support provided by activated neural stem cells / neural precursor cells, which can be identified by their expression of Hes3:
These treatments increase the number of Hes3+ cells by several-fold.
Hes3+ cells can be isolated and placed in culture where they exhibit stem cell properties.
In culture and in vivo, Hes3+ cells express Shh, which supports the survival of certain neurons [Hes3+ cells may also express other pro-survival factors, yet unidentified].
The distribution of Hes3+ cells in the adult brain is widespread and can be found in close physical proximity to different types of neurons.
Diverse treatments that converge to the STAT3-Ser/Hes3 signaling axis exert similar effects in the normal brain (increase in the number of Hes3+ cells) and in the compromised brain (increase in the number of Hes3+ cells, oppose neuronal death, and improve behavioral state).
Macrophage migration inhibitory factor stimulates this signaling pathway and promotes the survival of neural stem cells.
Mice genetically engineered to lack the Hes3 gene exhibit differences in the amount of myelin basic protein (a protein expressed on myelinating oligodendrocytes), relative to normal mice; Hes3-lacking mice also exhibit a different regulation of this protein after oligodendrocyte damage induced by the chemical cuprizone.
Implications to disease
The emerging understanding of the role of eNSCs in the adult mammalian brain suggested the relevance of these cells to disease. To address this issue, experiments were performed where the activation of eNSCs was induced in models of disease. This allowed the study of the consequences of activating eNSCs in the diseased brain. Several lines of evidence implicate the STAT3-Ser/Hes3 signaling axis in various diseases:
Activation of the signaling pathway by Delta4 in combination with basic fibroblast growth factor (bFGF) induces motor and sensory skill improvements in adult rat models of ischemic stroke (PMCAO model).
This signaling pathway may mediate pro-survival functions of macrophage migration inhibitory factor on neural stem cells.
Activation of the signaling pathway by Delta4, Angiopoietin 2, insulin, or a combination of the three and a JAK inhibitor induces motor skill improvements in adult rat models of Parkinson's disease (6-hydroxydopamine model).
RNA interference (“knockdown”) of Hes3 in cultures of cells with cancer stem cell properties from patients with glioblastoma multiforme reduces cell number.
Mice lacking Hes3 exhibit increased sensitivity to particular paradigms of pancreatic islet damage, suggesting roles in diabetes.
Tissue cytoarchitecture
In tissues, many different cell types interact with one another. In the brain, for example, neurons, astrocytes, and oligodendrocytes (specialized cells of the neural tissue, each with specific functions) interact with one another as well as with cells that comprise blood vessels. All these different cell types may interact with all others by the production of ligands that may activate receptors on the cell surface of other cell types. Understanding the way these different cell types interact with one another will allow to predict ways of activating eNSCs. For example, because eNSCs are found in close proximity with blood vessels, it has been hypothesized that signals (e.g., ligands) from cells comprising the blood vessel act on receptors found on the cell surface of eNSCs.
Endogenous neural stem cells are often in close physical proximity to blood vessels. Signals from blood vessels regulate their interaction with stem cells and contribute to the cytoarchitecture of the tissue. The STAT3-Ser/Hes3 signaling axis operating in Hes3+ cells is a convergence point for several of these signals (e.g. Delta4, Angiopoietin 2). Hes3, in turn, by regulating the expression of Shh and potentially other factors, can also exert an effect on blood vessels and other cells comprising their microenvironment.
References
External links
Innate Repair Laboratory
National Institutes of Health - Stem Cell Information
Volkswagen Stiftung Symposium, 2014: DiSCUSS - Cancer Stem Cells Meeting
Volkswagen Stiftung Symposium, 2011: DiSCUSS Meeting
Stem cells
Stem cell research
Regenerative biomedicine
Biologically based therapies
Molecular neuroscience
Signal transduction | Hes3 signaling axis | [
"Chemistry",
"Biology"
] | 3,339 | [
"Stem cell research",
"Signal transduction",
"Molecular neuroscience",
"Translational medicine",
"Tissue engineering",
"Molecular biology",
"Biochemistry",
"Neurochemistry"
] |
38,262,855 | https://en.wikipedia.org/wiki/Nanothermometry | Nanothermometry is a branch of physics and engineering exploring the use of non-invasive, precise thermometers working at the nanoscale. These devices offer high spatial resolution (below one micrometer), at scales where conventional methods are ineffective.
Sensitivity of a nanothermometer
The sensitivity is a parameter that characterizes a thermometer, giving the relative change in the thermometer's output per degree of temperature change. Numerically, it can be computed from the calibration curve (the temperature dependence of the thermometric parameter, Q) as
$$S_r = \frac{1}{Q}\left|\frac{\partial Q}{\partial T}\right|.$$
As Sr have small values, usually it is expressed as a percentage, like 1.0%· K−1, meaning that a degree change in temperature will be measured in the thermometric parameter as a change of 1.0%. This quantity is telling to determine the appropriate detector to be used in order to measure the temperature from the thermometric parameter change.
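As a worked illustration (added here, not part of the original article), the relative sensitivity can be estimated numerically from a measured calibration curve. The exponential form of Q(T) below is an assumed toy model rather than data from any real thermometer:

```python
import numpy as np

# Toy calibration curve: thermometric parameter Q (e.g., an intensity ratio)
# sampled over a temperature range. The exponential form is an assumption
# used only for illustration.
T = np.linspace(290.0, 330.0, 41)           # temperature in K
Q = 2.0 * np.exp(-0.012 * (T - 290.0))      # assumed Q(T)

# Relative sensitivity S_r = (1/Q) |dQ/dT|, estimated by finite differences.
S_r = np.abs(np.gradient(Q, T)) / Q

# Express as percent per kelvin, as is conventional in the field.
i = np.argmin(np.abs(T - 300.0))
print(f"S_r at 300 K is about {100 * S_r[i]:.2f} %/K")
```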
Luminescent nanothermometers
The well-known limitations of contact thermometers at the submicron scale led to the development of non-contact thermometry techniques such as IR thermography, thermoreflectance, optical interferometry, Raman spectroscopy, and luminescence. Luminescence nanothermometry exploits the relationship between temperature and luminescence properties to achieve thermal sensing from the spatial and spectral analysis of the light generated by the object to be thermally imaged.
References
Nanotechnology | Nanothermometry | [
"Materials_science",
"Engineering"
] | 307 | [
"Nanotechnology",
"Materials science"
] |
38,266,312 | https://en.wikipedia.org/wiki/Truncated%20tetrapentagonal%20tiling | In geometry, the truncated tetrapentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1,2{4,5} or tr{4,5}.
Symmetry
There are four small-index subgroups constructed from [5,4] by mirror removal and alternation. In these images, fundamental domains are alternately colored black and white, and mirrors exist on the boundaries between colors.
A radical subgroup is constructed [5*,4], index 10, as [5+,4], (5*2) with gyration points removed, becoming orbifold (*22222), and its direct subgroup [5*,4]+, index 20, becomes orbifold (22222).
Related polyhedra and tiling
See also
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Truncated tilings
Uniform tilings | Truncated tetrapentagonal tiling | [
"Physics"
] | 277 | [
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,266,320 | https://en.wikipedia.org/wiki/Tetrapentagonal%20tiling | In geometry, the tetrapentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t1{4,5} or r{4,5}.
Symmetry
A half symmetry [1+,4,5] = [5,5] construction exists, which can be seen as two colors of pentagons. This coloring can be called a rhombipentapentagonal tiling.
Dual tiling
The dual tiling is made of rhombic faces and has a face configuration V4.5.4.5:
Related polyhedra and tiling
See also
Binary tiling, an aperiodic tiling of the hyperbolic plane by pentagons
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Isotoxal tilings
Uniform tilings | Tetrapentagonal tiling | [
"Physics"
] | 259 | [
"Isotoxal tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,266,327 | https://en.wikipedia.org/wiki/Rhombitetrapentagonal%20tiling | In geometry, the rhombitetrapentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,2{4,5}.
Dual tiling
The dual is called the deltoidal tetrapentagonal tiling with face configuration V.4.4.4.5.
Related polyhedra and tiling
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Uniform tilings in hyperbolic plane
List of regular polytopes
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Uniform tilings | Rhombitetrapentagonal tiling | [
"Physics"
] | 187 | [
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,266,328 | https://en.wikipedia.org/wiki/Truncated%20order-4%20pentagonal%20tiling | In geometry, the truncated order-4 pentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{5,4}.
Uniform colorings
A half symmetry [1+,4,5] = [5,5] coloring can be constructed with two colors of decagons. This coloring is called a truncated pentapentagonal tiling.
Symmetry
There is only one subgroup of [5,5], [5,5]+, removing all the mirrors. This symmetry can be doubled to 542 symmetry by adding a bisecting mirror.
Related polyhedra and tiling
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Uniform tilings in hyperbolic plane
List of regular polytopes
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-4 tilings
Pentagonal tilings
Truncated tilings
Uniform tilings | Truncated order-4 pentagonal tiling | [
"Physics"
] | 254 | [
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,266,339 | https://en.wikipedia.org/wiki/Truncated%20order-5%20square%20tiling | In geometry, the truncated order-5 square tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{4,5}.
Related polyhedra and tiling
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
See also
Uniform tilings in hyperbolic plane
List of regular polytopes
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-5 tilings
Square tilings
Truncated tilings
Uniform tilings | Truncated order-5 square tiling | [
"Physics"
] | 164 | [
"Truncated tilings",
"Isogonal tilings",
"Tessellation",
"Hyperbolic tilings",
"Uniform tilings",
"Symmetry"
] |
38,268,733 | https://en.wikipedia.org/wiki/Quantum%20carpet | In quantum mechanics, a quantum carpet is a regular art-like pattern drawn by the wave function evolution or the probability density in the space of the Cartesian product of the quantum particle position coordinate and time, or in spacetime, resembling carpet art. It is the result of self-interference of the wave function during its interaction with reflecting boundaries. For example, in the infinite potential well, after the spread of the initially localized Gaussian wave packet in the center of the well, various pieces of the wave function start to overlap and interfere with each other after reflection from the boundaries. The geometry of a quantum carpet is mainly determined by the quantum fractional revivals.
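A minimal numerical sketch of how such a carpet can be produced (added here for illustration; the well width, packet parameters, and grid sizes are arbitrary choices): expand a Gaussian packet in the energy eigenstates of the infinite well and evaluate the probability density on an (x, t) grid.

```python
import numpy as np

# Quantum carpet for a particle in an infinite square well on [0, L],
# in units where hbar = m = 1. All parameters are illustrative.
L, N = 1.0, 60                        # well width, number of eigenstates
x = np.linspace(0.0, L, 300)
t = np.linspace(0.0, 0.3, 300)
dx = x[1] - x[0]

n = np.arange(1, N + 1)
E = (n * np.pi / L) ** 2 / 2.0        # E_n = n^2 pi^2 / (2 L^2)
phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, x) * np.pi / L)

# Initial Gaussian packet centred in the well, with some momentum.
psi0 = np.exp(-(x - L / 2) ** 2 / (2 * 0.05 ** 2)) * np.exp(1j * 30.0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

c = phi @ (psi0 * dx)                 # expansion coefficients <phi_n|psi0>

# |psi(x, t)|^2 on the grid -- the "carpet" (plot e.g. with imshow).
carpet = np.abs(np.einsum("n,nt,nx->tx",
                          c, np.exp(-1j * np.outer(E, t)), phi)) ** 2
print(carpet.shape)
```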
Quantum carpets demonstrate many principles of quantum mechanics, including wave-particle duality, quantum revivals, and decoherence. Thus, they illustrate certain aspects of theoretical physics.
In 1995, Michael Berry created the first quantum carpet, which described the momentum of an excited atom. Today, physicists use quantum carpets to demonstrate complex theoretical principles.
Quantum carpets that demonstrate theoretical principles
Wave-particle duality
Quantum carpets demonstrate wave-particle duality by showing interference within wave packets.
Wave-particle duality is difficult to comprehend. However, quantum carpets provide an opportunity to visualize this property. Consider the graph of the probability distribution of an excited electron in a confined space (particle in a box), where brightness of color corresponds to momentum. Lines of dull color (ghost terms or canals) appear across the quantum carpet. In these canals, the momentum of the electron is very small. Destructive interference, when the trough of a wave overlaps with the crest of another wave, causes these ghost terms. In contrast, some areas of the graph display bright color. Constructive interference, when the crests of two waves overlap to form a larger wave, causes these bright colors. Thus, quantum carpets provide visual evidence of interference within electrons and other wave packets. Interference is a property of waves, not particles, so interference within these wave packets proves that they have properties of waves in addition to properties of particles. Therefore, quantum carpets display wave-particle duality.
Quantum revivals
Quantum carpets demonstrate quantum revivals by showing the periodic expansions and contractions of wave packets.
When the momentum of a wave packet is graphed on a quantum carpet, it displays an intricate pattern. When the temporal evolution of this wave packet is graphed on quantum carpets, the wave packet expands, and the initial pattern is lost. However, after a certain period of time, the waveform contracts and returns to its original state, and the initial pattern is restored. This continues to occur with periodic regularity. Quantum revivals, the periodic expansion and contraction of wave packets, are responsible for the restoration of the pattern. Although quantum revivals are mathematically complex, they are simple and easy to visualize on quantum carpets, as patterns expanding and reforming. Thus, quantum carpets provide clear visual evidence of quantum revivals.
Decoherence
Quantum carpets demonstrate decoherence by showing a loss of coherence over time.
When the temporal evolution of an electron, photon, or atom is graphed on a quantum carpet, there is initially a distinct pattern. This distinct pattern shows coherence. That is to say, the wave can be split in two pieces and recombined to form a new wave. However, this pattern fades with time and eventually devolves into nothing. When the pattern fades, coherence is lost, and it is impossible to split the wave in two and recombine it. This loss of coherence is called decoherence. A set of complex mathematical equations models decoherence. However, a simple loss of pattern shows decoherence in quantum carpets. Thus, quantum carpets are a tool to visualize and simplify decoherence.
History
While performing an experiment on optics, English physicist Henry Fox Talbot inadvertently discovered the key to quantum carpets. In this experiment, a wave struck a diffraction grating, and Talbot noticed that the patterns of grating repeated themselves with periodic regularity. This phenomenon became known as the Talbot Effect. The bands of light that Talbot discovered were never graphed on an axis, and thus, he never created a true quantum carpet. However, the bands of light were similar to the images on a quantum carpet. Centuries later, physicists graphed the Talbot effect, creating the first quantum carpet. Since then, scientists have turned to quantum carpets as visual evidence for quantum theory.
References
Quantum mechanics | Quantum carpet | [
"Physics"
] | 906 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
38,269,169 | https://en.wikipedia.org/wiki/Amurensine | Amurensine is an alkaloid found in Papaver species such as P. alpinum, P. pyrenaicum, P. suaveolens, P. tatricum, and P. nudicaule. It is a member of the isoquinoline group.
See also
C19H19NO4
References
Alkaloids found in Papaveraceae
Benzylisoquinoline alkaloids
Tropane alkaloids
Benzodioxoles
Heterocyclic compounds with 5 rings | Amurensine | [
"Chemistry"
] | 110 | [
"Alkaloids by chemical classification",
"Tropane alkaloids"
] |
38,269,892 | https://en.wikipedia.org/wiki/Crushing%20plant | A crushing plant is a one-stop crushing installation, which can be used for rock crushing, garbage crushing, building materials crushing and other similar operations. Crushing plants may be either fixed or mobile. A crushing plant has different stations (primary, secondary, tertiary, ...) where different crushing, selection and transport cycles are done in order to obtain different stone sizes or the required granulometry.
Components
Crushing plants make use of a large range of equipment, such as a pre-screener, loading conveyor, intake hopper, magnetic separator, and one or more crushing units, such as jaw crushers and cone crushers.
Vibration feeder: These machines feed the jaw and impact crusher with the rocks and stones to be crushed.
Crushers: These are the machines where the rocks and stones are crushed. There are different types of crushers for different types of rocks and stones and different sizes of the input and output material. Each plant would incorporate one or several crushing machines depending on the required final material (small stones or sand).
Vibrating screen: These machines are used to separate the different sizes of the material obtained by the crushers.
Belt conveyor: These elements are the belts used for transportation of the material from one machine to another during different phases of process.
Central electric control system: Control and monitor the operation of the entire system.
Process of crushing plant
Raw materials are evenly and gradually conveyed into the jaw crusher for primary crushing via the hopper of the vibrating feeder.
The crushed stone materials are conveyed to the secondary crusher by belt conveyor before they are sent to the vibrating screen to be separated.
After separating, qualified materials are taken away as final products, while unqualified materials are carried back to the crusher for recrushing. Final products can be classified according to different size ranges, and the size of the final products can be adjusted to customer requirements.
The material may pass through the crushers several times before the required product is obtained. Dust is generated during the crushing process, so dust control units are needed.
See also
Crusher
References
Mining equipment
Manufacturing | Crushing plant | [
"Engineering"
] | 448 | [
"Mining equipment",
"Manufacturing",
"Mechanical engineering"
] |
32,659,088 | https://en.wikipedia.org/wiki/Favard%27s%20theorem | In mathematics, Favard's theorem, also called the Shohat–Favard theorem, states that a sequence of polynomials satisfying a suitable three-term recurrence relation is a sequence of orthogonal polynomials. The theorem was introduced in the theory of orthogonal polynomials by and , though essentially the same theorem was used by Stieltjes in the theory of continued fractions many years before Favard's paper, and was rediscovered several times by other authors before Favard's work.
Statement
Suppose that $y_0 = 1, y_1, \dots$ is a sequence of polynomials where $y_n$ has degree $n$. If this is a sequence of orthogonal polynomials for some positive weight function then it satisfies a 3-term recurrence relation. Favard's theorem is roughly a converse of this, and states that if these polynomials satisfy a 3-term recurrence relation of the form

$$y_n(x) = (x - c_n)\, y_{n-1}(x) - d_n\, y_{n-2}(x)$$

for some numbers $c_n$ and $d_n$,
then the polynomials yn form an orthogonal sequence for some linear functional Λ with Λ(1)=1; in other words Λ(ymyn) = 0 if m ≠ n.
The linear functional Λ is unique, and is given by Λ(1) = 1, Λ(yn) = 0 if n > 0.
The functional $\Lambda$ satisfies $\Lambda(y_n^2) = d_n \Lambda(y_{n-1}^2)$, which implies that $\Lambda$ is positive definite if (and only if) the numbers $c_n$ are real and the numbers $d_n$ are positive.
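A quick numerical check of the theorem (added here; the concrete choice of coefficients is an assumption of the demo): with $c_n = 0$, $d_2 = 1/2$ and $d_n = 1/4$ for $n \ge 3$, the recurrence generates the monic Chebyshev polynomials of the first kind, which are orthogonal for the weight $1/\sqrt{1-x^2}$ on $[-1, 1]$.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Build y_0, y_1, ... from y_n = (x - c_n) y_{n-1} - d_n y_{n-2}
# with c_n = 0, d_2 = 1/2, d_n = 1/4 (n >= 3): monic Chebyshev polynomials.
N = 6
ys = [np.array([1.0]), np.array([0.0, 1.0])]          # y_0 = 1, y_1 = x
for n in range(2, N + 1):
    d = 0.5 if n == 2 else 0.25
    ys.append(P.polysub(P.polymulx(ys[-1]), d * ys[-2]))

# Check Lambda(y_m y_n) ~ 0 for m != n, where Lambda integrates against
# the weight 1/sqrt(1 - x^2) on [-1, 1] (Gauss-Chebyshev quadrature).
k = np.arange(1, 201)
nodes = np.cos((2 * k - 1) * np.pi / 400.0)           # 200 quadrature nodes
w = np.pi / 200.0
gram = np.array([[w * np.sum(P.polyval(nodes, a) * P.polyval(nodes, b))
                  for a in ys] for b in ys])
print(np.round(gram, 10))                             # off-diagonal ~ 0
```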
See also
Jacobi operator
James Alexander Shohat
Jean Favard
References
Reprinted by Dover 2011,
Orthogonal polynomials
Theorems in approximation theory | Favard's theorem | [
"Mathematics"
] | 329 | [
"Theorems in approximation theory",
"Theorems in mathematical analysis"
] |
48,764,998 | https://en.wikipedia.org/wiki/Plant%20strategies | Plant strategies include mechanisms and responses plants use to reproduce, defend, survive, and compete on the landscape. The term “plant strategy” has existed in the literature since at least 1965, however multiple definitions exist. Strategies have been classified as adaptive strategies (through a change in the genotype), reproductive strategies, resource allocation strategies, ecological strategies, and functional trait based strategies, to name a few. While numerous strategies exist, one underlying theme is constant: plants must make trade-offs when responding to their environment. These trade-offs and responses lay the groundwork for classifying the strategies that emerge.
Background
The concept of plant strategies started gaining attention in the 1960s and 1970s. At this time, strategies were often associated with genotypic changes, such that plants could respond to their environment by changing their “genotypic programme” (i.e., strategy). Around this same time, the r/K selection theory was introduced, which classifies plants by life history strategies, particularly reproductive strategies. In general, plants alter their reproductive strategies (i.e., number of offspring) and their growth rate to respond to their ecological niche. The theory is still popular in the 21st century and frequently taught in science curricula. However, plant strategies really gained notoriety in 1977 with the introduction of Grime’s C-S-R Triangle, which categorizes plants according to how they respond under varying levels of stress and competition. According to Grime, plants develop strategies that demonstrate resource trade-offs between growth, reproduction, and maintenance. The association between genotypic change and strategies was also still present in Grime’s theories, as he noted that the “genotypes of the majority of plants appear to represent compromises between the conflicting selection pressures” that generally classify plants into three strategy types. The C-S-R Triangle remained the dominant plant strategy for several decades. However, in the early 1980s David Tilman introduced the R* theory, which focused on resource partitioning as strategies to deal with competition. More recently, additional strategies have been introduced. In 1998, the L-H-S strategy scheme was introduced as an alternative to Grime's C-S-R scheme. The L-H-S strategy focuses on leaf and seed mass traits to classify plant strategies, noting that these traits can be measured and compared between species, which cannot easily be done with Grime's abstract categories. The goal of the L-H-S scheme was to develop an international network that could provide quantifiable comparisons between plant strategies. This started a movement towards incorporating functional traits in plant strategies, and understanding how plant functional traits and environmental factors are related. While Grime's C-S-R Triangle is still frequently referenced in plant ecology, new strategies are being introduced and gaining momentum in the 21st century.
Grime's C-S-R Triangle / Universal Adaptive Strategy Theory (UAST)
J. P. Grime identified two factor gradients, broadly categorized as disturbance and stress, which limit plant biomass. Stresses include factors such as the availability of water, nutrients, and light, along with growth-inhibiting influences like temperature and toxins. Conversely, disturbance encompasses herbivory, pathogens, anthropogenic interactions, fire, wind, etc. Emerging from high and low combinations of stress and disturbance are three life strategies commonly used to categorize plants based on environment: (1) C-competitors, (2) S-stress tolerators, and (3) R-ruderals. There is no viable strategy for plants in high stress and high disturbance environments, therefore categorization for this habitat type is absent.
Each life strategy varies in trade-offs of resource allocation to seed production, leaf morphology, leaf longevity, relative growth rate, and other factors, which can be summarized as allocation to (1) growth, (2) reproduction, and (3) maintenance. Competitors are primarily composed of species with high relative growth rate, short leaf-life, relatively low seed production, and high allocation to leaf construction. They persist in high nutrient, low disturbance environments, and “rapidly monopolize resource capture by the spatially-dynamic foraging of roots and shoots.” Stress-tolerators, found in high stress, low disturbance habitats, allocate resources to maintenance and defenses, such as anti-herbivory. Species are often evergreen with small, long-lived leaves or needles, slow resource turnover, and low plasticity and relative growth rate. Due to high stress conditions, vegetative growth and reproduction are reduced. Ruderals, inhabiting low stress, high disturbance regimes, allocate resources mainly to seed reproduction and are often annuals or short-lived perennials. Common characteristics of ruderal species include high relative growth rate, short-lived leaves, and short statured plants with minimal lateral expansion.
Tilman’s R* Rule
G. David Tilman developed the R* rule in support of resource competition theory. Theoretically, a plant species growing in monoculture, and utilizing a single limiting resource, will deplete the resource until reaching an equilibrium level where growth and losses are balanced. The concentration of the resource at the equilibrium level is termed R*; this is the minimum concentration at which the plant is able to persist in the environment. Population growth is indicated by values greater than the R*. Conversely, population decline is associated with values lower than the R*. If two species are competing for the same limiting resource, the superior competitor will have the lowest R* value for that resource. This will eventually lead to the displacement of the inferior competitor, regardless of initial plant densities. Displacement rate depends on the magnitude of the difference in R*. Greater differences lead to a faster exclusion. Every plant species differs in R* values due to differences in plant morphology and physiology. The realized R* level is dependent on physical factors that vary by habitat, such as temperature, pH, and humidity.
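The exclusion principle is easy to see in a simulation. The sketch below (added here) uses a standard chemostat-style consumer-resource model with Monod growth; the equations and all parameter values are illustrative assumptions, not taken from Tilman's work, but they show the species with the lower R* winning even from a large initial disadvantage.

```python
import numpy as np

# Two species competing for one resource R (chemostat-style toy model).
g = np.array([1.0, 1.2])      # maximum growth rates
k = np.array([0.2, 0.5])      # half-saturation constants
m = np.array([0.1, 0.1])      # loss (mortality/dilution) rates
S, a = 1.0, 0.5               # resource supply level and turnover rate

print("R* values:", m * k / (g - m))   # R* = m k / (g - m); lower wins

N = np.array([0.01, 1.0])     # species 1 (lower R*) starts 100x rarer
R, dt = S, 0.01
for _ in range(200_000):      # simple forward-Euler integration
    growth = g * R / (k + R)
    dN = N * (growth - m)
    dR = a * (S - R) - np.sum(N * growth)
    N = np.maximum(N + dt * dN, 0.0)
    R = max(R + dt * dR, 0.0)

print("final densities:", np.round(N, 4), "  final R:", round(R, 4))
```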
Westoby’s L-H-S Strategy
In 1998, Mark Westoby proposed a plant ecology strategy scheme (PESS) to explain species distributions based on traits. The dynamic model incorporated a three-axis trade-off among specific leaf area (SLA), canopy height at maturity, and seed mass. SLA is defined as the area per unit dry mass of mature leaves, developed in the fullest natural light of the species. These traits were selected for incorporation because of their trade-off functionality: resource allocation to one trait is only possible by diverting resources from the others. Similarly to Grime's C-S-R triangle, each gradient represents different strategic responses to the environment; variation in disturbance adaptation is represented by canopy height and seed mass (Grime's R-axis), whereas SLA reflects variation in growth in response to stress (Grime's C-S axis). The L-H-S strategy avoids the assumption that high-disturbance, high-stress environments lack viable plant strategies, unlike Grime's model. However, Westoby's model is at a disadvantage when predicting potential variation in plant strategies since the axes only include single variables, compared to Grime's multivariable axes.
r/K Selection
This linear model, first introduced by MacArthur and Wilson (1967), has been commonly applied to both plants and animals to describe reproductive strategies. Representing opposing extremes of a continuum, r-species commit all energy into maximizing seed production with minimal input to individual propagules, whereas K-species allocate energy into a few, highly fit individuals; this is a spectrum of quantity versus quality. The model assumes that perfect r-species function under competitive-free environments with no density effects and K-species under maximum competitive and density saturation. Most species are categorized as intermediates between both extremes.
Summary
The term “plant strategies” has many definitions, and includes several different mechanisms for responding to one's environment. While different strategies focus on different plant characteristics, all strategies have an overarching theme: plants must make trade-offs between where and how to allocate resources. Whether that's allocation to growth, reproduction, or maintenance, plants are responding to their environment by employing strategies that allow them to persist, survive, and reproduce. Plants may have multiple strategies to survive at different life-stages and therefore be subject to multiple trade-off throughout their life-cycle.
See also
Annual vs. perennial plant evolution
References
Further reading
Shelford, V. E. 1931. Some concepts of bioecology. Ecology 12(3):455-467.
Westoby M., D. Falster, A. Moles, P. Vesk, I. Wright. 2002. Plant ecological strategies: some leading dimensions of variation between species. Annual Review of Ecology and Systematics 33:125-159.
Wright, I.J. et al. 2004. The worldwide leaf economics spectrum. Nature 428:821-827.
Craine J.M. 2005. Reconciling plant strategy theories of Grime and Tilman. Journal of Ecology 93:1041-1052.
Grime J.P. 2006. Plant Strategies, Vegetation Processes, and Ecosystem Properties. John Wiley& Sons Publishing.
Michalet R. et al. 2006. Do biotic interactions shape both sides of the humped-back model of species richness in plant communities? Ecology Letters 9:767-773.
Bornhofen S., C. Lattaud. 2008. Evolving CSR strategies in virtual plant communities. Artificial Life XI:72-79.
Laughlin, Daniel C., Plant Strategies: The Demographic Consequences of Functional Traits in Changing Environments (Oxford, 2023; online edn, Oxford Academic, 24 Aug. 2023), {{doi|10.1093/oso/9780192867940.001.0001}}, accessed 25 Oct. 2023.
Strategies | Plant strategies | [
"Biology"
] | 2,043 | [
"Plants",
"Botany"
] |
48,768,665 | https://en.wikipedia.org/wiki/Non-malleable%20code | The notion of non-malleable codes was introduced in 2009 by Dziembowski, Pietrzak, and Wichs, for relaxing the notion of error-correction and error-detection. Informally, a code is non-malleable if the message contained in a modified code-word is either the original message, or a completely unrelated value. Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error-correction and error-detection is impossible; for example, when the attacker can completely overwrite the encoded message. Although such codes do not exist if the family of "tampering functions" F is completely unrestricted, they are known to exist for many broad tampering families F.
Background
Tampering experiment
To understand how non-malleable codes operate, we first need the basic tampering experiment they are based on. The tampering experiment proceeds in the following three steps (a toy simulation is sketched after the list):
A source message s is encoded via a (possibly randomized) procedure Enc, yielding a code-word c = Enc(s).
The code-word is modified under some tampering function f to an erroneous code-word c′ = f(c).
The erroneous code-word is decoded using a procedure Dec, resulting in a decoded message s′ = Dec(c′).
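The following toy simulation (added here) instantiates the three steps with a naive duplication code. It is meant only to make the Enc / tamper / Dec pipeline concrete; this particular code merely detects the sample tampering function shown and is NOT a non-malleable code.

```python
from typing import Optional

# Toy instantiation of the tampering experiment. The duplication code used
# here is purely illustrative.
def enc(s: bytes) -> bytes:
    return s + s                                # code-word c = Enc(s)

def dec(c: bytes) -> Optional[bytes]:
    half = len(c) // 2
    left, right = c[:half], c[half:]
    return left if left == right else None      # None plays the role of "bot"

def tamper(c: bytes) -> bytes:
    return bytes([c[0] ^ 1]) + c[1:]            # one fixed tampering function f

s = b"msg"
c = enc(s)                                      # step 1: encode
c_err = tamper(c)                               # step 2: tamper, c' = f(c)
print(dec(c) == s, dec(c_err))                  # step 3: decode -> True None
```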
The tampering experiment can be used to model several interesting real-world settings, such as data transmitted over a noisy channel, or adversarial tampering of data stored in the memory of a physical device. Having this experimental base, we would like to build special encoding/decoding procedures (Enc, Dec), which give us some meaningful guarantees about the results of the above tampering experiment, for large and interesting families F of tampering functions. The following are several possibilities for the type of guarantees that we may hope for.
Error correction
One very natural guarantee, called error-correction, would be to require that for any tampering function f in F and any source message s, the tampering experiment always produces the correct decoded message s′ = s.
Error detection
A weaker guarantee, called error-detection, requires that the tampering experiment always results in either the correct value s or a special symbol ⊥ indicating that tampering has been detected. This notion of error-detection is a weaker guarantee than error-correction, and is achievable for larger families F of tampering functions.
Algorithm description
A non-malleable code ensures that either the tampering experiment results in the correct decoded message s′ = s, or the decoded message s′ is completely independent of and unrelated to the source message s. In other words, the notion of non-malleability for codes is similar, in spirit, to notions of non-malleability for cryptographic primitives (such as encryption, commitments and zero-knowledge proofs), introduced by the seminal work of Dolev, Dwork and Naor.
Compared to error correction or error detection, the "right" formalization of non-malleable codes is somewhat harder to define. Let Tamper_{f,s} be a random variable for the value s′ of the decoded message which results when we run the tampering experiment with source message s and tampering function f, over the randomness of the encoding procedure. Intuitively, we wish to say that the distribution of Tamper_{f,s} is independent of the encoded message s. Of course, we also want to allow for the case where the tampering experiment results in s′ = s (for example, if the tampering function is the identity), which clearly depends on s.
Thus, we require that for every tampering function f, there exists a distribution D_f which outputs either concrete values s′ or a special "same" symbol, and faithfully models the distribution of Tamper_{f,s} for all s in the following sense: for every source message s, the distributions of Tamper_{f,s} and D_f are statistically close when the "same" symbol is interpreted as s. That is, D_f correctly simulates the "outcome" of the tampering experiment with a function f without knowing the source message s, but it is allowed some ambiguity by outputting a "same" symbol to indicate that the decoded message should be the same as the source message, without specifying what the exact value is. The fact that D_f depends only on f and not on s shows that the outcome of the experiment is independent of s, except for equality.
Relation to error correction/detection
Notice that non-malleability is a weaker guarantee than error correction/detection; the latter ensures that any change in the code-word can be corrected or at least detected by the decoding procedure, whereas the former does allow the message to be modified, but only to an unrelated value. However, when studying error correction/detection we usually restrict ourselves to limited forms of tampering which preserve some notion of distance (usually Hamming distance) between the original and tampered code-word.
For example, it is already impossible to achieve error correction/detection for the simple family of constant functions, which for every constant c includes the function that maps all inputs to c. There is always some function in this family that maps everything to a valid code-word. In contrast, it is trivial to construct codes that are non-malleable w.r.t. this family, as the output of a constant function is clearly independent of its input. The prior works on non-malleable codes show that one can construct non-malleable codes for highly complex tampering function families for which error correction/detection cannot be achieved.
Application over tampering functions
Bit-wise independent tampering
As one very concrete example, we study non-malleability with respect to the family of functions which specify, for each bit of the code-word, whether to keep it as is, flip it, set it to 0, or set it to 1. That is, each bit of the code-word is modified arbitrarily but independently of the value of the other bits of the code-word. We call this the "bit-wise independent tampering" family. Note that this family contains constant functions and constant-error functions as subsets. Therefore, as we have mentioned, error-correction and error-detection cannot be achieved w.r.t. this family. Nevertheless, an efficient non-malleable code exists for this powerful family.
With $F_{\text{BIT}}$ we denote the family which contains all tampering functions that tamper every bit independently. Formally, this family contains all functions $f$ that are defined by $n$ functions $f_i$ (for $i = 1 \dots n$) as $f(c_1, \dots, c_n) = (f_1(c_1), \dots, f_n(c_n))$. Note that there are only 4 possible choices for each $f_i$ (i.e. how to modify a particular bit) and we name these "set to 0", "set to 1", "flip", "keep", where the meanings should be intuitive. We call the above family the bit-wise independent tampering family.
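A member of this family is easy to sample and apply; the helper names in the sketch below (added here) are made up for illustration.

```python
import random

# The four per-bit actions of the bit-wise independent tampering family.
ACTIONS = {
    "set to 0": lambda b: 0,
    "set to 1": lambda b: 1,
    "flip":     lambda b: 1 - b,
    "keep":     lambda b: b,
}

def random_bitwise_tamper(n, seed=0):
    """Sample f = (f_1, ..., f_n) with each f_i an independent action."""
    rng = random.Random(seed)
    fs = [rng.choice(list(ACTIONS.values())) for _ in range(n)]
    return lambda bits: [f(b) for f, b in zip(fs, bits)]

c = [1, 0, 1, 1, 0, 0, 1, 0]
f = random_bitwise_tamper(len(c))
print(c, "->", f(c))
```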
All families of bounded size
Probabilistic Method Approach
For any "small enough" function family F, there exists a (possibly inefficient) coding scheme which is non-malleable w.r.t. F. Moreover, for a fixed "small enough" function family F, a random coding scheme is likely to be non-malleable w.r.t. F with overwhelming probability. Unfortunately, random coding schemes cannot be efficiently represented, nor is the encoding/decoding function likely to be efficient. Therefore, this result should merely be thought of as showing "possibility" and providing a target that we should then strive to match constructively. Moreover, this result also highlights the difference between "error-correction/detection" and "non-malleability", since a result of this form could not be true for the former notions.
Random Oracle Model Approach
It is not clear what the bound from the theorem of this type actually implies. For example, it does tell us that non-malleable codes exist with respect to all efficient functions, but this is misleading as we know that efficient non-malleable codes (and ultimately we are only interested in such) cannot be non-malleable w.r.t. this class. Nevertheless, the result by the probabilistic method does give us codes which are non-malleable w.r.t. very general classes of functions in the random oracle model.
Model of tamper-resilient security
In this model, we consider two ways of interacting with the system:
Execute(x): A user can provide the system with Execute(x) queries for inputs x, in which case the system computes its output y together with an updated secret state from x and the current state s, updates the state of the system accordingly, and outputs y.
Tamper(f): We also consider tampering attacks against the system, modeled by Tamper(f) commands for functions f in F. Upon receiving such a command, the system state s is set to f(s).
An attacker that can also interact with the system via Tamper queries can potentially learn significantly more about the secret state, even recover it entirely. Therefore, we would like to have a general method for securing systems against tampering attacks, so that the ability to issue Tamper queries (at least for functions f in some large family F) cannot provide the attacker with additional information. Using a non-malleable code for this purpose yields the conclusion: if (Enc, Dec) is any coding scheme which is non-malleable w.r.t. F, then a system whose secret state is stored encoded under this scheme is also tamper-simulatable w.r.t. F.
Capacity of non-malleable codes
For every family $F$ with $|F| \le 2^{2^{\alpha n}}$, there exist non-malleable codes against $F$ with rate arbitrarily close to $1 - \alpha$ (this is achieved w.h.p. by a randomized construction).
There exist families of size $2^{2^{\alpha n}}$ against which there is no non-malleable code of rate $1 - \alpha$ (in fact this is the case w.h.p. for a random family of this size).
$1 - \alpha$ is the best achievable rate for the family of functions which are only allowed to tamper the first $\alpha n$ bits of the code-word, which is of special interest.
References
Algorithms | Non-malleable code | [
"Mathematics"
] | 2,046 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
48,769,660 | https://en.wikipedia.org/wiki/Kadison%E2%80%93Singer%20problem | In mathematics, the Kadison–Singer problem, posed in 1959, was a problem in functional analysis about whether certain extensions of certain linear functionals on certain C*-algebras were unique. The uniqueness was proved in 2013.
The statement arose from work on the foundations of quantum mechanics done by Paul Dirac in the 1940s and was formalized in 1959 by Richard Kadison and Isadore Singer. The problem was subsequently shown to be equivalent to numerous open problems in pure mathematics, applied mathematics, engineering and computer science. Kadison, Singer, and most later authors believed the statement to be false, but, in 2013, it was proven true by Adam Marcus, Daniel Spielman and Nikhil Srivastava, who received the 2014 Pólya Prize for the achievement.
The solution was made possible by a reformulation provided by Joel Anderson, who showed in 1979 that his "paving conjecture", which only involves operators on finite-dimensional Hilbert spaces, is equivalent to the Kadison–Singer problem. Nik Weaver provided another reformulation in a finite-dimensional setting, and this version was proved true using random polynomials.
Original formulation
Consider the separable Hilbert space ℓ2 and two related C*-algebras: the algebra B(ℓ2) of all continuous linear operators from ℓ2 to ℓ2, and the algebra D(ℓ2) of all diagonal continuous linear operators from ℓ2 to ℓ2.
A state on a C*-algebra A is a continuous linear functional φ such that φ(1) = 1 (where 1 denotes the algebra's multiplicative identity) and φ(x*x) ≥ 0 for every x in A. Such a state is called pure if it is an extremal point of the set of all states on A (i.e. if it cannot be written as a convex combination of other states on A).
By the Hahn–Banach theorem, any functional on D(ℓ2) can be extended to B(ℓ2). Kadison and Singer conjectured that, for the case of pure states, this extension is unique. That is, the Kadison–Singer problem consisted in proving or disproving the following statement:
to every pure state φ on D(ℓ2) there exists a unique state on B(ℓ2) that extends φ.
This claim is in fact true.
Paving conjecture reformulation
The Kadison–Singer problem has a positive solution if and only if the following "paving conjecture" is true:
For every $\varepsilon > 0$ there exists a natural number $k$ so that the following holds: for every $n$ and every linear operator $T$ on the $n$-dimensional Hilbert space $\mathbb{C}^n$ with zeros on the diagonal there exists a partition of $\{1, \dots, n\}$ into $k$ sets $A_1, \dots, A_k$ such that

$$\left\| Q_{A_j} T Q_{A_j} \right\| \le \varepsilon \|T\| \qquad \text{for } j = 1, \dots, k.$$

Here $Q_A$ denotes the orthogonal projection on the space spanned by the standard unit vectors corresponding to the elements of $A$, so that the matrix of $Q_A T Q_A$ is obtained from the matrix of $T$ by replacing all rows and columns that don't correspond to the indices in $A$ by zeros. The matrix norm $\|\cdot\|$ is the spectral norm, i.e. the operator norm with respect to the Euclidean norm.
Note that in this statement, $k$ may only depend on $\varepsilon$, not on $n$.
Equivalent discrepancy statement
The following "discrepancy" statement, again equivalent to the Kadison–Singer problem because of previous work by Nik Weaver, was proven by Marcus/Spielman/Srivastava using a technique of random polynomials:
Suppose vectors $u_1, \dots, u_m \in \mathbb{C}^d$ are given with $\sum_{i=1}^m u_i u_i^* = I$ (the identity matrix) and $\|u_i\|^2 \le \alpha$ for all $i$. Then there exists a partition of $\{1, \dots, m\}$ into two sets $S_1$ and $S_2$ such that

$$\left\| \sum_{i \in S_j} u_i u_i^* - \frac{1}{2} I \right\| \le 5\sqrt{\alpha} \qquad \text{for } j = 1, 2.$$
This statement implies the following:
Suppose vectors $u_1, \dots, u_m \in \mathbb{R}^d$ are given with $\|u_i\|^2 \le \alpha$ for all $i$ and $\sum_{i=1}^m \langle u_i, x\rangle^2 = 1$ for every unit vector $x$.
Then there exists a partition of $\{1, \dots, m\}$ into two sets $S_1$ and $S_2$ such that, for $j = 1, 2$ and every unit vector $x$:

$$\left| \sum_{i \in S_j} \langle u_i, x\rangle^2 - \frac{1}{2} \right| \le 5\sqrt{\alpha}.$$
Here the "discrepancy" becomes visible when α is small enough: the quadratic form on the unit sphere can be split into two roughly equal pieces, i.e. pieces whose values don't differ much from 1/2 on the unit sphere.
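The discrepancy statement is easy to probe numerically (sketch added here; the dimensions and the use of a random rather than optimized partition are arbitrary choices). Rows of a matrix with orthonormal columns satisfy the identity-sum condition exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 10

# Rows v_i of an n x d matrix with orthonormal columns satisfy
# sum_i v_i v_i^T = I_d exactly; their squared norms are about d/n each.
M, _ = np.linalg.qr(rng.standard_normal((n, d)))
alpha = np.max(np.sum(M ** 2, axis=1))

# Discrepancy of one random partition into S1 / S2 (the theorem guarantees
# that SOME partition meets the 5*sqrt(alpha) bound; a random one is
# typically already close to I/2).
mask = rng.random(n) < 0.5
A = M[mask].T @ M[mask]                          # sum over S1 of v_i v_i^T
disc = np.linalg.norm(A - 0.5 * np.eye(d), 2)    # spectral norm
print(f"alpha = {alpha:.4f}, random-split discrepancy = {disc:.4f}, "
      f"bound 5*sqrt(alpha) = {5 * np.sqrt(alpha):.4f}")
```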
In this form, the theorem can be used to derive statements about certain partitions of graphs.
References
External links
Operator algebras
Quantum mechanics
Mathematical problems | Kadison–Singer problem | [
"Physics",
"Mathematics"
] | 767 | [
"Mathematical problems",
"Theoretical physics",
"Quantum mechanics"
] |
60,342,612 | https://en.wikipedia.org/wiki/Noise-induced%20order | Noise-induced order is a mathematical phenomenon appearing in the Matsumoto-Tsuda model of the Belousov-Zhabotinsky reaction.
In this model, adding noise to the system causes a transition from "chaotic" behaviour to more "ordered" behaviour; the original article was seminal in the area, generated a large number of citations, and gave birth to a line of research in applied mathematics and physics.
This phenomenon was later observed in the Belousov-Zhabotinsky reaction.
Mathematical background
Interpolating experimental data from the Belousov-Zhabotinsky reaction, Matsumoto and Tsuda introduced a one-dimensional model: a random dynamical system with uniform additive noise, driven by a piecewise-defined map on the unit interval. The explicit formula of the map is not reproduced here; it depends on three constants, two of which are fixed by matching conditions at the endpoints of the interval, while the third is chosen so that a distinguished point of the map lands on a repelling fixed point, in some way analogous to a Misiurewicz point.
This random dynamical system is simulated with different noise amplitudes using floating-point arithmetic and the Lyapunov exponent along the simulated orbits is computed; the Lyapunov exponent of this simulated system was found to transition from positive to negative as the noise amplitude grows.
The behavior of the floating point system and of the original system may differ; therefore, this is not a rigorous mathematical proof of the phenomenon.
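Since the explicit Matsumoto-Tsuda map is not reproduced above, the sketch below (added here) shows only the generic recipe for estimating a Lyapunov exponent along a simulated noisy orbit, using the logistic map as a stand-in; it does not itself exhibit noise-induced order.

```python
import numpy as np

def lyapunov(noise_amp, n_steps=200_000, seed=0):
    """Estimate the Lyapunov exponent of x -> f(x) + noise along one orbit.

    Stand-in demo: f is the logistic map with r = 4 (not the actual
    Matsumoto-Tsuda map); noise is uniform on [-A, A], with reflection
    at the interval ends to keep the orbit in [0, 1].
    """
    rng = np.random.default_rng(seed)
    r, x, acc = 4.0, 0.3, 0.0
    for _ in range(n_steps):
        acc += np.log(abs(r * (1.0 - 2.0 * x)) + 1e-300)  # log |f'(x)|
        x = r * x * (1.0 - x) + rng.uniform(-noise_amp, noise_amp)
        if x < 0.0:
            x = -x
        elif x > 1.0:
            x = 2.0 - x
    return acc / n_steps

for A in (0.0, 1e-3, 1e-2):
    print(f"noise amplitude {A:g}: lambda is about {lyapunov(A):+.3f}")
```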
A computer assisted proof of noise-induced order for the Matsumoto-Tsuda map with the parameters above was given in 2017.
In 2020 a sufficient condition for noise-induced order was given for one dimensional maps: the Lyapunov exponent for small noise sizes is positive, while the average of the logarithm of the derivative with respect to Lebesgue is negative.
See also
Self-organization
Stochastic Resonance
References
Non-equilibrium thermodynamics
Name reactions
Pattern formation | Noise-induced order | [
"Chemistry",
"Mathematics"
] | 376 | [
"Name reactions",
"Non-equilibrium thermodynamics",
"Dynamical systems"
] |
60,343,222 | https://en.wikipedia.org/wiki/Scanning%20vibrating%20electrode%20technique | Scanning vibrating electrode technique (SVET), also known as vibrating probe within the field of biology, is a scanning probe microscopy (SPM) technique which visualizes electrochemical processes at a sample. It was originally introduced in 1974 by Jaffe and Nuccitelli to investigate the electrical current densities near living cells. Starting in the 1980s Hugh Isaacs began to apply SVET to a number of different corrosion studies.
SVET measures local current density distributions in the solution above the sample of interest, to map electrochemical processes in situ as they occur. It utilizes a probe, vibrating perpendicular to the sample of interest, to enhance the measured signal. It is related to scanning ion-selective electrode technique (SIET), which can be used with SVET in corrosion studies, and scanning reference electrode technique (SRET), which is a precursor to SVET.
History
Scanning vibrating electrode technique was originally introduced to sensitively measure extracellular currents by Jaffe and Nuccitelli in 1974. Jaffe and Nuccitelli then demonstrated the ability of the technique through the measurement of the extracellular currents involved with amputated and regenerating newt limbs, developmental currents of chick embryos, and the electrical currents associated with amoeboid movement.
In corrosion, the scanning reference electrode technique (SRET) existed as the precursor to SVET, and was first introduced commercially and trademarked by Uniscan Instruments, now part of Bio-Logic Science Instruments. SRET is an in situ technique in which a reference electrode is scanned near a sample surface to map the potential distribution in the electrolyte above the sample. Using SRET it is possible to determine the anodic and cathodic sites of a corroding sample without the probe altering the corrosion process. SVET was first applied to and developed for the local investigation of corrosion processes by Hugh Isaacs.
Principle of Operation
SVET measures the currents associated with a sample in solution which has natural electrochemical activity, or which is biased to force electrochemical activity. In both cases the current radiates into solution from the active regions of the sample. In a typical SVET instrument the probe is mounted via a piezoelectric vibrator on an x,y stage. The probe is vibrated perpendicular to the plane of the sample, resulting in the measurement of an ac signal. The resulting ac signal is detected and demodulated using an input phase angle by a lock-in amplifier to produce a dc signal. The input phase angle is typically found by manually adjusting the phase input of the lock-in amplifier until there is no response; 90 degrees is then added to determine the optimum phase. The reference phase can also be found automatically by some commercial instruments. The demodulated dc signal which results can then be plotted to reflect the local activity distribution.
In SVET, the probe vibration results in a more sensitive measurement than its non-vibrating predecessors, as well as giving rise to an improvement of the signal-to-noise ratio. The probe vibration does not affect the process under study under normal experimental conditions.
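The demodulation step can be illustrated with a few lines of synthetic-signal processing (added here; the vibration frequency, amplitudes, and noise level are made-up numbers, and a real lock-in also applies low-pass filtering and phase adjustment):

```python
import numpy as np

# Sketch of the lock-in demodulation step: the probe vibration at frequency
# f turns the local potential gradient into an AC signal whose amplitude
# the lock-in recovers from noise.
fs, f, T = 100_000, 80.0, 2.0                # sample rate, vibration freq, s
t = np.arange(0, T, 1 / fs)
grad_signal = 5e-6 * np.sin(2 * np.pi * f * t)          # 5 uV AC component
noisy = grad_signal + 2e-5 * np.random.default_rng(1).standard_normal(t.size)

ref = np.sin(2 * np.pi * f * t)              # reference at the optimum phase
demodulated = 2.0 * np.mean(noisy * ref)     # factor 2 restores amplitude
print(f"recovered amplitude ~ {demodulated:.2e} V (true 5.00e-06 V)")
```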
The SVET signal is affected by a number of factors, including the probe-to-sample distance, the solution conductivity, and the SVET probe itself. The signal strength in a SVET measurement is influenced by the probe-to-sample distance: when all other variables are equal, a smaller probe-to-sample distance results in the measurement of a higher-magnitude signal. The solution conductivity also affects the signal strength: with increasing solution conductivity, the signal strength of the SVET measurement decreases.
Applications
Corrosion is a major application area for SVET. SVET is used to follow the corrosion process and provide information not possible from any other technique. In corrosion it has been used to investigate a variety of processes including, but not limited to, local corrosion, self-healing coatings, and self-assembled monolayers (SAMs). SVET has also been used to investigate the effect of different local features on the corrosion properties of a system. For example, using SVET, the influence of the grains and grain boundaries of X70 was measured. A difference in current densities existed between the grains and grain boundaries, with the SVET data suggesting the grain was anodic and the boundary relatively cathodic. Through the use of SVET it has been possible to investigate the effect of changing the aluminum spacer width on the galvanic coupling between steel and magnesium, a pairing which can be found on automobiles. Increasing the spacer width reduced the coupling between magnesium and steel. More generally, localized corrosion processes have been followed using SVET. For a variety of systems it has been possible to use SVET to follow the corrosion front as it moves across the sample over extended periods, providing insight into the corrosion mechanism. A number of groups have used SVET to analyze the efficiency of self-healing coatings, mapping the changes in surface activity over time. When SVET measurements of the bare metals are compared to the same metal with the smart coating, it can be seen that the current density is lower for the coated surface. Furthermore, when a defect is made in the smart coating, the current over the defect can be seen to decrease as the coating recovers. Mekhalif et al. have performed a number of studies on SAMs formed on different metals to investigate their corrosion inhibition using SVET. The SVET studies revealed that the bare surfaces experience corrosion, with inhomogeneous activity measured by SVET. SVET was then used to investigate the effect of modification time and exposure to corrosive solution. When a defect-free SAM was investigated, SVET showed homogeneous activity.
In the field of biology, the vibrating probe technique has been used to investigate a variety of processes. Vibrating probe measurements of lung cancer tumor cells have shown that the electric fields above the tumor cell were statistically larger than those measured over the intact epithelium, with the tumor cell behaving as the anode. Furthermore, it was noted that the application of an electric field resulted in the migration of the tumor cells. Using vibrating probe, the electrical currents involved in the biological processes occurring at leaves have been measured. Through vibrating probe it has been possible to correlate electrical currents with the stomatal aperture, suggesting that stomatal opening was related to proton efflux. Based on this work, further vibrating probe measurements also indicated a relationship between the photosynthetic activity of a plant and the flow of electrical current on its leaf surfaces, with the measured current changing when it was exposed to different types of light and dark. As a final example, the vibrating probe technique has been used in the investigation of currents associated with wounding in plants and animals. A vibrating probe measurement of maize roots found that large inward currents were associated with wounding of the root, with the current decreasing in magnitude away from the center of the wound. When similar experiments were performed on rat skin wounds, large outward currents were measured at the wound, with the strongest current measured at the wound edge. The ability of the vibrating probe to investigate wounding has even led to the development of a hand-held prototype vibrating probe device for use.
SVET has been used to investigate the photoconductive nature of semiconductor materials, by following changes in current density related to photoelectrochemical reactions. Using SVET, the lithium/organic electrolyte interface, as in lithium battery systems, has also been investigated.
Although SVET has almost exclusively been applied for the measurement of samples in aqueous environments, its application in non-aqueous environments has recently been demonstrated by Bastos et al.
References
Electric current
Measuring instruments | Scanning vibrating electrode technique | [
"Physics",
"Technology",
"Engineering"
] | 1,542 | [
"Electric current",
"Wikipedia categories named after physical quantities",
"Physical quantities",
"Measuring instruments"
] |
55,527,967 | https://en.wikipedia.org/wiki/Geostationary%20Carbon%20Cycle%20Observatory | Geostationary Carbon Cycle Observatory (GeoCarb) was an intended NASA Venture-class Earth observation mission that was designed to measure the carbon cycle.
GeoCarb was to be stationed over the Americas and make observations between 50° North and South latitudes. Its primary mission was to conduct observations of vegetation health and stress, as well as to observe the processes that govern the exchange of carbon dioxide, methane, and carbon monoxide between the land, atmosphere, and ocean.
The mission was selected by NASA in 2016.
GeoCarb was originally intended to be mounted on a commercial geostationary communications satellite operated by SES S.A., but a lack of hosting opportunities drove NASA, in February 2022, to seek a standalone spacecraft to carry the instrument.
On 29 November 2022, NASA announced the cancellation of development of the GeoCarb mission, citing cost overruns and the availability of other options to measure and observe greenhouse gases, like the EMIT instrument on the ISS and the upcoming Earth System Observatory.
GeoCarb was a joint collaboration between NASA's Ames Research Center, Goddard Space Flight Center, and Jet Propulsion Laboratory; the University of Oklahoma; Colorado State University; the Lockheed Martin Advanced Technology Center of Palo Alto, California; and SES Government Solutions (now SES Space & Defense) of Reston, Virginia.
GeoCarb instrument
"The GeoCarb instrument consists of the aperture assembly, telescope, spectrometer, and electronics boxes. It is a four channel near-infrared, single-slit imaging spectrograph optimized to deduce concentrations of carbon dioxide, carbon monoxide and methane, and Solar-Induced Fluorescence (SIF) from Geostationary Orbit.
The instrument is built by Lockheed Martin Advanced Technology Center."
See also
Orbiting Carbon Observatory
Orbiting Carbon Observatory 2
Space-based measurements of carbon dioxide
References
Further reading
External links
GeoCarb at NASA's Earth Observing System
GeoCarb instrument images and schematics of the GeoCarb instrument
GeoCarb at World Meteorological Organization's OSCAR
NASA programs
Spectrometers
Spacecraft instruments
Satellite meteorology
Greenhouse gases
Cancelled spacecraft | Geostationary Carbon Cycle Observatory | [
"Physics",
"Chemistry",
"Environmental_science"
] | 424 | [
"Spectrum (physical sciences)",
"Environmental chemistry",
"Greenhouse gases",
"Spectrometers",
"Spectroscopy"
] |
55,535,501 | https://en.wikipedia.org/wiki/Alpha%20nuclide | An alpha nuclide is a nuclide that consists of an integer number of alpha particles. Alpha nuclides have equal, even numbers of protons and neutrons; they are important in stellar nucleosynthesis since the energetic environment within stars is amenable to fusion of alpha particles into heavier nuclei. Stable alpha nuclides, and stable decay products of radioactive alpha nuclides, are some of the most common metals in the universe.
Alpha nuclide is also shorthand for alpha radionuclide, referring to those radioactive isotopes that undergo alpha decay and thereby emit alpha particles.
List of alpha nuclides
The entries for 36Ar and 40Ca are theoretical: they would release energy on decay, but the process has never been observed, and the half-lives are probably extremely long. Likewise, the chains for masses 64, 84, 92, and 96 theoretically can continue one more step by double electron capture (to 64Ni, 84Kr, 92Zr, and 96Mo respectively), but this has never been observed.
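The article's table of alpha nuclides is not reproduced above; as a stand-in illustration (added here), the first few candidates can be enumerated directly, since a nucleus built from k alpha particles has Z = N = 2k and mass number 4k.

```python
# Enumerate the first ten candidate alpha nuclides: k alpha particles
# give Z = N = 2k and mass number A = 4k.
SYMBOLS = {2: "He", 4: "Be", 6: "C", 8: "O", 10: "Ne",
           12: "Mg", 14: "Si", 16: "S", 18: "Ar", 20: "Ca"}
for k in range(1, 11):
    z = 2 * k
    print(f"{4 * k:>3}{SYMBOLS[z]:<2}  (Z = N = {z})")
```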
The heaviest known alpha nuclide is xenon-108.
References
Nuclear physics | Alpha nuclide | [
"Physics"
] | 236 | [
"Nuclear physics"
] |
55,538,688 | https://en.wikipedia.org/wiki/MRI%20pulse%20sequence | An MRI pulse sequence in magnetic resonance imaging (MRI) is a particular setting of pulse sequences and pulsed field gradients, resulting in a particular image appearance.
A multiparametric MRI is a combination of two or more sequences, possibly including other specialized MRI configurations such as spectroscopy.
Spin echo
T1 and T2
Each tissue returns to its equilibrium state after excitation by the independent relaxation processes of T1 (spin-lattice; that is, magnetization in the same direction as the static magnetic field) and T2 (spin-spin; transverse to the static magnetic field).
To create a T1-weighted image, magnetization is allowed to recover before measuring the MR signal by changing the repetition time (TR). This image weighting is useful for assessing the cerebral cortex, identifying fatty tissue, characterizing focal liver lesions, and in general, obtaining morphological information, as well as for post-contrast imaging.
To create a T2-weighted image, magnetization is allowed to decay before measuring the MR signal by changing the echo time (TE). This image weighting is useful for detecting edema and inflammation, revealing white matter lesions, and assessing zonal anatomy in the prostate and uterus.
The standard display of MRI images is to represent fluid characteristics in black and white images, where different tissues turn out as follows: on T1-weighted images, fluid such as cerebrospinal fluid (CSF) appears dark and fat appears bright, whereas on T2-weighted images fluid appears bright.
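These weightings follow from the standard spin-echo signal approximation S ≈ PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The sketch below (added here) uses rough 1.5 T tissue parameters, which should be treated as illustrative:

```python
import numpy as np

# Spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
# Tissue parameters are rough 1.5 T figures, for illustration only.
tissues = {                       # name: (PD, T1 in ms, T2 in ms)
    "white matter": (0.70, 600.0, 80.0),
    "CSF":          (1.00, 4000.0, 2000.0),
}

def signal(pd, t1, t2, tr, te):
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

settings = {"T1-weighted": (500.0, 15.0), "T2-weighted": (4000.0, 100.0)}
for name, (tr, te) in settings.items():
    s = {t: round(signal(*p, tr, te), 3) for t, p in tissues.items()}
    print(name, s)   # T1w: CSF dark; T2w: CSF bright
```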
Proton density
Proton density (PD)- weighted images are created by having a long repetition time (TR) and a short echo time (TE). On images of the brain, this sequence has a more pronounced distinction between grey matter (bright) and white matter (darker grey), but with little contrast between brain and CSF. It is very useful for the detection of arthropathy and injury.
Gradient echo
A gradient echo sequence does not use a 180-degree RF pulse to make the spins of particles coherent. Instead, it uses magnetic gradients to manipulate the spins, allowing the spins to dephase and rephase when required. After an excitation pulse, the spins are dephased and no signal is produced, because the spins are not coherent. When the spins are rephased, they become coherent, and thus a signal (or "echo") is generated to form images. Unlike spin echo, gradient echo does not need to wait for the transverse magnetisation to decay completely before initiating another sequence; it can therefore use very short repetition times (TR) and acquire images in a short time. After the echo is formed, some transverse magnetisation remains. Manipulating the gradients during this time will produce images with different contrast. There are three main methods of manipulating contrast at this stage: steady-state free precession (SSFP), which does not spoil the remaining transverse magnetisation but attempts to recover it (thus producing T2-weighted images); gradient spoiling, which averages out the transverse magnetisation (thus producing mixed T1- and T2-weighted images); and RF spoiling, which varies the phase of the RF pulse to eliminate the transverse magnetisation, thus producing purely T1-weighted images.
For comparison purposes, the repetition time of a gradient echo sequence is of the order of 3 milliseconds, versus about 30 ms of a spin echo sequence.
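For the RF-spoiled case, the steady state is commonly described by the FLASH signal equation, with the signal-maximizing flip angle given by the Ernst angle, αE = arccos(e^(−TR/T1)). A minimal sketch, with illustrative (assumed) tissue and sequence values:

```python
import numpy as np

def spoiled_gre_signal(flip_deg, tr, te, t1, t2_star, m0=1.0):
    """Steady-state RF-spoiled gradient-echo (FLASH) signal; times in ms.
    S = M0 * sin(a) * (1 - E1) / (1 - E1 cos(a)) * exp(-TE/T2*), E1 = exp(-TR/T1)."""
    a = np.radians(flip_deg)
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a)) * np.exp(-te / t2_star)

t1, t2_star, tr, te = 900.0, 50.0, 5.0, 2.0      # illustrative values (ms)
ernst = np.degrees(np.arccos(np.exp(-tr / t1)))  # flip angle that maximizes signal
print(f"Ernst angle: {ernst:.1f} deg")
for flip in (5, 10, 20, 40):
    print(flip, round(spoiled_gre_signal(flip, tr, te, t1, t2_star), 4))
```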
Inversion recovery
Inversion recovery is an MRI sequence that provides high contrast between tissue and lesion. It can be used to provide high T1 weighted image, high T2 weighted image, and to suppress the signals from fat, blood, or cerebrospinal fluid (CSF).
Diffusion weighted
Diffusion MRI measures the diffusion of water molecules in biological tissues. Clinically, diffusion MRI is useful for the diagnoses of conditions (e.g., stroke) or neurological disorders (e.g., multiple sclerosis), and helps better understand the connectivity of white matter axons in the central nervous system. In an isotropic medium (inside a glass of water for example), water molecules naturally move randomly according to turbulence and Brownian motion. In biological tissues however, where the Reynolds number is low enough for laminar flow, the diffusion may be anisotropic. For example, a molecule inside the axon of a neuron has a low probability of crossing the myelin membrane. Therefore, the molecule moves principally along the axis of the neural fiber. If it is known that molecules in a particular voxel diffuse principally in one direction, the assumption can be made that the majority of the fibers in this area are parallel to that direction.
The recent development of diffusion tensor imaging (DTI) enables diffusion to be measured in multiple directions, and the fractional anisotropy in each direction to be calculated for each voxel. This enables researchers to make brain maps of fiber directions to examine the connectivity of different regions in the brain (using tractography) or to examine areas of neural degeneration and demyelination in diseases like multiple sclerosis.
Another application of diffusion MRI is diffusion-weighted imaging (DWI). Following an ischemic stroke, DWI is highly sensitive to the changes occurring in the lesion. It is speculated that increases in restriction (barriers) to water diffusion, as a result of cytotoxic edema (cellular swelling), are responsible for the increase in signal on a DWI scan. The DWI enhancement appears within 5–10 minutes of the onset of stroke symptoms (as compared to computed tomography, which often does not detect changes of acute infarct for up to 4–6 hours) and remains for up to two weeks. Coupled with imaging of cerebral perfusion, researchers can highlight regions of "perfusion/diffusion mismatch" that may indicate regions capable of salvage by reperfusion therapy.
Like many other specialized applications, this technique is usually coupled with a fast image acquisition sequence, such as echo planar imaging sequence.
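Quantitatively, DWI is often summarized by the apparent diffusion coefficient (ADC) under the mono-exponential model S(b) = S0 · e^(−b·ADC), which can be solved pixel-wise from two acquisitions at different b-values. A toy sketch with invented pixel values:

```python
import numpy as np

# Mono-exponential diffusion model: S(b) = S0 * exp(-b * ADC).
# Illustrative 2x2 "images" at b = 0 and b = 1000 s/mm^2 (values invented).
b = 1000.0
s0 = np.array([[1000.0, 1000.0], [1000.0, 1000.0]])
sb = np.array([[ 450.0,  440.0], [ 670.0,  455.0]])  # [1, 0] retains more signal

# Pixel-wise apparent diffusion coefficient in mm^2/s.
adc = np.log(s0 / sb) / b
print(adc)
# Normal brain tissue gives ADC on the order of 0.8e-3 mm^2/s; the pixel with
# ADC near 0.4e-3 mm^2/s mimics restricted diffusion (bright on DWI), as seen
# in acute ischemia.
```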
Perfusion weighted
Perfusion-weighted imaging (PWI) is performed by 3 main techniques:
Dynamic susceptibility contrast (DSC): Gadolinium contrast is injected, and rapid repeated imaging (generally gradient-echo echo-planar T2 weighted) quantifies susceptibility-induced signal loss.
Dynamic contrast enhanced (DCE): Measuring shortening of the spin–lattice relaxation (T1) induced by a gadolinium contrast bolus.
Arterial spin labelling (ASL): Magnetic labeling of arterial blood below the imaging slab, without the need of gadolinium contrast.
The acquired data is then postprocessed to obtain perfusion maps with different parameters, such as BV (blood volume), BF (blood flow), MTT (mean transit time) and TTP (time to peak).
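These parameters are linked by the central volume principle, MTT = BV / BF (with consistent units), so any two of them determine the third. A minimal sketch:

```python
# Central volume principle: MTT = BV / BF.
# BV in mL per 100 g of tissue, BF in mL per 100 g per minute -> MTT in seconds.
def mean_transit_time(bv_ml_per_100g, bf_ml_per_100g_min):
    return 60.0 * bv_ml_per_100g / bf_ml_per_100g_min

print(mean_transit_time(4.0, 50.0))  # ~4.8 s, a typical gray-matter order of magnitude
print(mean_transit_time(4.0, 20.0))  # reduced flow at preserved volume prolongs MTT
```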
In cerebral infarction, the penumbra has decreased perfusion. Another MRI sequence, diffusion-weighted MRI, estimates the amount of tissue that is already necrotic, and the combination of those sequences can therefore be used to estimate the amount of brain tissue that is salvageable by thrombolysis and/or thrombectomy.
Functional MRI
Functional MRI (fMRI) measures signal changes in the brain that are due to changing neural activity. It is used to understand how different parts of the brain respond to external stimuli or passive activity in a resting state, and has applications in behavioral and cognitive research, and in planning neurosurgery of eloquent brain areas. Researchers use statistical methods to construct a 3-D parametric map of the brain indicating the regions of the cortex that demonstrate a significant change in activity in response to the task. Compared to anatomical T1W imaging, the brain is scanned at lower spatial resolution but at a higher temporal resolution (typically once every 2–3 seconds). Increases in neural activity cause changes in the MR signal via T2* changes; this mechanism is referred to as the BOLD (blood-oxygen-level dependent) effect. Increased neural activity causes an increased demand for oxygen, and the vascular system actually overcompensates for this, increasing the amount of oxygenated hemoglobin relative to deoxygenated hemoglobin. Because deoxygenated hemoglobin attenuates the MR signal, the vascular response leads to a signal increase that is related to the neural activity. The precise nature of the relationship between neural activity and the BOLD signal is a subject of current research. The BOLD effect also allows for the generation of high resolution 3D maps of the venous vasculature within neural tissue.
While BOLD signal analysis is the most common method employed for neuroscience studies in human subjects, the flexible nature of MR imaging provides means to sensitize the signal to other aspects of the blood supply. Alternative techniques employ arterial spin labeling (ASL) or weighting the MRI signal by cerebral blood flow (CBF) and cerebral blood volume (CBV). The CBV method requires injection of a class of MRI contrast agents that are now in human clinical trials. Because this method has been shown to be far more sensitive than the BOLD technique in preclinical studies, it may potentially expand the role of fMRI in clinical applications. The CBF method provides more quantitative information than the BOLD signal, albeit at a significant loss of detection sensitivity.
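As a toy illustration of the statistical mapping step, the sketch below regresses synthetic voxel time series on a block-design regressor. The boxcar timing, the moving-average stand-in for the haemodynamic response, and the noise levels are all invented for illustration; this is not a validated fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Block-design task regressor: 20 s on / 20 s off, sampled every TR = 2 s.
tr, n = 2.0, 120
task = (np.arange(n) * tr % 40 >= 20).astype(float)

# Crude stand-in for haemodynamic blurring: ~6 s moving average.
hrf = np.ones(3) / 3.0
regressor = np.convolve(task, hrf, mode="same")

# Synthetic voxel time series: one responsive voxel, one pure-noise voxel.
active = 0.8 * regressor + rng.normal(0, 0.5, n)
silent = rng.normal(0, 0.5, n)

# Least-squares fit of signal = beta * regressor + intercept per voxel,
# the core of the general-linear-model analysis described above.
X = np.column_stack([regressor, np.ones(n)])
for name, y in (("active", active), ("silent", silent)):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(name, round(beta[0], 2))  # large beta only for the responsive voxel
```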
Magnetic resonance angiography
Magnetic resonance angiography (MRA) is a group of techniques used to image blood vessels. Magnetic resonance angiography is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off").
Phase contrast
Phase contrast MRI (PC-MRI) is used to measure flow velocities in the body. It is used mainly to measure blood flow in the heart and throughout the body. PC-MRI may be considered a method of magnetic resonance velocimetry. Since modern PC-MRI typically is time-resolved, it also may be referred to as 4-D imaging (three spatial dimensions plus time).
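In PC-MRI the measured phase difference maps linearly to velocity through the operator-chosen velocity-encoding parameter VENC, via the standard relation v = VENC · Δφ / π; phases beyond ±π alias (velocity wrap-around). A minimal sketch:

```python
import numpy as np

def pc_velocity(delta_phase_rad, venc_cm_s):
    """Map a phase-contrast phase difference to velocity: v = VENC * dphi / pi.
    Phase differences outside (-pi, pi] would alias in a real acquisition."""
    return venc_cm_s * delta_phase_rad / np.pi

print(pc_velocity(np.pi / 2, venc_cm_s=150.0))  # 75 cm/s
print(pc_velocity(np.pi,     venc_cm_s=150.0))  # 150 cm/s, the VENC limit
```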
Susceptibility weighted imaging
Susceptibility-weighted imaging (SWI) is a new type of contrast in MRI different from spin density, T1, or T2 imaging. This method exploits the susceptibility differences between tissues and uses a fully velocity-compensated, three-dimensional, RF-spoiled, high-resolution, 3D-gradient echo scan. This special data acquisition and image processing produces an enhanced contrast magnitude image very sensitive to venous blood, hemorrhage and iron storage. It is used to enhance the detection and diagnosis of tumors, vascular and neurovascular diseases (stroke and hemorrhage), multiple sclerosis, Alzheimer's, and also detects traumatic brain injuries that may not be diagnosed using other methods.
Magnetization transfer
Magnetization transfer (MT) is a technique to enhance image contrast in certain applications of MRI.
Bound protons are associated with proteins and as they have a very short T2 decay they do not normally contribute to image contrast. However, because these protons have a broad resonance peak they can be excited by a radiofrequency pulse that has no effect on free protons. Their excitation increases image contrast by transfer of saturated spins from the bound pool into the free pool, thereby reducing the signal of free water. This homonuclear magnetization transfer provides an indirect measurement of macromolecular content in tissue. Implementation of homonuclear magnetization transfer involves choosing suitable frequency offsets and pulse shapes to saturate the bound spins sufficiently strongly, within the safety limits of specific absorption rate for MRI.
The most common use of this technique is for suppression of background signal in time of flight MR angiography. There are also applications in neuroimaging particularly in the characterization of white matter lesions in multiple sclerosis.
Fat suppression
Fat suppression is useful for example to distinguish active inflammation in the intestines from fat deposition such as can be caused by long-standing (but possibly inactive) inflammatory bowel disease, but also obesity, chemotherapy and celiac disease. Without fat suppression techniques, fat and fluid will have similar signal intensities on fast spin-echo sequences.
Techniques to suppress fat on MRI mainly include:
Identifying fat by the chemical shift of its atoms, causing different time-dependent phase shifts compared to water.
Frequency-selective saturation of the spectral peak of fat by a "fat sat" pulse before imaging.
Short tau inversion recovery (STIR), a T1-dependent method (see the sketch after this list)
Spectral presaturation with inversion recovery (SPIR)
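For STIR, the inversion time that nulls fat follows from the inversion-recovery signal: TI = T1 · ln 2 when TR ≫ T1, or TI = T1 · ln(2 / (1 + e^(−TR/T1))) for finite TR. A small sketch; the fat T1 of roughly 250 ms at 1.5 T is an illustrative textbook value, not a calibrated constant.

```python
import numpy as np

def null_ti(t1_ms, tr_ms=None):
    """Inversion time (ms) that nulls a species in inversion recovery.
    Solves Mz(TI) = M0 * (1 - 2 exp(-TI/T1) + exp(-TR/T1)) = 0."""
    if tr_ms is None:                       # TR >> T1 limit
        return t1_ms * np.log(2.0)
    return t1_ms * np.log(2.0 / (1.0 + np.exp(-tr_ms / t1_ms)))

print(round(null_ti(250.0), 1))               # fat at ~1.5 T: about 173 ms
print(round(null_ti(250.0, tr_ms=2500.0), 1))  # finite TR shortens the null TI slightly
```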
Neuromelanin imaging
This method exploits the paramagnetic properties of neuromelanin and can be used to visualize the substantia nigra and the locus coeruleus. It is used to detect the atrophy of these nuclei in Parkinson's disease and other parkinsonisms, and also detects signal intensity changes in major depressive disorder and schizophrenia.
Uncommon and experimental sequences
The following sequences are not commonly used clinically, and/or are at an experimental stage.
T1 rho (T1ρ)
T1 rho (T1ρ) is an experimental MRI sequence that may be used in musculoskeletal imaging. It does not yet have widespread use.
Molecules have a kinetic energy that is a function of the temperature and is expressed as translational and rotational motions, and by collisions between molecules. The moving dipoles disturb the magnetic field but are often extremely rapid so that the average effect over a long time-scale may be zero. However, depending on the time-scale, the interactions between the dipoles do not always average away. At the slowest extreme the interaction time is effectively infinite and occurs where there are large, stationary field disturbances (e.g., a metallic implant). In this case the loss of coherence is described as a "static dephasing". T2* is a measure of the loss of coherence in an ensemble of spins that includes all interactions (including static dephasing). T2 is a measure of the loss of coherence that excludes static dephasing, using an RF pulse to reverse the slowest types of dipolar interaction. There is in fact a continuum of interaction time-scales in a given biological sample, and the properties of the refocusing RF pulse can be tuned to refocus more than just static dephasing. In general, the rate of decay of an ensemble of spins is a function of the interaction times and also the power of the RF pulse. This type of decay, occurring under the influence of RF, is known as T1ρ. It is similar to T2 decay but with some slower dipolar interactions refocused, as well as static interactions, hence T1ρ≥T2.
Others
Saturation recovery sequences are rarely used, but can measure spin-lattice relaxation time (T1) more quickly than an inversion recovery pulse sequence.
Double-oscillating-diffusion-encoding (DODE) and double diffusion encoding (DDE) imaging are specific forms of MRI diffusion imaging, which can be used to measure diameters and lengths of axon pores.
References
Magnetic resonance imaging
Nuclear magnetic resonance | MRI pulse sequence | [
"Physics",
"Chemistry"
] | 3,160 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging",
"Nuclear physics"
] |
53,985,417 | https://en.wikipedia.org/wiki/C19orf67 | UPF0575 protein C19orf67 is a protein which in humans (Homo sapiens) is encoded by the C19orf67 gene. Orthologs of C19orf67 are found in many mammals, some reptiles, and most jawed fish. The protein is expressed at low levels throughout the body with the exception of the testis and breast tissue. Where it is expressed, the protein is predicted to be localized in the nucleus to carry out a function. The highly conserved and slowly evolving DUF3314 region is predicted to form numerous alpha helices and may be vital to the function of the protein.
Gene
In humans, C19orf67 is located on the minus strand of Chromosome 19 at 19p13.12 and spans 4,163 base pairs (bp).
The following genes are found near C19orf67 on the positive strand:
MISP3
Eukaryotic Translation Elongation Factor 1 Delta Pseudogene 1 (EEF1DP1)
MicroRNA 1199 (MIR1199)
The following genes are found near C19orf67 on the minus strand:
catalytic subunit α of protein kinase A (PRKACA)
Sterile Alpha Motif Domain Containing 1 (SAMD1)
mRNA transcript
C19orf67 has three transcript variants, although the second and third variants are only predicted by an Ensembl analysis and not experimentally confirmed. Only the first two variants are protein-coding transcripts.
The first transcript consists of 1,447 bp, while the second and third have 751 bp and 656 bp respectively. The mature mRNA of the longest isoform is made up of 6 exons.
Protein
The unmodified protein has 358 amino acids, a predicted molecular weight of 40 kDa, a charge of −11, and an isoelectric point of 5. 44 prolines were found along the protein, making up 12.3% of the total amino acid sequence. The proline content by percentage was found to be higher in UPF0575 than in 95% of comparable human proteins. However, asparagine is less abundant in the protein than in the human proteome on average.
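Properties of this kind are routinely computed from the primary sequence; the sketch below shows how with Biopython's ProtParam module. The sequence shown is a short placeholder, not the real C19orf67 sequence, so the printed numbers will not match the figures quoted above unless the actual 358-residue sequence (e.g. from NCBI or UniProt) is substituted.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence for illustration only -- substitute the real
# 358-residue C19orf67 sequence to reproduce the quoted statistics.
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

pa = ProteinAnalysis(seq)
print("length:", len(seq))
print("MW (Da):", round(pa.molecular_weight(), 1))       # predicted molecular weight
print("pI:", round(pa.isoelectric_point(), 2))           # predicted isoelectric point
print("% proline:", round(100 * seq.count("P") / len(seq), 1))
```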
Domains
Both isoforms contain DUF3314. Although its function is not yet well understood, conservation among numerous taxa indicates that it may be important to the function of the protein. The first isoform has a non-repeating proline-rich region from amino acids 12 to 80. The function of this region is not well understood, but it may be involved in preventing helices from forming due to the rigid structure of proline.
Secondary structure
A cross-program consensus predicted that UPF0575 protein C19orf67 forms four alpha helices and two beta sheets.
Post-translational modifications
Acetylation is likely to occur at the N-terminus, while the C-terminus is unlikely to be modified. O-glycosylation is predicted to occur at amino acids 18 and 67. Several possible phosphorylation sites, along with their associated kinases, have also been identified.
Subcellular localization
UPF0575 protein C19orf67 is expected to be targeted in the nucleus, specifically the nucleolus.
Expression and regulation
Regulation of gene expression
The promoter region is predicted to start 1,303 bp upstream from the 5' UTR and consist of 1,447 bp, causing 144 bp to overlap with the 5' UTR.
A number of transcription factors such as FOXP1, SOX5, SOX6, SOX4, and MZF1 are likely to bind the promoter, often acting to downregulate transcription. In the expression of other genes, these transcription factors typically play a role in the development of various tissues such as the heart and lung, and also take part in the differentiation of early embryonic cells and red blood cells.
Transcriptional regulation
It is suspected that the mature mRNA of C19orf67 forms a stem loop on the 3' UTR that spans from 1,296bp to 1,350bp of the transcript.
Tissue expression
In humans, UPF0575 protein C19orf67 is highly expressed in the testis and breast tissue, although it is also expressed at low levels in the stomach, cerebral cortex, thyroid gland, and salivary gland.
The protein product is less abundant than most of the human proteome in many tissues.
Homology
Paralogs
There are no known paralogs of UPF0575 protein C19orf67, nor are there any known paralogous domains of DUF3314 found.
Orthologs
Orthologs of UPF0575 protein C19orf67 were found in a wide variety of mammals, being particularly well represented in rodents and primates. Orthologs were also found in each reptilian order, but were much scarcer than in mammals. Orthologs were found in a high number and variety of ray-finned fishes, in fewer cartilaginous fishes, and in no jawless fishes.
No orthologs are known to be present in birds or amphibians. No invertebrates, fungi, bacteria, or lower species have known orthologs.
BLAT and BLAST were used to assemble a sample of the ortholog space for UPF0575 protein C19orf67. This sample is not a complete list of orthologs; it is meant to display the span over which orthologs exist and the diversity of those species.
Molecular evolution
UPF0575 protein C19orf67 consists of one family and there are no apparent duplications throughout the evolution of UPF0575 protein C19orf67.
The DUF3314 region of the gene appears to have diverged at a slower rate relative to the rest of the gene, indicating that it may have been undergoing purifying selection because it plays an important role in the protein.
Clinical significance
In one case study, C19orf67 was one of 29 genes on chromosome 19 lost due to deletions caused by chromosomal rearrangements. The rearrangements resulted in neurodevelopmental issues and behavioral abnormalities, although it is not known whether C19orf67 played an active role in the resulting phenotypes. In a different study, when a portion of chromosome 19 that also included C19orf67 was deleted, developmental issues such as intrauterine growth restriction, premature birth, and muscular hypotonia occurred.
C19orf67, among many other genes, may be used as a possible marker to detect mature beta cells.
References
Proteins | C19orf67 | [
"Chemistry"
] | 1,404 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
53,986,905 | https://en.wikipedia.org/wiki/Taylor%20scraping%20flow | In fluid dynamics, Taylor scraping flow is a type of two-dimensional corner flow occurring when one of the walls slides over the other with constant velocity, named after G. I. Taylor.
Flow description
Consider a plane wall located at $\theta = 0$ in the cylindrical coordinates $(r, \theta)$, moving with a constant velocity $U$ towards the left. Consider another plane wall (scraper), at an inclined position, making an angle $\alpha$ with the positive $x$ direction, and let the point of intersection be at the origin. This description is equivalent to moving the scraper towards the right with velocity $U$. The problem is singular at $r = 0$ because at the origin the velocities are discontinuous, thus the velocity gradient is infinite there.
Taylor noticed that the inertial terms are negligible as long as the region of interest is within $r \ll \nu/U$ (or, equivalently, Reynolds number $Re = Ur/\nu \ll 1$), thus within that region the flow is essentially a Stokes flow. For example, George Batchelor gives a typical scale of this region for a lubricating oil. Then for the two-dimensional planar problem, the equation is

$$\nabla^4 \psi = 0, \qquad \mathbf{u} = \left(\frac{1}{r}\frac{\partial \psi}{\partial \theta},\ -\frac{\partial \psi}{\partial r}\right),$$

where $\mathbf{u}$ is the velocity field and $\psi$ is the stream function. The boundary conditions are

$$u_r(r,0) = -U, \quad u_\theta(r,0) = 0, \quad u_r(r,\alpha) = 0, \quad u_\theta(r,\alpha) = 0.$$
Solution
Attempting a separable solution of the form $\psi = U r f(\theta)$ reduces the problem to

$$f'''' + 2 f'' + f = 0,$$

with boundary conditions

$$f(0) = 0, \quad f'(0) = -1, \quad f(\alpha) = 0, \quad f'(\alpha) = 0.$$

The solution is

$$f(\theta) = \frac{\theta \sin\alpha \sin(\alpha - \theta) - \alpha(\alpha - \theta)\sin\theta}{\alpha^2 - \sin^2\alpha}.$$

Therefore, the velocity field is

$$u_r = U f'(\theta), \qquad u_\theta = -U f(\theta).$$
Pressure can be obtained through integration of the momentum equation

$$\nabla p = \mu \nabla^2 \mathbf{u},$$

which gives

$$p - p_\infty = -\frac{\mu U}{r}\left[f'(\theta) + f'''(\theta)\right] = \frac{2\mu U}{r}\,\frac{(\alpha - \sin\alpha\cos\alpha)\sin\theta + \sin^2\alpha\,\cos\theta}{\alpha^2 - \sin^2\alpha}.$$
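A short numerical check of the solution reconstructed above, verifying the boundary conditions and evaluating the velocity field for a 60° scraper:

```python
import numpy as np

def f(theta, alpha):
    """Angular part of the stream function psi = U r f(theta)."""
    k = alpha**2 - np.sin(alpha) ** 2
    return (theta * np.sin(alpha) * np.sin(alpha - theta)
            - alpha * (alpha - theta) * np.sin(theta)) / k

def fprime(theta, alpha, h=1e-6):
    # A numerical derivative suffices for sanity-checking the boundary conditions.
    return (f(theta + h, alpha) - f(theta - h, alpha)) / (2 * h)

alpha = np.pi / 3  # 60-degree scraper
# Expected: f(0) = f(alpha) = 0 (walls are streamlines),
#           f'(0) = -1 (moving wall), f'(alpha) = 0 (stationary scraper).
print(f(0.0, alpha), f(alpha, alpha))
print(fprime(0.0, alpha), fprime(alpha, alpha))

# Velocity components across the wedge; note they depend on theta only,
# so the speed does not decay with distance from the corner.
U = 1.0
theta = np.linspace(0.0, alpha, 5)
u_r = U * np.array([fprime(t, alpha) for t in theta])
u_theta = -U * f(theta, alpha)
print(u_r.round(3), u_theta.round(3))
```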
Stresses on the scraper
The tangential stress and the normal stress on the scraper ($\theta = \alpha$) due to pressure and viscous forces are

$$\sigma_{r\theta}\big|_{\theta=\alpha} = \frac{2\mu U}{r}\,\frac{\alpha\cos\alpha - \sin\alpha}{\alpha^2 - \sin^2\alpha}, \qquad \sigma_{\theta\theta}\big|_{\theta=\alpha} = -(p - p_\infty)\big|_{\theta=\alpha} = -\frac{2\mu U}{r}\,\frac{\alpha\sin\alpha}{\alpha^2 - \sin^2\alpha}.$$

The same scraper stresses, resolved according to Cartesian coordinates (parallel and perpendicular to the lower plate, i.e., $\theta = 0$), follow from the usual rotation of the traction components:

$$\sigma_x = \sigma_{r\theta}\cos\alpha - \sigma_{\theta\theta}\sin\alpha, \qquad \sigma_y = \sigma_{r\theta}\sin\alpha + \sigma_{\theta\theta}\cos\alpha.$$
As noted earlier, all the stresses become infinite at $r = 0$, because the velocity gradient is infinite there. In real life, there will be a huge pressure at the point, which depends on the geometry of the contact. The stresses are shown in the figure as given in Taylor's original paper.
The stress in the direction parallel to the lower wall decreases as $\alpha$ increases, and reaches its minimum value at $\alpha = \pi$. Taylor says: "The most interesting and perhaps unexpected feature of the calculations is that $\sigma_x$ does not change sign in the range $0 < \alpha < \pi$. In the range $\tfrac{1}{2}\pi < \alpha < \pi$ the contribution to $\sigma_x$ due to normal stress is of opposite sign to that due to tangential stress, but the latter is the greater. The palette knives used by artists for removing paint from their palettes are very flexible scrapers. They can therefore only be used at such an angle that $\sigma_x$ is small and as will be seen in the figure this occurs only when $\alpha$ is nearly $180°$. In fact artists instinctively hold their palette knives in this position." Further he adds "A plasterer on the other hand holds a smoothing tool so that $\alpha$ is small. In that way he can get the large values of $\sigma_x$ which are needed in forcing plaster from protuberances to hollows."
Scraping a power-law fluid
Since scraping applications are important for non-Newtonian fluids (for example, scraping paint, nail polish, cream, butter, honey, etc.), it is essential to consider this case. The analysis was carried out by J. Riedler and Wilhelm Schneider in 1983, and they were able to obtain self-similar solutions for power-law fluids satisfying the relation for the apparent viscosity

$$\mu_{\mathrm{app}} = m\,\dot\gamma^{\,n-1},$$

where $m$ and $n$ are constants. The solution for the stream function of the flow created by the plate moving towards the right retains the self-similar form $\psi = U r f(\theta)$, with an angular function determined by the power-law exponent through the root of a transcendental relation. It can be verified that this solution reduces to that of Taylor's for Newtonian fluids, i.e., when $n = 1$.
References
Fluid dynamics
Flow regimes | Taylor scraping flow | [
"Chemistry",
"Engineering"
] | 704 | [
"Piping",
"Chemical engineering",
"Flow regimes",
"Fluid dynamics"
] |
53,989,605 | https://en.wikipedia.org/wiki/Polyrotaxane | Polyrotaxane is a type of mechanically interlocked molecule consisting of strings and rings, in which multiple rings are threaded onto a molecular axle and prevented from dethreading by two bulky end groups. As oligomeric or polymeric species of rotaxanes, polyrotaxanes are also capable of converting energy input to molecular movements because the ring motions can be controlled by external stimulus. Polyrotaxanes have attracted much attention for decades, because they can help build functional molecular machines with complicated molecular structure.
Although there are no covalent bonds between the axles and rings, polyrotaxanes are stable because a high free energy of activation (Gibbs energy) must be overcome to withdraw the rings from the axles. Also, the rings are capable of shuttling along and rotating around the axles freely, which gives polyrotaxanes a large number of internal degrees of freedom. Due to this topologically interlocked structure, polyrotaxanes have mechanical, electrical, and optical properties that differ from those of conventional polymers.
Additionally, the mechanically interlocked structure is maintained in slide-ring materials, a type of supramolecular network synthesized by crosslinking the rings (so-called figure-of-eight crosslinking) of different polyrotaxanes. In slide-ring materials, the ring crosslinks can move freely along the axles to equalize the tension of the threaded polymer network, similar to pulleys. With this specific structure, slide-ring materials can be fabricated into highly stretchable engineering materials owing to their distinctive mechanical properties.
If the rings and axles are biodegradable and biocompatible, polyrotaxanes can also be used for biomedical applications, such as gene or drug delivery. The advantage of polyrotaxanes over other biomedical polymers, such as polysaccharides, is that the interlocked structure is maintained by bulky stoppers at the ends of the strings: if the bulky stoppers are removed, for example by a chemical stimulus, the rings dethread from the axles. This drastic structural change can be used for programmed drug or gene delivery, in which the drug or gene is released together with the rings when the stoppers are cut off at the specific destination.
Types of polyrotaxanes
According to the location of the rotaxane units, polyrotaxanes can be divided into two main types: main chain polyrotaxanes, in which the rotaxane units are located on the main chain (axle), and side chain polyrotaxanes, in which the rotaxane units are located on the side chains. Corresponding polypseudorotaxanes, which lack stoppers at the ends, can be divided on the same principle into main chain polypseudorotaxanes and side chain polypseudorotaxanes.
In both main chain and side chain polyrotaxanes, the feature that distinguishes them from other polymers is the potential for motion of the ring units relative to the string units. Because the shape and location of the assembly can respond differently to changes in temperature, pH, or other environmental conditions, polyrotaxanes have many distinctive properties.
Main chain polyrotaxanes
Main chain polyrotaxanes are formed by host–guest interactions of polymer backbones (main chain) with cyclic molecules that are interlocked by bulky stoppers.
There are five major synthesis routes for main chain polyrotaxanes.
(1) Cyclization in the presence of main chain.
This synthesis route requires highly dilute conditions for the cyclization reactions. In most cases, however, it is difficult to sustain such dilute conditions for rotaxane formation. Possible ways around this problem are template cyclizations, such as cyclization based on metal chelation, charge-transfer complexation, or inclusion complexes.
(2) Polymerization of monomeric rotaxanes units.
Through polymerizing stable rotaxane monomers, polyrotaxanes are obtained. This method requires that the monomeric rotaxane units are stable in the solvent and have active groups that can be polymerized, which means the rings will not dethread from the main chain.
(3) Chemical conversion.
Specially designed linear polymers are required in this method. Designed monomers are polymerized to obtain linear polymers carrying precursors of cyclic compounds. After bulky stoppers are attached to both ends of the polymer chains, the "temporary" chemical bonds in the precursors are cleaved to generate cyclic structures on the main chain, yielding a polyrotaxane. The disadvantage of this method is the complex chemistry needed to design and synthesize the precursor-bearing linear polymers and to convert them into polyrotaxanes, e.g. the selective chemical bond cleavage; many synthesis steps are required.
(4) Threading of preformed main chain molecules through preformed rings.
The fourth approach is the simplest method for synthesizing polyrotaxanes. By mixing the main chain polymers and the rings in solution, polyrotaxanes are obtained after bulky stoppers are added to prevent the rings from dethreading. The number of rings on each chain depends on the threading equilibrium. Kinetic limitations due to the low concentration of chain ends, as well as entropic effects, also need consideration. To overcome these obstacles, template threading (see below) is a feasible alternative that can increase the number of threaded rings by shifting the equilibrium.
(5) Production of linear macromolecule in the presence of preformed rings.
Two general methods are included in this approach: the "statistical approach" and "template threading approach".
In the "statistical approach", the interaction between rings and strings is weakly attractive or repulsive or even negligible. Through employing an excess of rings, the equilibrium for threading or dethreading is forced to the threading side before polymerization. Compared with synthesis route 1, rings are a major constituent of the system instead of the rotaxanes, so the high dilution conditions are not required for this methods.
In "template threading approach", unlike the statistical approach, the interactions between rings and strings need to be attractive, such as metal chelations or charge transfer interactions which have been mentioned in the synthesis route 1. Because of this, the equilibrium is enthalpically driven, where the enthalpy is negative. In this method, high numbers of threading rings can be achieved, thus it is a useful way to stoichiometrically control the rings ratio of polyrotaxanes.
An example of the "statistical approach" is that a polyrotaxane was synthesized through polymerizing the rotaxane monomer that was assembled by oligomeric ethylene glycols (string) and crown ethers (ring) and naphthalene-1,5-di-isocyanate (stopper), which involves the threading equilibrium in the chain-ring system.
Cyclodextrins have been extensively studied as host molecules (rings) in polyrotaxanes. Poly(ethylene glycol)s can assemble with α-cyclodextrins to form a molecular necklace: every two ethyleneoxy repeat units of poly(ethylene glycol) thread through one α-cyclodextrin. Models confirm that the length of two repeat units in a zig-zag conformation corresponds to the size of the cavity of α-cyclodextrin. This is a classical example of "template threading", which also explains why poly(ethylene glycol)s are not able to form rotaxanes with β-cyclodextrin.
Crown ethers are another type of monomacrocyclic host used in the synthesis of polyrotaxanes. These polyrotaxanes can be prepared by carrying out step-growth polymerizations in the presence of aliphatic crown ethers. In most cases, hydrogen bonding between the crown ethers and OH or NH/NHCO moieties is involved in the formation of the assemblies. The threading efficiency increases with the size of the crown ether, and bulky stoppers also greatly increase the threading efficiency.
Metal coordination can also be used to construct polyrotaxane structures. In this method, metal ions are employed as synthesis templates to determine the coordinating sites of the rotaxane structures. Conjugated polyrotaxanes can be synthesized through metal-template strategies followed by electropolymerization, which allows tuning of the electronic coupling between the ring sites and the conjugated backbone (string).
Side chain polyrotaxanes
Side chain polyrotaxanes are formed by host–guest interactions of polymer side chains with cyclic molecules that are interlocked by bulky stoppers.
There are mainly three types of side chain polyrotaxanes:
(1) Polyaxis/rotor: Comb-like polymers assembled with the cyclic molecules that are not interlocked on the side chain.
(2) Polyrotor/axis: polymers possess cyclic molecules on the side chain, which assemble with guest molecules to form polypseudorotaxanes.
(3) Polyrotor/polyaxis: polymers possessing covalently bonded cyclic moieties, assembled with polymers possessing guest units in the side chain.
Similar to the synthesis routes to main chain polyrotaxanes, there are mainly six approaches to side chain polyrotaxane.
(1) Ring-threading of performed graft polymer
(2) Ring-grafting
(3) Rotaxane-grafting
(4) Polymerization of macromonomer with rings
(5) Polymerization of rotaxane-monomer
(6) Chemical conversion
Similarly, the positions of chain and rings can be switched, which results in corresponding side-chain polyrotaxanes.
Properties
In a polyrotaxane, unlike a conventional polymer, the components are linked by mechanical bonds and noncovalent interactions, such as hydrogen bonding or charge transfer, rather than covalent bonds. The rings are also capable of rotating around or shuttling along the axles, resulting in a large number of degrees of freedom. This unconventional combination of molecules leads to the distinctive properties of polyrotaxanes.
Stability and solubility
Due to the stoppers at the ends of the rotaxane units, polyrotaxanes are more thermodynamically stable than polypseudorotaxanes (which have a similar structure but lack stoppers at the two ends). Polyrotaxanes in which the interactions between guest and host molecules are attractive, such as hydrogen bonding or charge transfer, are also more stable than those without attractive interactions. However, specific salts, or changes of pH or temperature, that disturb or interrupt the ring–ring, ring–backbone, or backbone–backbone interactions will destroy the structural integrity of polyrotaxanes. For example, dimethylformamide or dimethyl sulfoxide is able to disrupt the hydrogen bonding between cyclodextrins in cyclodextrin-based polyrotaxanes, and changes of pH or high temperature can also destroy the crystalline domains. Some chemical bonds between stoppers and chains are not stable in acidic or basic solution; when the stoppers are cleaved from the chain, the rings dethread from the axles, which leads to the dissociation of the polyrotaxanes.
For example, a "molecular necklace" assembled by α-cyclodextrins and polyethylene glycol is insoluble in water and dimethylfomamide, although their parents' components α-cyclodextrin and polyethylene glycol can be dissolved and this synthesis can happen in water. The product is soluble in dimethyl sulfoxide or 0.1 M sodium hydroxide solution. This is because the hydrogen bonding between the cyclodextrins. As the hydrogen bonding is destroyed by dimethyl sulfoxide or base solution, it can be dissolved, but the water does not deform the hydrogen interaction between cyclodextrins. In addition, the complexation process is exothermic in thermodynamic tests, which is also corresponding with the existence of hydrogen bonding.
Photoelectronic properties
One of the properties of polyrotaxanes involves the photoelectronic response obtained when photoactive or electroactive units are introduced into the mechanically interlocked structure.
For example, polyrotaxane structures are capable of enhancing fluorescence quenching between molecules grafted on the rings and other molecules at the chain ends. Amplification of a fluorescent chemosensor can be achieved by using a polyrotaxane structure, which enhances energy migration in the polymer. It was found that a rapid migration of the hole–electron pair to the rotaxane sites is followed by rapid recombination, which leads to the enhancement of the energy migration. In addition, the conductivity of these polyrotaxanes was lower than that of the parent components.
Conductive polyrotaxanes can also be obtained by employing metal binding in the polyrotaxane structure. For example, a polyrotaxane containing a conjugated backbone can be synthesized through metal templating and electropolymerization. The metal ion binding is reversible when another metal with stronger binding ability is employed to remove the previous ion, which results in "scaffolding effect reversibility": the free coordination sites and the organic matrix can be maintained by the labile scaffold.
Potential application
Molecular machines
Many mechanically interlocked molecules have been studied as building blocks for molecular machines. Because the molecules are linked by mechanical bonds instead of conventional covalent bonds, one component can move (shuttle) along or rotate around the other, resulting in a large number of degrees of freedom. Polyrotaxanes, as the polymeric form of the corresponding rotaxanes, are likewise applied in molecular machines.
For example, the shuttling behavior of a molecular shuttle can be controlled by the solvent or temperature. Owing to the hydrophobic interaction between rings and strings, and the repulsive interaction between rings and linkers, conditions that influence these interactions can be used to control the mobility of the rings in the molecular shuttle. In addition, if cationic or anionic units are employed to form the polyrotaxane, salts or the pH of the solution also influence the charge interactions between rings and strings, offering an alternative way to control the ring motion of the molecular shuttle.
Poly[2]rotaxane "daisy chains" (like a string of daisies with stems linked to form a chain)is an example of a molecule that can be used to form a molecular muscle. Poly[2]rotaxane can expand or shrink in response to external stimulus, which is similar to behaviors of muscle, an ideal model to construct a "molecular muscle". The ring stations on the chain can be controlled by pH or light. Due to "daisy chain" structure, two rings on the daisy chain will shift from one station to another station, which changes the distance between two rings as well as the state of the whole daisy chain. When the rings come close, the whole size of the daisy chain will increase, which is the "expand" state. As the rings get to the further station, the molecule become the "shrink" state as the decreased size.
Slide-ring materials
By chemically crosslinking the rings contained in polyrotaxanes, sliding gels topologically interlocked by figure-of-eight crosslinks are obtained. Although a sliding gel is a polymer network, the rings are not fixed on the polyrotaxanes: the ring crosslinks are able to move freely along the polymer chains. This equalizes the tension of the network in a pulley-like manner, which is referred to as the pulley effect. In chemical gels, the polymer chains are easily broken because the lengths of the heterogeneous chains between crosslinks are limited and fixed; as a result, when a chemical gel is under high stress, the tension cannot be equalized across the whole network, and the weakest part of the network breaks first, damaging the gel. In slide-ring materials, by contrast, the polymer chains are able to pass through the figure-of-eight crosslinks, which act like pulleys and equalize the tension of the network. As a result, slide-ring materials are used to construct highly stretchable materials, extensible up to 24 times their original length, and the process can be reversible.
Drug/gene delivery
Although polyrotaxanes are formed from separate components, their solubilities differ from those of the host or guest molecules. For example, in cyclodextrin-based polyrotaxanes, owing to the hydrophilicity or high polarity of the exterior of the cyclodextrins, some polyrotaxanes can be dissolved in water or other polar solvents even though the guest molecules are hydrophobic or nonpolar. These water-soluble polyrotaxanes can be applied as drug or gene carriers.
There are two main advantages for polyrotaxanes applied to drug/gene delivery:
Targeting
Because the mechanically interlocked structure is maintained by bulky stoppers at the ends of the strings, if the bulky stoppers are removed, for example by a chemical stimulus, the rings dethread from the axles. This drastic structural change can be used for programmed drug or gene delivery, in which the drug or gene is released together with the rings when the stoppers are cut off at the specific destination.
For example, an enhanced gene delivery vehicle can be obtained by using a polyrotaxane formed from rings, a backbone, and stoppers linked by a disulfide bond (or another chemical bond that can be selectively cleaved in the body). Cation-functionalized polyrotaxanes can bind with pDNA to form a complex through electrostatic interactions. Glutathione (or another corresponding chemical that can cleave the sensitive bond) is over-expressed in the target cells. When the polyrotaxane/plasmid DNA (pDNA) complexes are taken up by the target cells, intracellular glutathione cleaves the disulfide bond and cuts off the stoppers at the ends of the polyrotaxanes, which results in the dissociation of the polyrotaxanes. As the rings dethread from the chain, the pDNA is released together with the ring molecules.
Long-term controlled release
Another advantage of poly(pseudo)rotaxanes is their capacity for long-term release of drugs or genes. Some polyrotaxanes can be used to form a physical hydrogel, called a supramolecular hydrogel. In these cases, a three-dimensional physically crosslinked network formed by the poly(pseudo)rotaxanes is obtained, which is able to retain a large amount of water. If water-soluble drugs or genes are added to the solution, they can be encapsulated in the supramolecular hydrogel. Functional units can also be incorporated into the poly(pseudo)rotaxanes, which enhances the interaction between the poly(pseudo)rotaxanes and the encapsulated drugs or genes and provides the carriers with other predetermined functions. As the network swells further in the water-based environment, part of the carrier dissolves gradually, so the encapsulated drug or gene can be released from the hydrogel over a long period of time.
See also
Polyrotaxane-based paint
References
Supramolecular chemistry | Polyrotaxane | [
"Chemistry",
"Materials_science"
] | 4,167 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
53,990,911 | https://en.wikipedia.org/wiki/Gray%20molasses | Gray molasses is a method of sub-Doppler laser cooling of atoms. It employs principles from Sisyphus cooling in conjunction with a so-called "dark" state whose transition to the excited state is not addressed by the resonant lasers. Ultracold atomic physics experiments on atomic species with poorly-resolved hyperfine structure, like isotopes of lithium and potassium, often utilize gray molasses instead of Sisyphus cooling as a secondary cooling stage after the ubiquitous magneto-optical trap (MOT) to achieve temperatures below the Doppler limit. Unlike a MOT, which combines a molasses force with a confining force, a gray molasses can only slow but not trap atoms; hence, its efficacy as a cooling mechanism lasts only milliseconds before further cooling and trapping stages must be employed.
Overview
Like Sisyphus cooling, the cooling mechanism of gray molasses relies on a two-photon Raman-type transition between two hyperfine-split ground states mediated by an excited state. Orthogonal superpositions of these ground states constitute "bright" and "dark" states, so called since the former couples to the excited state via dipole transitions driven by the laser, and the latter is only accessible via spontaneous emission from the excited state. As neither is an eigenstate of the kinetic energy operator, the dark state also evolves into the bright state with frequency proportional to the atom's external momentum. Gradients in the polarization of the molasses beams create a sinusoidal potential energy landscape for the bright state, in which atoms lose kinetic energy by traveling "uphill" to potential energy maxima that coincide with circular polarizations capable of executing electric dipole transitions to the excited state. Atoms in the excited state are then optically pumped to the dark state and subsequently evolve back to the bright state to restart the cycle. Alternately, the pair of bright and dark ground states can be generated by electromagnetically-induced transparency (EIT).
The net effect of many cycles from bright to excited to dark states is to subject atoms to Sisyphus-like cooling in the bright state and select the coldest atoms to enter the dark state and escape the cycle. The latter process constitutes velocity-selective coherent population trapping (VSCPT).
The combination of bright and dark states thus inspires the name "gray molasses."
History
In 1988, the NIST group in Washington led by William Phillips first measured temperatures below the Doppler limit in sodium atoms in an optical molasses, prompting the search for the theoretical underpinnings of sub-Doppler cooling. The next year, Jean Dalibard and Claude Cohen-Tannoudji identified the cause as the multi-photon process of Sisyphus cooling, and Steven Chu's group likewise modeled sub-Doppler cooling as fundamentally an optical pumping scheme. As a result of their efforts, Phillips, Cohen-Tannoudji, and Chu jointly won the 1997 Nobel Prize in Physics. T. W. Hänsch et al. first outlined the theoretical formulation of gray molasses in 1994, and a four-beam experimental realization in cesium was achieved by G. Grynberg the next year. It has since been regularly used to cool all the other alkali (hydrogenic) metals.
Comparison to Sisyphus Cooling
In Sisyphus cooling, the two Zeeman levels $m = \pm 1/2$ of an atomic ground state manifold experience equal and opposite AC Stark shifts from the near-resonant counter-propagating beams. The beams also effect a polarization gradient, alternating between linear and circular polarizations. The potential energy maxima of one sublevel coincide with pure circular polarization, which optically pumps atoms to the other sublevel, which experiences its minima in the same locations. Over time, the atoms expend their kinetic energy traversing the potential energy landscape, transferring the potential energy difference between the crests and troughs of the AC-Stark-shifted ground state levels to emitted photons.
In contrast, gray molasses has only one sinusoidally light-shifted ground state; optical pumping at the peaks of this potential energy landscape takes atoms to the dark state, which can selectively evolve back to the bright state and re-enter the cycle with sufficient momentum. Sisyphus cooling is difficult to implement when the excited state manifold is poorly resolved (i.e., with hyperfine spacing comparable to or less than the constituent linewidths); in these atomic species, the Raman-type gray molasses is preferable.
Theory
Dressed-State Picture
Denote the two ground states and the excited state of the electron $|g_1\rangle$, $|g_2\rangle$ and $|e\rangle$, respectively. The atom also has overall momentum $p$, so the overall state of the atom is a product state of its internal state and its momentum. In the presence of counter-propagating beams of opposite polarization, the internal states experience the atom-light interaction Hamiltonian

$$V = \frac{\hbar\Omega}{2}\left(e^{ikz}\,|e\rangle\langle g_1| + e^{-ikz}\,|e\rangle\langle g_2|\right) + \mathrm{h.c.},$$

where $\Omega$ is the Rabi frequency, approximated to be the same for both transitions. Using the definition of the translation operator in momentum space,

$$e^{\pm ikz}\,|p\rangle = |p \pm \hbar k\rangle,$$

the effect of $V$ on the ground states is

$$V\,|g_1, p-\hbar k\rangle = \frac{\hbar\Omega}{2}\,|e, p\rangle, \qquad V\,|g_2, p+\hbar k\rangle = \frac{\hbar\Omega}{2}\,|e, p\rangle.$$

This suggests that the dressed state that couples to $|e, p\rangle$ is a more convenient basis state of the two ground states. The orthogonal basis state defined below does not couple to $|e, p\rangle$ at all:

$$|\psi_B(p)\rangle = \frac{1}{\sqrt{2}}\left(|g_1, p-\hbar k\rangle + |g_2, p+\hbar k\rangle\right), \qquad |\psi_D(p)\rangle = \frac{1}{\sqrt{2}}\left(|g_1, p-\hbar k\rangle - |g_2, p+\hbar k\rangle\right).$$

The action of $V$ on these states is

$$V\,|\psi_B(p)\rangle = \frac{\hbar\Omega}{\sqrt{2}}\,|e, p\rangle, \qquad V\,|\psi_D(p)\rangle = 0.$$

Thus, $|\psi_B\rangle$ and $|e, p\rangle$ undergo Sisyphus-like cooling, identifying the former as the bright state. $|\psi_D\rangle$ is optically inaccessible and constitutes the dark state. However, $|\psi_B\rangle$ and $|\psi_D\rangle$ are not eigenstates of the momentum operator, and thus motionally couple to one another via the kinetic energy term of the unperturbed Hamiltonian:

$$\left\langle \psi_B(p)\right|\frac{\hat{p}^2}{2m}\left|\psi_D(p)\right\rangle = -\frac{\hbar k p}{m}.$$
As a result of this coupling, the dark state evolves into the bright state with frequency proportional to the momentum, effectively selecting hotter atoms to re-enter the Sisyphus cooling cycle. This nonadiabatic coupling occurs predominantly at the potential minima of the light-shifted coupling state. Over time, atoms cool until they lack the momentum to traverse the sinusoidal light shift of the bright state and instead populate the dark state.
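The structure of this argument can be checked numerically. The sketch below builds the Hamiltonian in the basis {|g1, p−ħk⟩, |g2, p+ħk⟩, |e, p⟩} in units ħ = m = k = 1, keeping only the coherent dynamics (no spontaneous emission), and confirms that only the bright state couples optically to the excited state while the bright–dark motional coupling grows linearly with momentum. The detuning convention and parameter values are illustrative assumptions.

```python
import numpy as np

def hamiltonian(p, omega, delta):
    """Three-level Hamiltonian in basis |g1, p-1>, |g2, p+1>, |e, p>
    (units hbar = m = k = 1; detuning delta applied to the excited state)."""
    ke = np.array([(p - 1.0) ** 2, (p + 1.0) ** 2, p**2]) / 2.0
    h = np.diag(ke + np.array([0.0, 0.0, -delta]))
    h[2, 0] = h[0, 2] = omega / 2.0  # |g1> <-> |e> coupling
    h[2, 1] = h[1, 2] = omega / 2.0  # |g2> <-> |e> coupling
    return h

bright = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
dark = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)

for p in (0.0, 0.5, 2.0):
    h = hamiltonian(p, omega=1.0, delta=3.0)
    print("p =", p,
          "| <e|H|B> =", round(h[2] @ bright, 3),   # Omega/sqrt(2), independent of p
          "| <e|H|D> =", round(h[2] @ dark, 3),     # 0: dark state is decoupled
          "| <B|H|D> =", round(bright @ h @ dark, 3))  # motional coupling ~ -k p / m
```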
Raman Condition
The resonance condition of any $\Lambda$-type Raman process requires that the difference in the two photon energies match the difference in energy between the states at the "legs" of the $\Lambda$, here the ground states identified above. In experimental settings, this condition is realized when the detunings of the cycling and repumper frequencies with respect to the $|g_1\rangle \rightarrow |e\rangle$ and $|g_2\rangle \rightarrow |e\rangle$ transition frequencies, respectively, are equal.
Unlike most Doppler cooling techniques, light in the gray molasses must be blue-detuned from its resonant transition; the resulting Doppler heating is offset by polarization-gradient cooling. Qualitatively, this is because the choice of a common blue detuning $\delta > 0$ means that the AC Stark shifts of the three levels are the same sign at any given position. Selecting the potential energy maxima as the sites of optical pumping to the dark state requires the overall light to be blue-detuned; in doing so, the atoms in the bright state traverse the maximum potential energy difference and thus dissipate the most kinetic energy. A full quantitative explanation of the molasses force with respect to detuning can be found in Hänsch's paper.
See also
Sisyphus cooling
Raman cooling
Notes
References
Atomic physics
Cooling technology
Laser applications
Thermodynamics | Gray molasses | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,502 | [
"Dynamical systems",
"Quantum mechanics",
"Atomic physics",
"Thermodynamics",
" molecular",
"Atomic",
" and optical physics"
] |
53,991,267 | https://en.wikipedia.org/wiki/CCPForge | The Collaborative Computational Projects (CCP) group was responsible for the development of CCPForge, which is a software development tool produced through collaborations by the CCP community. CCPs allow experts in computational research to come together and develop scientific software which can be applied to numerous research fields. It is used as a tool in many research and development areas, and hosts a variety of projects. Every CCP project is the result of years of valuable work by computational researchers.
Projects are advised to have a single application area; this helps users search the category and classification system to find the right project for their work. A project can fall under up to three CCPs, provided it is a collaboration, and each classification category has sub-sections to filter the category further. CCPForge projects provide essential information that has been used in publications such as 'Recent developments in R-matrix applications to molecular processes' and 'Ab initio derivation of Hubbard models for cold atoms in optical lattices', in which codes from CCPQ were used.
The Joint Information Systems Committee (JISC) and EPSRC both fund the CCPForge project. The Scientific Computing Department (SCD) of the Science and Technology Facilities Council is responsible for the development and maintenance of CCPForge, and this is funded by a long-term support grant from EPSRC.
Current Projects
* CCPQ was formed from CCP2 "Continuum States of Atoms and Molecules", incorporating aspects of CCP6 "Molecular Quantum Dynamics".
References
Engineering and Physical Sciences Research Council
Jisc
Science and Technology Facilities Council
Science and technology in Oxfordshire
Computational physics
Computational chemistry | CCPForge | [
"Physics",
"Chemistry"
] | 338 | [
"Theoretical chemistry",
"Computational chemistry",
"Computational physics"
] |
53,992,385 | https://en.wikipedia.org/wiki/Collaborative%20Computational%20Project%20Q | Collaborative Computational Project Q (CCPQ) was developed in order to provide software which uses theoretical techniques to catalogue collisions between electrons, positrons or photons and atomic/molecular targets. The 'Q' stands for quantum dynamics. This project is accessible via the CCPForge website, which contains numerous other projects such as CCP2 and CCP4. The scope has increased to include atoms and molecules in strong (long-pulse and attosecond) laser fields, low-energy interactions of antihydrogen with small atoms and molecules, cold atoms, Bose–Einstein condensates and optical lattices. CCPQ gives essential information on the reactivity of various molecules, and contains two community codes R-matrix suite and MCTDH wavepacket dynamics.
The project is supported by the Atomic and Molecular Physics group at Daresbury Laboratory, which supports research in core computational and scientific codes and research.
This project is a collaboration between University College London (UCL), the University of Bath, and Queen's University Belfast. The project is led by Professor Graham Worth, who is the Chair, alongside Vice-Chairs Dr Stephen Clark and Professor Hugo van der Hart. Quantemol Ltd is also a close partner of the project. The project is the successor of the earlier Collaborative Computational Project 2 (CCP2) and is an improved version of that older project. CCPQ (and its predecessor CCP2) have supported various incarnations of the UK Molecular R-matrix project for almost 40 years.
Applications
Both academic and industrial researchers use CCPQ. One of its uses is in the field of plasma research; reliable data on electron and light interactions is essential in order to model plasma processes used on both small and large scales. Large-scale industrial processes need to investigate the implementation of new methods thoroughly, and CCPQ can be used to theoretically determine the value of new processes.
CCPQ has been used to study the Hubbard models for cold atoms in optical lattices, as it provides codes used in this area of research. CCPQ hosted the necessary code on the CCPForge website, which contains other computational research projects.
References
Computational particle physics
Computational physics
Science and Technology Facilities Council
University College London | Collaborative Computational Project Q | [
"Physics"
] | 454 | [
"Computational particle physics",
"Computational physics",
"Particle physics",
"Particle physics stubs",
"Computational physics stubs"
] |
53,992,606 | https://en.wikipedia.org/wiki/Common%20Electrical%20I/O | The Common Electrical I/O (CEI) refers to a series of influential Interoperability Agreements (IAs) that have been published by the Optical Internetworking Forum (OIF). CEI defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s electrical interfaces.
CEI, the Common Electrical I/O
The Common Electrical I/O (CEI) Interoperability Agreement published by the OIF defines the electrical and jitter requirements for 3.125, 6, 11, 25-28, and 56 Gbit/s SerDes interfaces. This CEI specification has defined SerDes interfaces for the industry since 2004, and it has been highly influential. The development of electrical interfaces at the OIF began with SPI-3 in 2000, and the first differential interface was published in 2003. The seventh generation electrical interface, CEI-56G, defines five reaches of 56 Gbit/s interfaces. The OIF completed work on its eighth generation through its CEI-112G project. The OIF has launched its ninth generation with its CEI-224G project. CEI has influenced or has been adopted or adapted in many other serial interface standards by many different standards organizations over its long lifetime. SerDes interfaces have been developed based on CEI for most ASIC and FPGA products.
CEI direct predecessors
Throughout the 2000s, the OIF produced an important series of interfaces that influenced the development of multiple generations of devices. Beginning with the donation of the PL-3 interface by PMC-Sierra in 2000, the OIF produced the System Packet Interface (SPI) family of packet interfaces. SPI-3 and SPI-4.2 defined two generations of devices before they were supplanted by the closely related Interlaken standard in the SPI-5 generation in 2006.
The OIF also defined the SerDes Framer Interface (SFI) family of specifications in parallel with SPI. As a part of the SPI-5 and SFI-5 development, a common electrical interface was developed termed SxI-5. SxI-5 abstracted the electrical I/O interface away from the individual SPI and SFI documents. This abstraction laid the groundwork for the highly successful CEI family of Interoperability Agreements and was incorporated in the original release of CEI 1.0 a generation later.
Generations of OIF Electrical Interfaces
Two earlier generations in this development path were defined by some of the same individuals at the ATM Forum in 1994 and 1995. These specifications were called UTOPIA Level 1 and 2. These operated at 25 Mbit/s (0.025 Gbit/s) and 50 Mbit/s per wire single ended and were used in OC-3 (155 Mbit/s) applications. PL-3 was a packet extension of the cells carried by those earlier interfaces.
Public demonstrations
Compliant implementations to the draft CEI-56G IAs were demonstrated in the OIF booth at the Optical Fiber Conference in 2015, 2016 and 2017.
References
Digital electronics
Ethernet
Synchronous optical networking
Fiber-optic communications | Common Electrical I/O | [
"Engineering"
] | 645 | [
"Electronic engineering",
"Digital electronics"
] |
53,992,715 | https://en.wikipedia.org/wiki/UK%20Molecular%20R-matrix%20Codes | The UK Molecular R-Matrix codes are a set of software routines used to calculate the effects of collisions of electrons with atoms and molecules. The R-matrix method is used in computational quantum mechanics to study scattering of positrons and electrons by atomic and molecular targets. The fundamental idea was originally introduced by Eugene Wigner and Leonard Eisenbud in the 1940s. The method uses the fixed nuclei approximation, where the molecule's nuclei are considered fixed while the collision occurs and the electronic part of the problem is solved. This information is then fed into calculations which take nuclear motion into account. The UK Molecular R-Matrix codes were developed by the Collaborative Computational Project Q (CCPQ).
Software
The CCPQ and CCP2 have supported various incarnations of the UK Molecular R-matrix project for almost 40 years. The UK Molecular R-Matrix Group is a subgroup of CCP2, and its codes are maintained by Professor Jonathan Tennyson and his group of researchers. Advances in research have shown that the UK Molecular R-matrix codes can be used to explain scattering problems involving light molecular targets.
Quantemol-N (QN) is software that allows the UK molecular R-matrix codes, which are used to model electron-polyatomic molecule interactions, to be employed quickly with reduced set-up times. QN is an interface that simplifies the process of using the sophisticated UK molecular R-matrix codes.
See also
Quantemol
References
Matrices
Particle physics
Quantum mechanics | UK Molecular R-matrix Codes | [
"Physics",
"Mathematics"
] | 304 | [
"Theoretical physics",
"Mathematical objects",
"Quantum mechanics",
"Matrices (mathematics)",
"Particle physics",
"Matrix stubs",
"Particle physics stubs",
"Quantum physics stubs"
] |
50,211,107 | https://en.wikipedia.org/wiki/Bayesian%20structural%20time%20series | Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data.
The model also has promising applications in the field of analytical marketing. In particular, it can be used to assess how much different marketing campaigns have contributed to changes in web search volumes, product sales, brand popularity and other relevant indicators. Difference-in-differences models and interrupted time series designs are alternatives to this approach. "In contrast to classical difference-in-differences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) flexibly accommodate multiple sources of variation, including the time-varying influence of contemporaneous covariates, i.e., synthetic controls."
General model description
The model consists of three main components:
Kalman filter. A technique for time series decomposition. In this step, a researcher can add different state variables: trend, seasonality, regression, and others.
Spike-and-slab method. In this step, the most important regression predictors are selected.
Bayesian model averaging. Combining the results and calculating the prediction.
The model can be used to infer causation by comparing its counterfactual prediction with the observed data.
A possible drawback of the model is its relatively complicated mathematical underpinning and difficult implementation as a computer program. However, the programming language R has ready-to-use packages for fitting the BSTS model, which do not require a strong mathematical background from the researcher.
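As an illustration of the first component, the following minimal Python sketch (using only NumPy) runs a Kalman filter on a simulated local-level model, the simplest structural time series; the series length and the observation and level variances are hypothetical choices, and a full BSTS analysis would add the spike-and-slab and model-averaging steps on top of this.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local-level model: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t
sigma_obs, sigma_level = 1.0, 0.3  # assumed standard deviations
n = 200
level = np.cumsum(rng.normal(0.0, sigma_level, n))  # latent random-walk level
y = level + rng.normal(0.0, sigma_obs, n)           # noisy observations

# Kalman filter for the scalar local-level model
mu, P = 0.0, 1e6  # diffuse initial state mean and variance
filtered = np.empty(n)
for t in range(n):
    P += sigma_level**2         # predict: the level follows a random walk
    K = P / (P + sigma_obs**2)  # Kalman gain
    mu += K * (y[t] - mu)       # update with the observation
    P *= 1.0 - K
    filtered[t] = mu

print("RMSE of filtered level:", np.sqrt(np.mean((filtered - level) ** 2)))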
See also
Bayesian inference using Gibbs sampling
Correlation does not imply causation
Spike-and-slab regression
References
Further reading
Scott, S. L., & Varian, H. R. 2014a. Bayesian variable selection for nowcasting economic time series. Economic Analysis of the Digital Economy.
Scott, S. L., & Varian, H. R. 2014b. Predicting the present with Bayesian structural time series. International Journal of Mathematical Modelling and Numerical Optimisation.
Varian, H. R. 2014. Big Data: New Tricks for Econometrics. Journal of Economic Perspectives
Brodersen, K. H., Gallusser, F., Koehler, J., Remy, N., & Scott, S. L. 2015. Inferring causal impact using Bayesian structural time-series models. The Annals of Applied Statistics.
R package "bsts".
R package "CausalImpact".
O’Hara, R. B., & Sillanpää, M. J. 2009. A review of Bayesian variable selection methods: what, how and which. Bayesian analysis.
Hoeting, J. A., Madigan, D., Raftery, A. E., & Volinsky, C. T. 1999. Bayesian model averaging: a tutorial. Statistical science.
Machine learning
Bayesian statistics
Time series | Bayesian structural time series | [
"Engineering"
] | 648 | [
"Artificial intelligence engineering",
"Machine learning"
] |
34,274,639 | https://en.wikipedia.org/wiki/Electromagnetic%20clutches%20and%20brakes | Electromagnetic clutches and brakes operate electrically, but transmit torque mechanically. This is why they used to be referred to as electro-mechanical clutches or brakes. Over the years, EM became known as electromagnetic versus electro mechanical, referring more about their actuation method versus physical operation. Since the clutches started becoming popular over 60 years ago, the variety of applications and brake and clutch designs has increased dramatically, but the basic operation remains the same.
This article is about the working principles of single face friction plate clutches and brakes. In this article, clutches and brakes are referred to as (mechanical) couplings.
Construction
A horseshoe magnet (A-1) has a north and south pole. If a piece of carbon steel contacts both poles, a magnetic circuit is created. In an electromagnetic coupling, the north and south pole is created by a coil shell and a wound coil.
Clutches
In a clutch (B1), when power is applied, a magnetic field is created in the coil (A2, blue). This field (flux) overcomes an air gap between the clutch rotor (A2, yellow) and the armature (A2, red). The magnetic attraction pulls the armature into contact with the rotor face. The frictional contact, which is controlled by the strength of the magnetic field, is what causes the rotational motion to start.
The torque comes from the magnetic attraction of the coil and the friction between the steel of the armature and the steel of the clutch rotor or brake field. For many industrial couplings, friction material is used between the poles. The material is mainly used to help decrease the wear rate, but different types of material can also be used to change the coefficient of friction (torque) for special applications. For example, if the coupling is required to have an extended time to speed/stop or slip time, a low-coefficient friction material can be used. Conversely, if a coupling is required to have a slightly higher torque (mostly for low-rpm applications), a high-coefficient friction material can be used.
The electromagnetic lines of flux have to attract and pull the armature into contact with the rotor to complete engagement. Most industrial couplings use what is called a single-flux, two-pole design (A-2). Mobile clutches and other specialty electromagnetic clutches can use a double- or triple-flux rotor (A-4). The double or triple flux refers to the number of north–south flux paths (A-6) in the rotor and armature. These slots (banana slots) (A-7) create an air gap which causes the flux path to take the path of least resistance when the faces are engaged. This means that, if the armature is designed properly and has similar banana slots, the flux path leaps from pole to pole, alternating north–south, north–south (A-6). By having more points of contact, the torque can be greatly increased. In theory, if there were two sets of poles at the same diameter, the torque would double in a clutch. Obviously, that is not possible to do, so the points of contact have to be at a smaller inner diameter. Also, there are magnetic flux losses because of the bridges between the banana slots. But by using a double-flux design, a 30–50% increase in torque can be achieved, and by using a triple-flux design, a 40–90% increase in torque can be achieved. This is important in applications where size and weight are critical, such as automotive requirements.
The coil shell is made from carbon steel that combines good strength and good magnetic properties. Copper (sometimes aluminum) magnet wire is used to create the coil, which is held in the shell either by a bobbin or by some type of epoxy/adhesive.
To help increase life in applications, friction material is used between the poles. This friction material is flush with the steel on the coil shell or rotor; if the friction material were not flush, good magnetic traction could not occur between the faces. Some people look at electromagnetic clutches and mistakenly assume that, since the friction material is flush with the steel, the clutch has already worn down, but this is not the case. Clutches used in most mobile applications (automotive, agriculture, construction equipment) do not use friction material. Their cycle requirements tend to be lower than industrial clutches, and they are more cost-sensitive. Also, many mobile clutches are exposed to outside elements, so by not having friction material, the possibility of swelling (reduced torque), which can happen when friction material absorbs moisture, is eliminated.
Brakes
In an electromagnetic brake, the north and south poles are created by a coil shell and a wound coil. In a brake, the armature is pulled against the brake field (A-3). The frictional contact, which is controlled by the strength of the magnetic field, is what causes the rotational motion to stop.
Basic operation
Engagement of clutches
The clutch has four main parts: field, rotor, armature, and hub (output) (B1). When voltage is applied, the stationary magnetic field generates the lines of flux that pass into the rotor. (The rotor is normally connected to the part that is always moving in the machine.) The flux (magnetic attraction) pulls the armature into contact with the rotor (the armature is connected to the component that requires the acceleration), and the armature and the output start to accelerate. Slipping between the rotor face and the armature face continues until the input and output speeds are the same (100% lockup). The actual time for this is quite short, between 1/200th of a second and 1 second.
Engagement of brakes
There are three parts to an electromagnetic brake: field, armature, and hub (which is the input on a brake) (A-3). Usually the magnetic field is bolted to the machine frame (or uses a torque arm that can handle the torque of the brake), so when the armature is attracted to the field, the stopping torque is transferred into the field housing and into the machine frame, decelerating the load. This can happen very fast (0.1–3 s).
Disengagement
Disengagement is very simple. Once the field starts to degrade, flux falls rapidly and the armature separates. One or more springs hold the armature away from its corresponding contact surface at a predetermined air gap.
Voltage, current and the magnetic field
If a piece of copper wire is wound around a nail and then connected to a battery, it creates an electromagnet. The direction of the magnetic field generated in the wire by the current is given by the right-hand rule. (V-1) The strength of the magnetic field can be changed by changing both the wire size and the amount of wire (turns). EM couplings are similar; they use a copper (sometimes aluminum) wire coil to create a magnetic field.
The fields of EM couplings can be made to operate at almost any DC voltage, and the torque produced by the clutch or brake will be the same, as long as the correct operating voltage and current are used with the correct coupling. If a 90 V clutch, a 48 V clutch and a 24 V clutch are each powered at their respective voltages and currents, all will produce the same amount of torque. However, if 48 V were applied to the 90 V clutch, it would produce only about half of its rated torque output, because voltage/current is almost linear to torque in DC electromagnetic couplings.
A constant-current power supply is ideal if accurate or maximum torque is required from a coupling. If a non-regulated power supply is used, the magnetic flux will degrade as the resistance of the coil goes up. Basically, the hotter the coil gets, the lower the torque will be, by about an average of 8% for every 20 °C. Even if the temperature is fairly constant, there may not be enough service factor in the design for minor temperature fluctuation, so over-sizing the clutch compensates for the minor loss of flux. This allows the use of a rectified power supply, which is far less expensive than a constant-current supply.
Based on V = I × R, as resistance increases the available current falls. An increase in resistance often results from rising temperature as the coil heats up, according to:
Rf = Ri × [1 + αCu × (Tf - Ti)]
Where Rf = final resistance, Ri = initial resistance, αCu = copper wire's temperature coefficient of resistance, 0.0039 °C-1, Tf = final temperature, and Ti = initial temperature.
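As a hedged illustration of this relation, the short Python sketch below combines it with the roughly linear voltage/current-to-torque behaviour described above; the coil resistance, supply voltage and temperatures are hypothetical values, not data for any particular coupling.
# Hypothetical coil on a fixed-voltage (non-regulated) supply
ALPHA_CU = 0.0039  # copper temperature coefficient of resistance, 1/degC

def final_resistance(r_initial, t_initial, t_final):
    # Rf = Ri * [1 + alpha_Cu * (Tf - Ti)]
    return r_initial * (1.0 + ALPHA_CU * (t_final - t_initial))

v_supply = 90.0  # assumed supply voltage, V
r_cold = 100.0   # assumed coil resistance at 20 degC, ohm
r_hot = final_resistance(r_cold, 20.0, 80.0)

i_cold = v_supply / r_cold  # I = V / R
i_hot = v_supply / r_hot

# Torque is roughly proportional to current in DC electromagnetic couplings,
# so the fraction of cold torque retained when hot is roughly i_hot / i_cold.
print(f"hot resistance: {r_hot:.1f} ohm")
print(f"torque retained when hot: {100.0 * i_hot / i_cold:.0f}% of cold torque")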
Engagement time
There are actually two engagement times to consider in an electromagnetic coupling. The first is the time it takes for the coil to develop a magnetic field strong enough to pull in the armature. Within this, there are two factors to consider. The first is the number of ampere-turns in the coil, which determines the strength of the magnetic field. The second is the air gap, which is the space between the armature and the coil shell or rotor. Magnetic lines of flux diminish quickly in air, so the further the attracted piece is from the coil, the longer it takes for that piece to develop enough magnetic force to be attracted and pulled in across the air gap. For very high cycle applications, floating armatures can be used that rest lightly against the coil shell or rotor. In this case, the air gap is zero, but, more importantly, the response time is very consistent, since there is no air gap to overcome. The air gap is an important consideration, especially with a fixed-armature design, because as the unit wears over many cycles of engagement, the armature and the rotor create a larger air gap, which changes the engagement time of the clutch. In high cycle applications where registration is important, even a difference of 10–15 milliseconds can make a difference in the registration of a machine. Even in a normal cycle application this is important, because a new machine that has accurate timing can eventually see a "drift" in its accuracy as the machine gets older.
The second factor in figuring out response time of a coupling is actually much more important than the magnet wire or the air gap. It involves calculating the amount of inertia that the coupling needs to accelerate. This is referred to as "time to speed". In reality, this is what the end-user is most concerned with. Once it is known how much inertia is present for the clutch to start or for the brake to stop, then the torque can be calculated and the appropriate size of clutch can be chosen.
Most CAD systems can automatically calculate component inertia, but the key to sizing a brake or clutch is calculating how much inertia is reflected back to the clutch or brake. To do this, engineers use the formula:
T = (J × ΔΩ) / t, where T = required braking torque (in N m), J = rotational inertia of system to be braked (in kg m2), ΔΩ = required change in rotational speed (in rad/s), and t = time during which the acceleration or deceleration must take place (in s).
There are also online sites that can help confirm how much torque is required to decelerate or accelerate a given amount of inertia over a specific time. Remember to make sure that the torque chosen, for the clutch or brake, should be after it has been burnished.
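A minimal Python sketch of this sizing calculation follows; the inertia, speed change and stopping time are assumed example values, not data for any particular coupling.
import math

# T = (J * d_omega) / t : torque needed to change speed by d_omega in time t
J = 0.05             # assumed rotational inertia reflected to the brake, kg m^2
rpm_change = 1000.0  # assumed required change in speed, rpm
t_stop = 0.25        # assumed time allowed for the change, s

d_omega = rpm_change * 2.0 * math.pi / 60.0  # convert rpm to rad/s
T = J * d_omega / t_stop
print(f"required torque: {T:.1f} N m")  # about 20.9 N m with these values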
Burnishing
Burnishing is the wearing or mating of opposing surfaces. When the armature and rotor or brake faces are produced, the faces are machined as flat as possible. (Some manufacturers also lightly grind the faces to get them smoother.) Even so, the machining process leaves peaks and valleys on the surface of the steel. When a new, "out of the box" coupling is initially engaged, only the peaks on the two mating surfaces touch, which means that the actual contact area is significantly reduced. In some cases, an out-of-box coupling can deliver only 50% of its torque rating.
Burnishing is the process of cycling the coupling to wear down those initial peaks, so that there is more surface contact between the mating faces.
Even though burnishing is required to get full torque out of the coupling, it may not be required in all applications. Simply put, if the application torque is lower than the initial out of box torque of the coupling, burnishing is not required however, if the torque required is higher, then burnishing needs to be done. In general, this tends to be required more on higher torque couplings than on smaller torque couplings.
The process involves cycling the coupling a number of times at a lower inertia, lower speed or a combination of both. Burnishing can require anywhere from 20 to over 100 cycles, depending upon the size of the coupling and the amount of initial torque required. For bearing-mounted couplings, where the rotor and armature are connected and held in place via a bearing, burnishing does not have to take place on the machine; it can be done individually on a bench or in a group burnishing station. If a clutch has a separate armature and rotor (a two-piece unit), burnishing is done as a matched set to make sure proper torque is achieved. Similarly, two-piece brakes that have separate armatures should be burnished on the machine rather than on a bench, because any change in the mounting tolerance as the brake is mounted to the machine may shift the alignment, so the burnishing lines on the armature, rotor or brake face may be slightly off, preventing that brake from achieving full torque. Again, the difference is only slight, so this would only matter in a very torque-sensitive application.
Torque
Burnishing can affect initial torque of a coupling but there are also factors that affect the torque performance of a coupling in an application. The main one is voltage/current. In the voltage/current section, it was shown why a constant current supply is important to get full torque out of a coupling.
When considering torque, the question of whether to use dynamic or static torque for the application is key. For example, if a machine is running at a relatively low rpm (5–50, depending upon size), then dynamic torque is not a consideration, since the static torque rating of the coupling will come closest to where the application is running. However, if a machine is running at 3,000 rpm and the same full torque is required, the result will not be the same, because of the difference between static and dynamic torque. Almost all manufacturers publish the static rated torque for their couplings in their catalogs. If a specific response time is needed, the dynamic torque rating for a particular coupling at a given speed is required. In many cases, this can be significantly lower; sometimes it can be less than ½ of the static torque rating. Most manufacturers publish torque curves showing the relationship between dynamic and static torque for a given series of couplings. (T-1)
Over-excitation
Over-excitation is used to achieve a faster response time: a coil momentarily receives a higher voltage than its nominal rating. To be effective, the over-excitation voltage must be significantly higher than the normal coil voltage, though not to the point of diminishing returns. Three times the voltage typically gives around a ⅓ faster response; fifteen times the normal coil voltage will produce a 3 times faster response time. For example, a clutch coil rated for 6 V would need 90 V applied to achieve the 3-times-faster response.
With over-excitation, the in-rush voltage is momentary. Although it depends upon the size of the coil, the actual time is usually only a few milliseconds. The idea is for the coil to generate as strong a magnetic field as quickly as possible to attract the armature and start the process of acceleration or deceleration. Once the over-excitation is no longer required, the power supply to the clutch or brake returns to its normal operating voltage. This process can be repeated a number of times, as long as the high voltage does not remain in the coil long enough to cause the coil wire to overheat.
Wear
It is very rare for a coil to simply stop working in an electromagnetic coupling. Typically, if a coil fails, it is due to heat that has caused the insulation of the coil wire to break down. The heat can be caused by high ambient temperature, high cycle rates, slipping or applying too high a voltage. Bushings can be used in some clutches that have low speed, low side loads or low operating hours. At higher loads and speeds, bearing-mounted field/rotors and hubs are a better option. Most brakes are flange-mounted and have bearings, but some brakes are bearing-mounted. Like the coils, unless bearings are stressed beyond their physical limitations or become contaminated, they tend to have a long life; they are usually the second item to wear out.
The main wear in electromagnetic couplings occurs on the faces of the mating surfaces. Every time a coupling is engaged during rotation, a certain amount of energy is transferred as heat. The transfer, which occurs during rotation, wears both the armature and the opposing contact surface. Based upon the size of the clutch or brake, the speed and the inertia, wear rates will differ. For example, a machine with a clutch that was running at 500 rpm and is now sped up to 1,000 rpm would have its wear rate significantly increased, because the amount of energy required to start the same amount of inertia is much higher at the higher speed. With a fixed-armature design, a coupling will eventually simply cease to engage, because the air gap will eventually become too large for the magnetic field to overcome. Zero-gap or auto-wear armatures can wear to less than one half of their original thickness, which will eventually cause missed engagements.
Designers can estimate life from the energy transferred each time the brake or clutch engages.
Ee = [m × v² × τd] / [182 × (τd + τl)]
Where Ee = energy per engagement, m = inertia, v = speed, τd = dynamic torque, and τl = load torque.
Knowing the energy per engagement lets the designer calculate the number of engagement cycles the clutch or brake will last:
L = V / (Ee × w)
Where L = unit life in number of cycles, V = total engagement area, and w = wear rate.
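The following Python sketch chains the two formulas; every input, including the engagement area and wear rate, is a hypothetical placeholder in the mixed catalogue units implied by the constant 182, so only the arithmetic is illustrated.
# Hypothetical life estimate from the two formulas above
def energy_per_engagement(m, v, t_dynamic, t_load):
    # Ee = (m * v^2 * Td) / (182 * (Td + Tl))
    return (m * v**2 * t_dynamic) / (182.0 * (t_dynamic + t_load))

def life_in_cycles(v_total, e_e, wear_rate):
    # L = V / (Ee * w)
    return v_total / (e_e * wear_rate)

Ee = energy_per_engagement(m=0.2, v=1800.0, t_dynamic=15.0, t_load=5.0)
L = life_in_cycles(v_total=3.0e4, e_e=Ee, wear_rate=1.0e-3)
print(f"energy per engagement: {Ee:.0f}")
print(f"estimated life: {L:.0f} cycles")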
Backlash
Some applications require very tight precision between all components. In these applications, even 1° of movement between the input and the output when a coupling is engaged can be a problem. This is true in many robotic applications. Sometimes design engineers will order clutches or brakes with zero backlash but then key them to the shafts, so that, although the clutch or brake itself has zero backlash, there is still minimal movement between the hub or rotor and the shaft.
Most applications, however, do not need true zero backlash and can use a spline-type connection. Some of these connections between the armature and the hub are standard splines; others are hex or square hub designs. The spline will have the best initial backlash tolerance, typically around 2°, but the spline and the other connection types can wear over time, and the tolerances will increase.
Environment and contamination
As couplings wear, they create wear particles. In some applications, such as clean rooms or food handling, this dust could be a contamination problem, so in these applications the coupling should be enclosed to prevent the particles from contaminating other surfaces around it. A more likely scenario, however, is that the coupling will be contaminated by its environment. Oil or grease should obviously be kept away from the contact surface, because they would significantly reduce the coefficient of friction, which could drastically decrease the torque and potentially cause failure. Oil mist or lubricated particles can also cause surface contamination. Sometimes paper dust or other contamination can fall in between the contact surfaces; this can also result in a loss of torque. If a known source of contamination is going to be present, many clutch manufacturers offer contamination shields that prevent material from falling in between the contact surfaces.
In clutches and brakes that have not been used in a while, rust can develop on the surfaces. But in general, this is normally not a major concern since the rust is worn off within a few cycles and there is no lasting impact on the torque.
See also
Electromagnetic clutch
Electromagnetic brake
References
External links
The Basics of Electromagnetic Clutches and Brakes
Getting A Grip on Clutch and Brake Selection
Floating Armature revs up Clutch Brake System
Inertia-calc.com Inertia calculator
Clutches
Brakes
Electromagnetic brakes and clutches
"Engineering"
] | 4,258 | [
"Electromagnetic brakes and clutches",
"Mechanical engineering"
] |
34,278,662 | https://en.wikipedia.org/wiki/HKDF | HKDF is a simple key derivation function (KDF) based on the HMAC message authentication code. It was initially proposed by its authors as a building block in various protocols and applications, as well as to discourage the proliferation of multiple KDF mechanisms. The main approach HKDF follows is the "extract-then-expand" paradigm, where the KDF logically consists of two modules: the first stage takes the input keying material and "extracts" from it a fixed-length pseudorandom key, and then the second stage "expands" this key into several additional pseudorandom keys (the output of the KDF).
It can be used, for example, to convert shared secrets exchanged via Diffie–Hellman into key material suitable for use in encryption, integrity checking or authentication.
It is formally described in RFC 5869. One of its authors also described the algorithm in a companion paper in 2010.
NIST SP800-56Cr2 specifies a parameterizable extract-then-expand scheme, noting that RFC 5869 HKDF is a version of it and citing its paper for the rationale for the recommendations' extract-and-expand mechanisms.
There are implementations of HKDF for C#, Go, Java, JavaScript, Perl, PHP, Python, Ruby, Rust, and other programming languages.
Mechanism
HKDF is the composition of two functions, HKDF-Extract and HKDF-Expand: HKDF(salt, IKM, info, length) = HKDF-Expand(HKDF-Extract(salt, IKM), info, length)
HKDF-Extract
HKDF-Extract takes "input key material" (IKM) such as a shared secret generated using Diffie-Hellman, and an optional salt, and generates a cryptographic key called the PRK ("pseudorandom key"). This acts as a "randomness extractor", taking a potentially non-uniform value of high min-entropy and generating a value indistinguishable from a uniform random value.
HKDF-Extract is the output of HMAC with the "salt" as the key and the "IKM" as the message.
HKDF-Expand
HKDF-Expand takes the PRK, some "info", and a length, and generates output of the desired length. HKDF-Expand acts as a pseudorandom function keyed on PRK. This means that multiple outputs can be generated from a single IKM value by using different values for the "info" field.
HKDF-Expand works by repeatedly calling HMAC using the PRK as the key and the "info" field as the message. The HMAC inputs are chained by prepending the previous hash block to the "info" field and appending an incrementing 8-bit counter.
Example: Python implementation
#!/usr/bin/env python3
import hashlib
import hmac
hash_function = hashlib.sha256 # RFC5869 also includes SHA-1 test vectors
def hmac_digest(key: bytes, data: bytes) -> bytes:
return hmac.new(key, data, hash_function).digest()
def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
if len(salt) == 0:
salt = bytes([0] * hash_function().digest_size)
return hmac_digest(salt, ikm)
def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
t = b""
okm = b""
i = 0
while len(okm) < length:
i += 1
t = hmac_digest(prk, t + info + bytes([i]))
okm += t
return okm[:length]
def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int) -> bytes:
prk = hkdf_extract(salt, ikm)
return hkdf_expand(prk, info, length)
okm = hkdf(
salt=bytes.fromhex("000102030405060708090a0b0c"),
ikm=bytes.fromhex("0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b"),
info=bytes.fromhex("f0f1f2f3f4f5f6f7f8f9"),
length=42,
)
assert okm == bytes.fromhex(
"3cb25f25faacd57a90434f64d0362f2a"
"2d2d0a90cf1a5a4c5db02d56ecc4c5bf"
"34007208d5b887185865"
)
# Zero-length salt
assert hkdf(
salt=b"",
ikm=bytes.fromhex("0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b"),
info=b"",
length=42,
) == bytes.fromhex(
"8da4e775a563c18f715f802a063c5a31"
"b8a11f5c5ee1879ec3454e5f3c738d2d"
"9d201395faa4b61a96c8"
)
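A hypothetical usage sketch follows, reusing the hkdf function defined above to derive two independent keys from one shared secret by varying only the "info" field, as described in the HKDF-Expand section; the labels and key lengths are illustrative choices, not part of RFC 5869.
# Hypothetical usage: independent keys from one IKM via different "info" values
shared_secret = bytes.fromhex("0b" * 32)  # e.g. a Diffie-Hellman shared secret
salt2 = bytes.fromhex("000102030405060708090a0b0c")

enc_key = hkdf(salt2, shared_secret, b"encryption", 32)      # 256-bit cipher key
mac_key = hkdf(salt2, shared_secret, b"authentication", 32)  # 256-bit MAC key

assert enc_key != mac_key  # different "info" values yield independent outputs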
References
External links
Cryptography
Key derivation functions | HKDF | [
"Mathematics",
"Engineering"
] | 1,232 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
34,280,377 | https://en.wikipedia.org/wiki/Types%20of%20press%20tools | Press tools are commonly used in hydraulic, pneumatic, and mechanical presses to produce the sheet metal components in large volumes. Generally press tools are categorized by the types of operation performed using the tool, such as blanking, piercing, bending, forming, forging, trimming etc. The press tool will also be specified as a blanking tool, piercing tool, bending tool etc.
Classification of press tools
Blanking tool
Blanking is a punching operation in which the entire periphery is cut out, and the cut-out portion required is known as the stamp or blank.
When a component is produced with one single punch and die where the entire outer profile is cut in a single stroke the tool is called a blanking tool.
Blanking is the operation of cutting flat shapes from sheet metal.
The outer area of metal remaining after a blanking operation is generally discarded as waste.
The size of the blank or product is the size of the die, and clearance is given on the punch.
It is a metal cutting operation.
In blanking, the metal obtained after cutting is not scrap if it is usable.
The size of the blank depends on the size of the die,
so the size of the die opening is equal to the blank size.
Clearance is given to the punch.
The blank is used as the final product.
Piercing tool
Piercing involves cutting clean holes with a resulting scrap slug. The operation is called die cutting and can also produce flat components, where the die, the shaped tool, is pressed into a sheet material employing a shearing action to cut holes. This method can be used to cut parts of different sizes and shapes in sheet metal, leather and many other materials.
Cut off tool
It is a shearing operation in which blanks are separated from a sheet metal strip by cutting the opposite sides of the part in sequence.
Parting off tool
This is similar to a cutoff tool, in that a discrete part is cut from a sheet or strip of metal along a desired geometric path. The difference between a cutoff and a parting is that a cutoff can be nestled perfectly on the sheet metal, due to its geometry. With cutoffs, the cutting of sheet metal can be done over one path at a time and there is practically no waste of material. With partings, the shape cannot be nestled precisely. Parting involves cutting the sheet metal along two paths simultaneously. Partings waste a certain amount of material, which can be significant.
Trimming tool
When cups and shells are drawn from flat sheet metal, the edge is left wavy and irregular, due to uneven flow of metal. Shown is a flanged shell, as well as the trimmed ring removed from around the edge. A small amount of material is removed from the side of a component in a trimming tool.
Shaving tool
Shaving removes a small amount of material around the edges of a previously blanked stamping or piercing. A straight, smooth edge is provided, and therefore shaving is frequently performed on instrument parts, watch and clock parts and the like. Shaving is accomplished in shaving tools especially designed for the purpose, and it also requires proper die clearance.
Forming tool
Forming is the operation of deforming a part to a curved profile.
Forming tools apply more complex forms to work pieces. The line of bend is curved instead of straight, and the metal is subjected to plastic flow or deformation.
Drawing tool
Drawing tools transform flat sheets of metal into cups, shells or other drawn shapes by subjecting the material to severe plastic deformation. Shown in the figure is a rather deep shell that has been drawn from a flat sheet. Drawing is an axial elongation through the application of axial force.
This type of press tool is used to perform only one particular operation and is therefore classified as a stage tool.
Progressive tool
A progressive tool differs from a stage tool in the following respect: in a progressive tool, the final component is obtained by progressing the sheet metal or strip in more than one stage. At each stage, the tool will progressively shape the component towards its final shape, with the final stage normally being cutting-off.
Compound tool
The compound tool differs from progressive and stage tools by the arrangement of the punch and die. It is an inverted tool where blanking and piercing take place in a single stage, and the blanking punch also acts as the piercing die. That means the blanking punch is on the bottom side of the tool and the piercing punches are on the top side. The burr forms on only one side.
Combination tool
In a combination tool, two or more operations such as bending and trimming will be performed simultaneously. Two or more non-cutting operations such as forming, drawing, extruding and embossing may be combined on the component with various cutting operations like blanking, piercing, broaching and cut-off; it can perform cutting and non-cutting operations in a single tool.
General press tool construction
The general press tool construction will have the following elements:
Shank: It is used to install the top half of the press tool in the slide of the press machine with proper alignment.
Top Plate: It is used to hold the top half of the press tool to the press slide. It is also called the Bolster Plate.
Punch Back Plate: This plate prevents the hardened punches from penetrating into the top plate. It is also called the Pressure Plate or Backup Plate.
Punch Holder: This plate is used to accommodate the punches of the press tool.
Punches: To perform cutting and non-cutting operations, either plain or profiled punches are used.
Die Plate: The die plate has a profile similar to the component; cutting dies usually have holes with land and angular clearance, while non-cutting dies have profiles.
Die Back Plate: This plate prevents the hardened die inserts from penetrating into the bottom plate.
Guide Pillar & Guide Bush: Used for alignment between the top and bottom halves of the press tool.
Bottom Plate: It is used to hold the bottom half of the press tool on the press bed.
Stripper Plate: It is used to strip the component off the punches.
Strip Guides: They are used to guide the strip into the press tool to perform the operation.
Cutting force in press tool
In general, cutting force can be calculated using the formula:
CF = L × T × ζmax
The cutting force will be in newtons (N),
where
L = cut length in mm (perimeter of the profile to be cut),
e.g., a 40 mm square to be cut will have a cut length of 160 mm,
T = sheet metal thickness in mm,
ζmax = maximum shear strength of the sheet metal in MPa
Stripping force
Stripping force is the force required to eject the strip from the punches, which helps the strip move forward for the next operation. The stripping force is usually 10 to 20% of the cutting force (CF).
Press force
The Press force is the cutting force added to the stripping force:
Press Force = Cutting force + Stripping force
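A short Python sketch of these calculations follows, using the worked cut length above (a 40 mm square) together with assumed values for the sheet thickness and shear strength, and a 15% stripping allowance from the 10–20% range given above.
# Hypothetical press-force calculation for a 40 mm square blank
cut_length = 4 * 40.0  # L: perimeter of the profile to be cut, mm
thickness = 2.0        # T: assumed sheet thickness, mm
shear_max = 400.0      # assumed maximum shear strength, MPa (N/mm^2)

cutting_force = cut_length * thickness * shear_max  # CF = L x T x shear, in N
stripping_force = 0.15 * cutting_force              # assumed 15% of CF
press_force = cutting_force + stripping_force

print(f"cutting force:   {cutting_force / 1000:.1f} kN")  # 128.0 kN here
print(f"stripping force: {stripping_force / 1000:.1f} kN")
print(f"press force:     {press_force / 1000:.1f} kN")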
Fits in press tools
Punch holder and punches = H7/k6
Punch and stripper = H7/k6
Guide pillar and guide bush = H7/g6
Guide bush and top plate = H7/p6
Guide pillar and bottom plate = H7/p6
Dowel and plate = H7/m6
Dowel holes = H7/m6
References
Machine tools
Presswork | Types of press tools | [
"Engineering"
] | 1,481 | [
"Machine tools",
"Industrial machinery"
] |
56,907,265 | https://en.wikipedia.org/wiki/Clarke%27s%20equation | In combustion, Clarke's equation is a third-order nonlinear partial differential equation, first derived by John Frederick Clarke in 1978. The equation describes the thermal explosion process, including both effects of constant-volume and constant-pressure processes, as well as the effects of adiabatic and isothermal sound speeds. The equation reads as
or, alternatively
where θ is the non-dimensional temperature perturbation, γ is the specific heat ratio and δ is the relevant Damköhler number. One reaction term describes the thermal explosion at constant pressure and the other describes the thermal explosion at constant volume. Similarly, one wave operator describes wave propagation at the adiabatic sound speed and the other describes wave propagation at the isothermal sound speed. Molecular transports are neglected in the derivation.
It may appear that the parameter δ can be removed from the equation by a suitable transformation of variables; it is, however, retained here since δ may also appear in the initial and boundary conditions.
Example: Fast, non-diffusive ignition by deposition of a radially symmetric hot source
Suppose a radially symmetric hot source is deposited instantaneously in a reacting mixture. When the chemical time is comparable to the acoustic time, diffusion is neglected, so that ignition is characterised by heat release from the chemical energy and cooling by the expansion waves. This problem is governed by Clarke's equation with θ = (T − Tm)E/(RTm²), where Tm is the maximum initial temperature, T is the temperature and RTm²/E is the Frank-Kamenetskii temperature (R is the gas constant and E is the activation energy). Furthermore, let r denote the distance from the center, measured in units of the initial hot-core size, and t the time, measured in units of the acoustic time. In this case, the initial and boundary conditions are given by
where j = 0, 1 and 2 correspond, respectively, to the planar, cylindrical and spherical problems. Let us define a new variable
which is the increment of θ from its distant value. Then, at small times, the asymptotic solution is given by
As time progresses, a steady state is approached when δ < δc, and a thermal explosion is found to occur when δ > δc, where δc is the Frank-Kamenetskii parameter, whose value differs between the planar, cylindrical and spherical cases. For δ > δc, the solution in the first approximation is given by
which shows that the thermal explosion occurs at t = tign, where tign is the ignition time.
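The notion of an ignition time can be illustrated in the much simpler spatially homogeneous, constant-volume limit, where the perturbation obeys the classical Frank-Kamenetskii induction equation dθ/dt = δe^θ with exact blow-up time tign = e^(−θ₀)/δ; the following Python sketch integrates this reduced model (an assumed analogue, not Clarke's full equation) and recovers the blow-up numerically.
import math

# Reduced model: d(theta)/dt = delta * exp(theta).  The exact solution
# theta(t) = theta0 - ln(1 - delta * exp(theta0) * t) blows up at
# t_ign = exp(-theta0) / delta.
delta, theta0 = 1.0, 0.0  # assumed Damkohler number and initial perturbation
t_ign_exact = math.exp(-theta0) / delta

theta, t, dt = theta0, 0.0, 1.0e-6
while theta < 25.0:  # stop once theta is effectively unbounded
    theta += dt * delta * math.exp(theta)
    t += dt

print(f"exact ignition time:    {t_ign_exact:.4f}")
print(f"numerical blow-up near: {t:.4f}")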
Generalised form
For a generalised form of the reaction term, one may write
where f(θ) is an arbitrary function representing the reaction term.
See also
Frank-Kamenetskii theory
References
Partial differential equations
Fluid dynamics
Combustion | Clarke's equation | [
"Chemistry",
"Engineering"
] | 504 | [
"Piping",
"Chemical engineering",
"Combustion",
"Fluid dynamics"
] |
56,912,080 | https://en.wikipedia.org/wiki/Soil%20moisture%20velocity%20equation | The soil moisture velocity equation describes the speed that water moves vertically through unsaturated soil under the combined actions of gravity and capillarity, a process known as infiltration. The equation is alternative form of the Richardson/Richards' equation. The key difference being that the dependent variable is the position of the wetting front , which is a function of time, the water content and media properties. The soil moisture velocity equation consists of two terms. The first "advection-like" term was developed to simulate surface infiltration and was extended to the water table, which was verified using data collected in a column experimental that was patterned after the famous experiment by Childs & Poulovassilis (1962) and against exact solutions.
Soil moisture velocity equation
The soil moisture velocity equation or SMVE is a Lagrangian reinterpretation of the Eulerian Richards' equation, wherein the dependent variable is the position z of a wetting front of a particular moisture content as a function of time.
where:
z is the vertical coordinate [L] (positive downward),
θ is the water content of the soil at a point [-],
K(θ) is the unsaturated hydraulic conductivity [L T−1],
ψ(θ) is the capillary pressure head [L],
D(θ) is the soil water diffusivity, defined as D(θ) = K(θ) dψ/dθ [L2 T−1],
t is time [T].
The first term on the right-hand side of the SMVE is called the "advection-like" term, while the second term is called the "diffusion-like" term. The advection-like term of the Soil Moisture Velocity Equation is particularly useful for calculating the advance of wetting fronts for a liquid invading an unsaturated porous medium under the combined action of gravity and capillarity, because it is convertible to an ordinary differential equation by neglecting the diffusion-like term, and it avoids the problem of the representative elementary volume by use of a fine water-content discretization and solution method.
This equation was converted into a set of three ordinary differential equations (ODEs) using the method of lines to convert the partial derivatives on the right-hand side of the equation into appropriate finite difference forms. These three ODEs represent the dynamics of infiltrating water, falling slugs, and capillary groundwater, respectively.
Derivation
This derivation of the 1-D soil moisture velocity equation for calculating vertical flux of water in the vadose zone starts with conservation of mass for an unsaturated porous medium without sources or sinks:
We next insert the unsaturated Buckingham–Darcy flux:
yielding Richards' equation in mixed form, because it includes both the water content θ and the capillary head ψ:
.
Applying the chain rule of differentiation to the right-hand side of Richards' equation:
.
Assuming that the constitutive relations for unsaturated hydraulic conductivity and soil capillarity are solely functions of the water content, K = K(θ) and ψ = ψ(θ), respectively:
.
This equation implicitly defines a function z(θ, t) that describes the position of a particular moisture content within the soil using a finite moisture-content discretization. Employing the implicit function theorem, which by the cyclic rule requires dividing both sides of this equation by ∂θ/∂z to perform the change of variable,
resulting in:
,
which can be written as:
.
Inserting the definition of the soil water diffusivity:
into the previous equation produces:
If we consider the velocity dz/dt of a particular water content θ, then we can write the equation in the form of the Soil Moisture Velocity Equation:
Physical significance
Written in moisture content form, 1-D Richards' equation is
Where D(θ) [L2/T] is 'the soil water diffusivity' as previously defined.
Note that with θ as the dependent variable, physical interpretation is difficult, because all the factors that affect the divergence of the flux are wrapped up in the soil moisture diffusivity term D(θ). However, in the SMVE, the three factors that drive flow appear in separate terms that have physical significance.
The primary assumptions used in the derivation of the Soil Moisture Velocity Equation, namely that K = K(θ) and ψ = ψ(θ), are not overly restrictive. Analytical and experimental results show that these assumptions are acceptable under most conditions in natural soils. In this case, the Soil Moisture Velocity Equation is equivalent to the 1-D Richards' equation, albeit with a change of dependent variable. This change of dependent variable is convenient because it reduces the complexity of the problem: compared to Richards' equation, which requires the calculation of the divergence of the flux, the SMVE represents a flux calculation, not a divergence calculation. The first term on the right-hand side of the SMVE represents the two scalar drivers of flow, gravity and the integrated capillarity of the wetting front. Considering just that term, the SMVE becomes:
where ∂ψ/∂z is the capillary head gradient that is driving the flux and the remaining conductivity term represents the ability of gravity to conduct flux through the soil. This term is responsible for the true advection of water through the soil under the combined influences of gravity and capillarity. As such, it is called the "advection-like" term.
Neglecting gravity and the scalar wetting front capillarity, we can consider only the second term on the right-hand side of the SMVE. In this case the Soil Moisture Velocity Equation becomes:
This term is strikingly similar to Fick's second law of diffusion. For this reason, this term is called the "diffusion-like" term of the SMVE.
This term represents the flux due to the shape of the wetting front, divided by the spatial gradient of the capillary head ∂ψ/∂z. Looking at this diffusion-like term, it is reasonable to ask when it might be negligible. The first answer is that this term will be zero when the first derivative ∂ψ/∂z is constant, because the second derivative will then equal zero. One example where this occurs is the equilibrium hydrostatic moisture profile, when ∂ψ/∂z = 1 with z defined as positive upward. This is a physically realistic result, because an equilibrium hydrostatic moisture profile is known to not produce fluxes.
Another instance when the diffusion-like term will be nearly zero is in the case of sharp wetting fronts, where the denominator of the diffusion-like term ∂ψ/∂z → ∞, causing the term to vanish. Notably, sharp wetting fronts are notoriously difficult to resolve and accurately solve with traditional numerical Richards' equation solvers.
Finally, in the case of dry soils, K(θ) tends towards zero, making the soil water diffusivity tend towards zero as well. In this case, the diffusion-like term would produce no flux.
Comparing against exact solutions of Richards' equation for infiltration into idealized soils developed by Ross & Parlange (1994) revealed that indeed, neglecting the diffusion-like term resulted in accuracy >99% in calculated cumulative infiltration. This result indicates that the advection-like term of the SMVE, converted into an ordinary differential equation using the method of lines, is an accurate ODE solution of the infiltration problem. This is consistent with the result published by Ogden et al. who found errors in simulated cumulative infiltration of 0.3% using 263 cm of tropical rainfall over an 8-month simulation to drive infiltration simulations that compared the advection-like SMVE solution against the numerical solution of Richards' equation.
Solution
The advection-like term of the SMVE can be solved using the method of lines and a finite moisture content discretization. This solution of the SMVE advection-like term replaces the 1-D Richards' equation PDE with a set of three ordinary differential equations (ODEs). These three ODEs are:
Infiltration fronts
With reference to Figure 1, water infiltrating the land surface can flow through the pore space between and . Using the method of lines to convert the SMVE advection-like term into an ODE:
Given that any ponded depth of water on the land surface is hp, the Green and Ampt (1911) assumption is employed, whereby the wetting-front capillary head plus the ponded depth, divided by the depth of the front, represents the capillary head gradient that is driving the flow in the discretization or "bin". Therefore, the finite water-content equation in the case of infiltration fronts is:
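The structure of this equation can be illustrated with the classical single-front Green and Ampt calculation, which is the one-bin analogue of the finite moisture-content method; the Python sketch below uses assumed soil properties and ponded depth, and is not the multi-bin solver described in the cited papers.
# One-bin Green and Ampt wetting front sketch; all parameter values are assumed.
Ks = 1.0e-6    # saturated hydraulic conductivity, m/s
psi_f = 0.10   # wetting-front capillary head, m
h_p = 0.01     # ponded depth on the land surface, m
d_theta = 0.3  # fillable porosity (theta_d - theta_i)

z, t, dt = 1.0e-3, 0.0, 1.0  # initial front depth (m), time (s), time step (s)
while t < 3600.0:            # integrate over one hour
    dzdt = (Ks / d_theta) * (1.0 + (psi_f + h_p) / z)  # gravity + capillary pull
    z += dzdt * dt
    t += dt

print(f"front depth after 1 h:   {z:.3f} m")
print(f"cumulative infiltration: {d_theta * z * 1000.0:.1f} mm")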
Falling slugs
After rainfall stops and all surface water infiltrates, water in bins that contain infiltration fronts detaches from the land surface. Assuming that the capillarity at the leading and trailing edges of this 'falling slug' of water is balanced, the water falls through the media at the incremental conductivity associated with the bin:
.
This approach to solving the capillary-free solution is very similar to the kinematic wave approximation.
Capillary groundwater fronts
In this case, the flux of water to the bin occurs between bins j and i. Therefore, in the context of the method of lines:
and
which yields:
Note the "-1" in parentheses, representing the fact that gravity and capillarity are acting in opposite directions. The performance of this equation was verified, using a column experiment fashioned after that by Childs and Poulovassilis (1962). Results of that validation showed that the finite water-content vadose zone flux calculation method performed comparably to the numerical solution of Richards' equation. The photo shows apparatus. Data from this column experiment are available by clicking on this hot-linked DOI. These data are useful for evaluating models of near-surface water table dynamics.
It is noteworthy that the SMVE advection-like term solved using the finite moisture-content method completely avoids the need to estimate the specific yield. Calculating the specific yield as the water table nears the land surface is made cumbersome by non-linearities. However, the SMVE solved using a finite moisture-content discretization essentially does this automatically in the case of a dynamic near-surface water table.
Notice and awards
The paper on the Soil Moisture Velocity Equation was highlighted by the editor in the issue of J. Adv. Modeling of Earth Systems in which it was first published, and it is in the public domain, so it may be freely downloaded by anyone.
The paper describing the finite moisture-content solution of the advection-like term of the Soil Moisture Velocity Equation was selected to receive the 2015 Coolest Paper Award by the early career members of the International Association of Hydrogeologists.
References
External links
YouTube video of SMVE-based solution slowed during rainfall to highlight behavior, with fixed water table at 1.0 m and evapotranspiration from a 0.5 m root zone
Hydrology
Soil physics
Partial differential equations
Ordinary differential equations | Soil moisture velocity equation | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,167 | [
"Environmental engineering",
"Hydrology",
"Applied and interdisciplinary physics",
"Soil physics"
] |
56,913,000 | https://en.wikipedia.org/wiki/Hawking%E2%80%93Page%20phase%20transition | In quantum gravity, the Hawking–Page phase transition is phase transition between AdS black holes with radiation and thermal AdS.
Stephen Hawking and Don Page (1983) showed that although AdS black holes can be in stable thermal equilibrium with radiation, they are not the preferred state below a certain critical temperature Tc. At this temperature there is a first-order phase transition: below Tc, thermal AdS becomes the dominant contribution to the partition function.
The Hawking–Page phase transition between the unstable small-black-hole phase and the stable large-black-hole phase is understood as a confinement–deconfinement phase transition in the dual conformal field theory via the AdS/CFT correspondence.
References
Black holes
Quantum gravity | Hawking–Page phase transition | [
"Physics",
"Astronomy"
] | 138 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Quantum mechanics",
"Astrophysics",
"Astronomy stubs",
"Quantum gravity",
"Astrophysics stubs",
"Stellar astronomy stubs",
"Density",
"Relativity stubs",
"Theory of relativity",
"Stellar phenomena"... |
56,916,503 | https://en.wikipedia.org/wiki/19-Nordehydroepiandrosterone | 19-Nordehydroepiandrosterone (19-nor-DHEA), or 19-nor-5-dehydroepiandrosterone (19-nor-5-DHEA), is an estrane (19-norandrostane) steroid which was never marketed. It is the combined derivative of the androgen/anabolic steroid nandrolone (19-nortestosterone) and the androgen prohormone dehydroepiandrosterone (DHEA, or more specifically 5-DHEA). Related compounds include 19-nor-5-androstenediol, bolandiol (19-nor-4-androstenediol), and bolandione (nor-4-androstenedione), which are all known orally active prohormones of nandrolone. 19-Nor-DHEA may occur as a metabolite of bolandione and related steroids.
See also
List of androgens/anabolic steroids
References
Abandoned drugs
Secondary alcohols
Anabolic–androgenic steroids
Estranes
Estrogens
Sex hormone esters and conjugates
Progestogens | 19-Nordehydroepiandrosterone | [
"Chemistry"
] | 256 | [
"Drug safety",
"Abandoned drugs"
] |
56,917,808 | https://en.wikipedia.org/wiki/C-C%20motif%20chemokine%20ligand%2027 | C-C motif chemokine ligand 27 is a protein that in humans is encoded by the CCL27 gene.
Function
This gene is one of several CC cytokine genes clustered on the p-arm of chromosome 9. Cytokines are a family of secreted proteins involved in immunoregulatory and inflammatory processes. The CC cytokines are proteins characterized by two adjacent cysteines. The protein encoded by this gene is chemotactic for skin-associated memory T lymphocytes. CCL27 is associated with homing of memory T lymphocytes to the skin, and plays a role in T cell-mediated inflammation of the skin. CCL27 is expressed in numerous tissues, including gonads, thymus, placenta and skin. It elicits its chemotactic effects by binding to the chemokine receptor CCR10. The gene for CCL27 is located on human chromosome 9. Studies of a similar murine protein indicate that these protein-receptor interactions have a pivotal role in T cell-mediated skin inflammation. [provided by RefSeq, Sep 2014].
References
Further reading
Cytokines | C-C motif chemokine ligand 27 | [
"Chemistry"
] | 239 | [
"Cytokines",
"Signal transduction"
] |
43,910,664 | https://en.wikipedia.org/wiki/Bunte%20salt | In organosulfur chemistry, a Bunte salt is an archaic name for salts with the formula RSSO3–Na+. They are also called S-alkylthiosulfates or S-arylthiosulfates. These compounds are typically derived from alkylation on the pendant sulfur of sodium thiosulfate:
RX + Na2S2O3 → Na[O3S2R] + NaX
They have been used as intermediates in the synthesis of thiols. They are also used to generate unsymmetrical disulfides:
Na[O3S2R] + NaSR' → RSSR' + Na2SO3
According to X-ray crystallography, they adopt the expected structure with a tetrahedral sulfur(VI) atom, a sulfur–sulfur single bond, and three equivalent sulfur–oxygen bonds.
See also
Thiosulfonates are organosulfur compounds with the formula RSO2S− and RSO2SR'
References
Organosulfur compounds | Bunte salt | [
"Chemistry"
] | 216 | [
"Organic compounds",
"Organosulfur compounds",
"Organic chemistry stubs"
] |
43,910,689 | https://en.wikipedia.org/wiki/Thiosulfonate | Thiosulfonates are organosulfur compounds with the formula RSO2SR'. The parent member is a colorless liquid.
Thiosulfonate esters are usually produced by oxidation of disulfides or by the nucleophilic attack of thiolates on organosulfonyl halides. The simplest thiosulfonate ester can, however, be prepared from dimethyl sulfoxide by treatment with oxalyl chloride.
Alkali metal thiosulfonates are the conjugate bases of thiosulfonic acids. They are prepared by the reaction of organosulfonyl chlorides with sources of sulfide.
Oxidation of thiosulfonates with mCPBA gives disulfones.
See also
Bunte salts are related organosulfur compounds with the formula RSSO3−
Thiosulfinate, a structurally analogous functional group in a lower oxidation state, with the formula RSS(O)R
References
Organosulfur compounds | Thiosulfonate | [
"Chemistry"
] | 196 | [
"Organic compounds",
"Organosulfur compounds",
"Organic chemistry stubs"
] |
43,912,166 | https://en.wikipedia.org/wiki/Acrylonitrile%20styrene%20acrylate | Acrylonitrile styrene acrylate (ASA), also called acrylic styrene acrylonitrile, is an amorphous thermoplastic developed as an alternative to acrylonitrile butadiene styrene (ABS), that has improved weather resistance. It is an acrylate rubber-modified styrene acrylonitrile copolymer. It is used for general prototyping in 3D printing, where its UV resistance and mechanical properties make it an excellent material for use in fused filament fabrication printers, particularly for outdoor applications. ASA is also widely used in the automotive industry.
Properties
ASA is structurally very similar to ABS. The spherical particles of slightly crosslinked acrylate rubber (instead of butadiene rubber), functioning as an impact modifier, are chemically grafted with styrene-acrylonitrile copolymer chains, and embedded in a styrene-acrylonitrile matrix. The acrylate rubber differs from the butadiene-based rubber by the absence of double bonds, which gives the material about ten times the weathering resistance and resistance to ultraviolet radiation of ABS, higher long-term heat resistance, and better chemical resistance. ASA is significantly more resistant to environmental stress cracking than ABS, especially to alcohols and many cleaning agents. n-Butyl acrylate rubber is usually used, but other esters can be encountered too, e.g. 2-ethylhexyl acrylate. ASA has a lower glass transition temperature than ABS, 100 °C vs 105 °C, providing better low-temperature properties to the material.
ASA has high outdoor weatherability; it retains gloss, color, and mechanical properties in outdoor exposure. It has good chemical and heat resistance, high gloss, good antistatic properties, and is tough and rigid. It is used in applications requiring weatherability, e.g. commercial siding, outside parts of vehicles, or outdoor furniture.
ASA is compatible with some other plastics, namely polyvinyl chloride and polycarbonate. ASA-PVC compounds are in use.
ASA can be processed by extrusion and coextrusion, thermoforming, injection molding, extrusion blow molding, and structural foam molding.
ASA is mildly hygroscopic; drying may be necessary before processing.
ASA exhibits low moulding shrinkage.
ASA can be used as an additive to other polymers, when their heat distortion (resulting in deformed parts made of the material) has to be lowered.
ASA can be coextruded with other polymers, so only the ASA layer is exposed to high temperature or weathering. ASA foils are used in in-mold decoration for forming e.g. car exterior panels.
ASA can be welded to itself or to some other plastics. Ultrasonic welding can be used to join ASA to PVC, ABS, SAN, PMMA, and some others.
ASA can be solvent-welded, using e.g. cyclohexane, 1,2-dichloroethane, methylene chloride, or 2-butanone. Such solvents can also join ASA with ABS and SAN. Solutions of ASA in these solvents can also be used as adhesives.
ASA can be glued with cyanoacrylates; uncured resin can however cause stress cracking. ASA is compatible with acrylic-based adhesives. Anaerobic adhesives perform poorly with ASA. Epoxies and neoprene adhesives can be used for bonding ASA with woods and metals.
ASA waste can be combined with sand for pavement structures. Dynamic modulus results showed that the ASA mixtures have improved high-temperature deformation resistance compared with asphalt mixtures. The ASA mixtures have excellent rutting resistance and moisture damage resistance. The tensile strength ratios of the ASA and asphalt mixtures are all larger than 0.8 and therefore satisfy the Superpave specification. The average coefficient of permeability of the ASA mixture is 6–10 times higher than that of the asphalt mixture at the same air void level. The average aggregate loss percentage of the ASA mixtures is 9.2–10.8 times higher than that of asphalt mixtures. Overall, sand and ASA plastic mixtures were found to be an adequate substitute for the asphalt mixtures typically used for road surfaces. ASA and sand can also be used in 3D printing and injection molding as a low-cost method of distributed recycling.
Compared to polycarbonate, ASA has higher resistance to environmental stress cracking, and exhibits lower yellowing in outdoor applications. Compared to polypropylene, ASA has lower moulding shrinkage (0.5% vs 1.5%), higher stiffness, impact resistance, heat distortion temperature, and weatherability.
History
In the 1960s, James A. Herbig and Ival O. Salyer of Monsanto were the first to attempt to make what would become ASA using butyl acrylate as the rubber phase. This work was then refined by Hans-Werner Otto and Hans Peter Siebel of BASF using a copolymer of butyl acrylate with butadiene for the rubber phase.
Production
ASA can be made by either a reaction process of all three monomers (styrene, acrylonitrile, acrylic ester) or a graft process, although the graft process is the typical method. A grafted acrylic ester elastomer is introduced during the copolymerization of styrene and acrylonitrile. The elastomer is introduced as a powder.
As of 2003, there were only a few large manufacturers of ASA, e.g. BASF, General Electric, Bayer, Miele, Hitachi, and LG Chem. The production process is similar to that of ABS, but it has some key differences and difficulties. The annual demand around 2003 was about 1–5% of that for ABS.
Applications
ASA/PC (polycarbonate) blends have been prepared and are commercially available.
In the fused filament fabrication 3D printing process, ASA filament is used to fabricate parts that, above all, must absorb a certain amount of impact energy without breaking. Substantial effort has been focused on optimizing 3D printing parameters, including with the Taguchi methods, to enable ASA to be used for high-end applications.
ASA with compounds of silver, rendering its surface antimicrobial by the silver's oligodynamic effect, was introduced to the market in 2008.
3D printed ASA can be used for absorbers for water desalination.
References
Organic polymers
Fused filament fabrication | Acrylonitrile styrene acrylate | [
"Chemistry"
] | 1,378 | [
"Organic compounds",
"Organic polymers"
] |
43,913,289 | https://en.wikipedia.org/wiki/Galactosaminogalactan | Galactosaminogalactan (commonly abbreviated as GAG or GG) is an exopolysaccharide composed of galactose and N-acetylgalactosamine (GalNAc). It is commonly found in the biofilm and cell wall of various fungal species. Although the sugar residues are arranged in no particular order (making the polymer a heteroglycan), the residues are all linked by α-1,4 glycosidic bonds. Galactosaminogalactan is typically extracted by ethanol precipitation from liquid culture or by alkaline treatment from the cell wall. Once extracted, galactosaminogalactan becomes highly insoluble.
In Aspergillus fumigatus, a causative agent of aspergillosis, galactosaminogalactan is required for adherence to host tissue, to mask PAMPs like β-1,3-glucans, and to mediate virulence in several animal models. While its role in pathogenesis is still being defined, galactosaminogalactan has been found in histological sections of lungs of patients with aspergillosis. Besides its role in fungal virulence, certain fractions of laboratory-purified galactosaminogalactan have been shown to induce neutrophil apoptosis and reduce inflammation.
Synthesis
Similar to other fungal cell wall polysaccharides, galactosaminogalactan is synthesized by polymerization of nucleotide sugars. Although the actual glycosyltransferase responsible for polymerization has not been reported, the synthesis of precursor nucleotide sugars has been studied. The galactose component originates from UDP-galactose and the GalNAc component originates from UDP-N-acetylgalactosamine. These nucleotide sugars are not physiologically favored and must be converted from UDP-glucose and uridine diphosphate N-acetylglucosamine (UDP-GlcNAc), respectively. The UDP-glucose 4-epimerase Uge3 is responsible for these conversions.
References
Polysaccharides
Galactose | Galactosaminogalactan | [
"Chemistry"
] | 462 | [
"Carbohydrates",
"Polysaccharides"
] |
43,920,749 | https://en.wikipedia.org/wiki/International%20Organization%20for%20Biological%20Crystallization | The International Organization for Biological Crystallization (IOBCr) is a non-profit, scientific organization for scientists who study the crystallization of biological macromolecules and develop crystallographic methodologies for their study. It was founded in 2002 to create a permanent organ for the organization of the International Conferences on the Crystallization of Biological Macromolecules (ICCBM). The ICCBM conferences are organized biennially, with venues that change regularly to maintain an international character. The objective of the IOBCr is to promote the exchange of research results and encourage practical applications of biological crystallization. It organizes and supports interdisciplinary workshops. The attendance at the ICCBM meetings includes bio-crystallographers, biochemists, physicists, and engineers. The 15th International Conference on the Crystallization of Biological Macromolecules (ICCBM15) was held in Hamburg, Germany.
ICCBM meeting locations
ICCBM17 Shanghai, China (Organisers: Zhi-Jie Liu & Da-Chuan Yin), October 29 - November 2, 2018
ICCBM16 Prague, Czech Republic (Organiser: I. Kutá Smatanová), July 2–7, 2016
ICCBM15 Hamburg, Germany, September 17–20, 2014 (Organisers: C. Betzel & J.R. Mesters)
ICCBM14 Huntsville, Alabama, September 23–28, 2012 (Organisers: J. Ng & M. Pusey)
ICCBM13 Dublin, Ireland, September 12–16, 2010 (Organiser: M. Caffrey)
ICCBM-12 Cancun, Mexico, 6–9 May 2008 (Organiser: A. Moreno)
ICCBM-11 Quebec, Canada, 16–21 August 2006 (Organisers: S.-X. Lin)
ICCBM-10 Beijing, China, 5–8 June 2004 (Organiser: Z. Rao)
ICCBM-9 Jena, Germany, 23–28 March 2002 (Organiser: R. Hilgenfeld)
ICCBM-8 May 14–19, 2000 San Destin, Florida, USA (Organisers: L. DeLucas, A. Chernov)
ICCBM-7 Granada Spain 3–8 May 1998 (Organizer: J. Garcia-Ruiz)
ICCBM-6 Hiroshima, Japan 12–17 November 1995 (Organizers: T. Ashida, H. Komatsu)
ICCBM-5 San Diego, California, USA, 8–13 August 1993 (Organizers: E.A. Stura, J. Sowadski, E. Villafranca)
ICCBM-4 Freiburg, Germany 18–24 August 1991 (Organizers: J. Stezowski and W. Littke)
ICCBM-3 Washington DC USA, 13–19 August 1989 (Organiser: K Ward)
ICCBM-2 Bischenberg, Strasbourg, France, 19–25 July 1987 (Organisers: R. Giege, A. Ducruix, J. Fontecilla-Camps)
ICCBM-1 Stanford, California, USA, 14–16 August 1985 (Organiser: R. Feigelson)
ICCBM Proceedings
ICCBM-14 Crystal Growth & Design Volume vi, Issue 10, (September 2012)
ICCBM-13 Crystal Growth & Design Volume vi, Issue 7, (September 2012)
ICCBM-12 Crystal Growth & Design Volume 8, Issue 12, pp 4193–4193 (November 2008)
ICCBM-11 Crystal Growth & Design Volume 7, Issue 11 Pages 2123–2371 (November 2007)
ICCBM-10 Acta Crystallographica D Volume 61, Part 6 (June 2005)
ICCBM-9 Acta Crystallographica D Volume 58, Part 10 (October 2002)
ICCBM-8 Journal of Crystal Growth, Volume 232, Issues 1–4, Pages 1–647 (November 2001)
ICCBM-7 Journal of Crystal Growth, Volume 196, Issue 2–4, (January 1999)
ICCBM-6 Journal of Crystal Growth, Volume 168, Issues 1–4, Pages 1–328 (June 1996)
ICCBM-5 Acta Crystallographica D Volume 50, Part 4 (July 1994)
ICCBM-4 Journal of Crystal Growth, Volume 122, Issues 1–4, Pages 1–405 (August 1992)
ICCBM-3 Journal of Crystal Growth, Volume 110, Issue 1–2, Pages 1–338 (March 1991)
ICCBM-2 Journal of Crystal Growth, Volume 90, Issue 1–3, Pages 1–374 (May 1988)
ICCBM-1 Journal of Crystal Growth, Volume 76, Issue 3, Pages 529–715 (May 1986)
References
International scientific organizations
Crystallography organizations
International organizations based in the Czech Republic
2002 establishments in the Czech Republic | International Organization for Biological Crystallization | [
"Chemistry",
"Materials_science"
] | 980 | [
"Crystallography",
"Crystallography organizations"
] |
43,921,696 | https://en.wikipedia.org/wiki/Salvius%20%28robot%29 | Salvius is an open source humanoid robot built in the United States in 2008, the first of its kind. Its name is derived from the word 'salvaged', reflecting the robot's construction with an emphasis on recycled components and materials to reduce the costs of design and construction. The robot is designed to be able to perform a wide range of tasks thanks to its humanoid body plan. The primary goal of the Salvius project is to create a robot that can function dynamically in a domestic environment.
Salvius is part of the open source movement, meaning the robot's source code is freely available for others to use, alter, extend, and learn from. Unlike many other humanoid robots, Salvius benefits from the advantages of open source software, allowing problems to be quickly addressed by a community of developers. Salvius has been used as a resource by STEM educators to enable students to learn about subjects in science and technology.
The name "Salvius" dates back to the time of the Roman Empire, however, it was chosen for this robot because of its similarity to the word "salvage". Names have been a significant part of this robot's development. Salvius is tattooed with the names of the individuals and businesses that have contributed to the project's progress.
Applications
Salvius is intended to be a resource for developers to experiment with machine learning and kinematic applications for humanoid robots. The robot is designed to allow new hardware features to be added or removed as needed using plug and play USB connections. Recent changes to the robots design have improved the robot's ability to connect to other devices so that developers can also investigate new ways that robots can interact with the Internet of Things (IoT).
Development
The robot's construction has been documented since 2010. Its creation relied on recycled materials, and any commercially available parts used on the robot were chosen with availability and affordability in mind. Hardware items such as the Raspberry Pi and Arduino microcontrollers were selected for their open source design and their support communities. The robot uses multiple Arduino microcontrollers, chosen based on the versatility and popularity of the platform across communities.
Software
The robot's computer runs Raspbian Linux and primarily uses open source software. Salvius is able to operate autonomously as well as be controlled remotely using an online interface. The robot's programming languages include Python, C (for the Arduino boards), and JavaScript. Python is the supported language of the Raspberry Pi; C is used for programming the Arduino microcontrollers with which the robot's main computer, a Raspberry Pi, communicates. Sending tasks off to other boards allows the robot to do parallel processing and distribute its workload. The star network topology prevents a failure in one of the Arduino processing nodes from damaging the robot.
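The project's exact wire protocol is not documented here, so the following is only a generic sketch of the kind of Raspberry Pi-to-Arduino serial dispatch this star topology implies; the port path and the "servo:..." command format are assumptions, not Salvius's actual API.

```python
# Hypothetical sketch: dispatching a task from the Raspberry Pi hub to an
# Arduino node over USB serial (pyserial). Port name and command syntax are
# illustrative assumptions, not the documented Salvius protocol.
import serial

def send_command(port: str, command: str) -> str:
    with serial.Serial(port, 9600, timeout=1) as link:
        link.write((command + "\n").encode("ascii"))    # hand the task to the node
        return link.readline().decode("ascii").strip()  # node's acknowledgement

if __name__ == "__main__":
    # e.g. ask one node to move a (hypothetical) arm servo to 90 degrees
    print(send_command("/dev/ttyACM0", "servo:arm_left:90"))
```

Because each node hangs off the hub on its own link, a crashed node only drops its own connection, which is the failure isolation the star topology provides.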
Salvius's API allows users to send and retrieve data. Its wireless connection allows control through a web interface, including viewing what the robot sees. Since all the software is installed on the robot, the user only needs a device with a working internet connection and a browser.
Hardware
The robot is controlled by a network of Raspberry Pi and Arduino microcontrollers. The Raspberry Pi acts as a server and runs the high-level control software. The robot uses Grove motor controllers to control its motors. Most of the robot's motors have been salvaged from other sources and reused to construct the robot.
Sensors
Sensors allow the robot to successfully interact with its environment. Sensors that have been used on the robot include touch, sound, light, ultrasonic, and PIR (passive infrared) sensors. The robot also has an Ethernet-connected IP camera which serves as its primary optical input device.
Specifications
See also
Humanoid robot
Open-source robotics
Actroid
Android
iCub
HRP-4C
REEM-B
QRIO
TOPIO
Nao
References
Robotics
Humanoid robots
Bipedal humanoid robots
Robots of the United States
Open-source robots
Androids
2008 robots | Salvius (robot) | [
"Engineering"
] | 810 | [
"Robotics",
"Automation"
] |
43,926,107 | https://en.wikipedia.org/wiki/John%20Stewart%20Bell%20Prize | The John Stewart Bell Prize for Research on Fundamental Issues in Quantum Mechanics and their Applications (short form: Bell Prize) was established in 2009, funded and managed by the University of Toronto, Centre for Quantum Information and Quantum Control (CQIQC). Named after John Stewart Bell (the physicist behind Bell's theorem, a theorem whose experimental vindication led to a Nobel Prize), it is awarded every odd-numbered year, for significant contributions relating to the foundations of quantum mechanics and to the applications of these principles – this covers, but is not limited to, quantum information theory, quantum computation, quantum foundations, quantum cryptography and quantum control. The selection committee has included Gilles Brassard, Peter Zoller, Alain Aspect, John Preskill, and Juan Ignacio Cirac Sasturain, in addition to previous winners Sandu Popescu, Michel Devoret and Nicolas Gisin.
Awarded Prizes
See also
List of physics awards
References
Physics awards
Quantum mechanics
University of Toronto | John Stewart Bell Prize | [
"Physics",
"Technology"
] | 199 | [
"Science and technology awards",
"Theoretical physics",
"Quantum mechanics",
"Physics awards"
] |
41,045,982 | https://en.wikipedia.org/wiki/Synergistic%20catalysis | Synergistic catalysis is a specialized approach to catalysis whereby at least two different catalysts act on two different substrates simultaneously to allow reaction between the two activated materials. While a single catalyst works to lower the energy of a reaction overall, synergistic catalysts work together to raise the energy level of the HOMO of one of the molecules and lower the LUMO of the other. While this concept has come to be important in developing synthetic pathways, the strategy is commonly found in biological systems as well.
Background
Synergistic catalysts have been used for a variety of reactions, especially when both substrates require some kind of significant activation, either with stoichiometric amounts of an activator or through a separate reaction beforehand. Synergistic catalysis differs from other multi-catalyst systems in that one catalyst activates one substrate while the other activates a different substrate. There are other types of multi-catalyst systems, such as double activation catalysis, where two catalysts are required to activate one substrate, and cascade catalysis, where one catalyst first transforms a substrate which is then activated by a second catalyst to react.
While this field shows particular promise in affording molecules that could not be synthesized under normal synthetic strategies, there are a few issues that need to be addressed. One such issue is self-quenching of the catalysts with each other. For example, if one of the catalysts is a Lewis acid and the other is a Lewis base, there is the possibility of forming a Lewis acid-base complex, but this can be overcome by carefully choosing the pair.
Examples
In Biology
Synergistic catalysts are very common in biological systems. The reactions occur when a molecule binds to a protein as a substrate, becomes activated, and is reacted with a coenzyme such as NADPH, which is essentially an activated hydride source. A specific example of this is the synthesis of tetrahydrofolate via the enzyme dihydrofolate reductase. Dihydrofolate reductase catalytically activates dihydrofolate by protonating the imine, while NADPH, essentially a hydride source activated by the cofactor NADP+, can then add a hydride across the imine to afford the product.
Dual Transition Metals Catalysis
Through the combination of two transition metal catalysts, synergistic catalysis has been reported to accelerate many chemical transformations, and even to induce high enantioselectivity that could not be realized by the use of individual catalysts. Sawamura et al. reported an early example of enantioselective allylic alkylation of nitriles catalyzed by a mixture of rhodium and palladium complexes. The palladium catalyst with chiral ligands alone gave a high yield, but no enantioselectivity was observed. The reaction did not proceed at all using the rhodium catalyst alone. Using both together, however, gave both a high yield and enantioselectivity for the transformation.
They used trans-chelating chiral phosphine ligands (AnisTRAP) to generate chiral transition metal complexes. In their proposed mechanism schemes, an enolate is formed from an α-cyano ester and coordinates to the rhodium catalyst, while decarboxylative and oxidative addition of allyl carbonate to the palladium catalyst forms the π-allylpalladium (II) complex. Subsequently, the enolate attacks the π-allylpalladium (II) complex enantioselectively to afford the optically active product.
Enantio- and Diastereoselective Catalysis
Besides using two transition metal catalysts, synergistic catalysis can also be carried out by utilizing one transition metal catalyst in combination with an organocatalyst. Here the synergistic α-allylation of aldehydes was accomplished by utilizing a transition metal complex in combination with a chiral amine catalyst. In 2013, Carreira and co-workers reported a highly enantio- and diastereoselective α-allylation of branched aldehydes. They used chiral primary amines and iridium catalysts complexed with chiral ligands to afford the product with two newly formed stereocenters at the α and β position.
By matching the two chiral amines and enantiomers of the chiral ligands, they were able to access all four possible stereoisomers of the product with good yields. More importantly, their catalytic system exhibits simultaneous and almost absolute control over the stereochemical configurations of both stereocenters.
References
Catalysis | Synergistic catalysis | [
"Chemistry"
] | 966 | [
"Catalysis",
"Chemical kinetics"
] |
41,047,035 | https://en.wikipedia.org/wiki/Data%20in%20use | Data in use is an information technology term referring to active data which is stored in a non-persistent digital state typically in computer random-access memory (RAM), CPU caches, or CPU registers.
In 1996, Scranton, PA data scientist Daniel Allen proposed data in use as a complement to the terms data in transit and data at rest, which together define the three states of digital data.
Alternative definitions
Data in use refers to data in computer memory. Some cloud software as a service (SaaS) providers refer to data in use as any data currently being processed by applications, as the CPU and memory are utilized.
Concerns
Because of its nature, data in use is of increasing concern to businesses, government agencies and other institutions. Data in use, or memory, can contain sensitive data including digital certificates, encryption keys, intellectual property (software algorithms, design data), and personally identifiable information. Compromising data in use enables access to encrypted data at rest and data in motion. For example, someone with access to random access memory can parse that memory to locate the encryption key for data at rest. Once they have obtained that encryption key, they can decrypt encrypted data at rest.
Threats to data in use can come in the form of cold boot attacks, malicious hardware devices, rootkits and bootkits.
Full memory encryption
Encryption, which prevents data visibility in the event of unauthorized access or theft, is commonly used to protect data in motion and data at rest, and it is increasingly recognized as an optimal method for protecting data in use.
There have been multiple projects to encrypt memory. Microsoft Xbox systems are designed to provide memory encryption and the company PrivateCore presently has a commercial software product vCage to provide attestation along with full memory encryption for x86 servers. Several papers have been published highlighting the availability of security-enhanced x86 and ARM commodity processors. In that work, an ARM Cortex-A8 processor is used as the substrate on which a full memory encryption solution is built. Process segments (for example, stack, code or heap) can be encrypted individually or in composition. This work marks the first full memory encryption implementation on a mobile general-purpose commodity processor. The system provides both confidentiality and integrity protections of code and data which are encrypted everywhere outside the CPU boundary.
For x86 systems, AMD has a Secure Memory Encryption (SME) feature introduced in 2017 with Epyc. Intel has promised to deliver its Total Memory Encryption (TME) feature in an upcoming CPU.
CPU-based key storage
Operating system kernel patches such as TRESOR and Loop-Amnesia modify the operating system so that CPU registers can be used to store encryption keys and avoid holding encryption keys in RAM. While this approach is not general purpose and does not protect all data in use, it does protect against cold boot attacks. Encryption keys are held inside the CPU rather than in RAM so that data at rest encryption keys are protected against attacks that might compromise encryption keys in memory.
Enclaves
Enclaves allow a protected region of memory, the "enclave", to be secured with encryption so that enclave data is encrypted while in RAM but available as clear text inside the CPU and CPU cache. Intel Corporation has introduced the concept of enclaves as part of its Software Guard Extensions. Intel revealed an architecture combining software and CPU hardware in technical papers published in 2013.
Cryptographic protocols
Several cryptographic tools, including secure multi-party computation and homomorphic encryption, allow for the private computation of data on untrusted systems. Data in use could be operated upon while encrypted and never exposed to the system doing the processing.
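As a sketch of how computation on encrypted data in use can work, the Paillier cryptosystem is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The toy parameters below are chosen for illustration only and are far too small to be secure.

```python
# Toy Paillier sketch (additively homomorphic). NOT secure: demo-sized primes.
import math, random

p, q = 293, 433                       # a real system would use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n); needs Python 3.9+
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # inverse of L(g^lam mod n^2) mod n

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 12, 30
c_sum = encrypt(a) * encrypt(b) % n2  # an untrusted host can compute this line
assert decrypt(c_sum) == a + b        # ...without ever seeing a or b in the clear
```

The ciphertext multiplication is the step an untrusted server could perform: the values it operates on are never exposed to the system doing the processing.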
See also
Also see Alternative Definition section of Data At Rest
Homomorphic encryption is a form of encryption that allows computation on ciphertexts.
Zero-knowledge proof is a method by which one party (the prover) can prove to another party (the verifier) that they know a value x, without conveying any information apart from the fact that they know the value x.
Secure multi-party computation is a method for parties to jointly compute a function over their inputs while keeping those inputs private.
Non-interactive zero-knowledge proof (NIZKs) are zero-knowledge proofs that require no interaction between the prover and verifier.
Format-preserving encryption (FPE), refers to encrypting in such a way that the output (the ciphertext) is in the same format as the input (the plaintext)
Blinding is a cryptography technique by which an agent can provide a service to a client in an encoded form without knowing either the real input or the real output.
Example privacy-enhancing technologies
References
Computer data
Cryptography | Data in use | [
"Mathematics",
"Technology",
"Engineering"
] | 959 | [
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Computer data",
"Data"
] |
41,047,668 | https://en.wikipedia.org/wiki/Phenol%E2%80%93chloroform%20extraction | Phenol–chloroform extraction is a liquid-liquid extraction technique in molecular biology used to separate nucleic acids from proteins and lipids.
Process
Aqueous samples, lysed cells, or homogenised tissue are mixed with equal volumes of a phenol:chloroform mixture. This mixture is then centrifuged. Because the phenol:chloroform mixture is immiscible with water, centrifugation causes two distinct phases to form: an upper aqueous phase and a lower organic phase. The aqueous phase rises to the top because it is less dense than the organic phase containing the phenol:chloroform. This difference in density is why phenol, which has only a slightly higher density than water, must be mixed with chloroform to form a mixture with a much higher density than water.
The hydrophobic lipids will partition into the lower organic phase, and the proteins will remain at the interphase between the two phases, while the nucleic acids (as well as other contaminants such as salts, sugars, etc.) remain in the upper aqueous phase. The upper aqueous phase can then be pipetted off. Care must be taken to avoid pipetting any of the organic phase or material at the interface. This procedure is often performed multiple times to increase the purity of the DNA. This procedure yields large double stranded DNA that can be used in PCR or RFLP.
If the mixture is acidic, DNA will precipitate into the organic phase while RNA remains in the aqueous phase. This is because DNA is more readily neutralized than RNA.
There are some disadvantages of this technique in forensic use. It is time-consuming and uses hazardous reagents. Also, because it is a two-step process involving transfer of reagents between tubes, it is at a greater risk of contamination.
See also
Acid guanidinium thiocyanate-phenol-chloroform extraction
Ethanol precipitation
Spin column-based nucleic acid purification
References
Molecular biology
Biochemistry methods | Phenol–chloroform extraction | [
"Chemistry",
"Biology"
] | 448 | [
"Biochemistry methods",
"Molecular biology stubs",
"Biochemistry",
"Molecular biology"
] |
41,050,914 | https://en.wikipedia.org/wiki/Gene%20therapy%20for%20epilepsy | Gene therapy is being studied for some forms of epilepsy. It relies on viral or non-viral vectors to deliver DNA or RNA to target brain areas where seizures arise, in order to prevent the development of epilepsy or to reduce the frequency and/or severity of seizures. Gene therapy has delivered promising results in early stage clinical trials for other neurological disorders such as Parkinson's disease, raising the hope that it will become a treatment for intractable epilepsy.
Overview
Epilepsy refers to a group of chronic neurological disorders that are characterized by seizures, affecting over 50 million people, or 0.4–1% of the global population. There is a basic understanding of the pathophysiology of epilepsy, especially of forms characterized by the onset of seizures from a specific area of the brain (partial-onset epilepsy). Although most patients respond to medication, approximately 20–30% do not improve with or fail to tolerate antiepileptic drugs. For such patients, surgery to remove the epileptogenic zone can be offered to a small minority, but is not feasible if the seizures arise from brain areas that are essential for language, vision, movement, or other functions. As a result, many people with epilepsy are left without any treatment options, and thus there is a strong need for the development of innovative methods for treating epilepsy.
Through the use of viral vector gene transfer, with the purpose of delivering DNA or RNA to the epileptogenic zone, several neuropeptides, ion channels, and neurotransmitter receptors have shown potential as transgenes for epilepsy treatment. Among the vectors are adenovirus and adeno-associated virus (AAV) vectors, which offer high and efficient transduction, ease of production in high volumes, a wide range of hosts, and extended gene expression. Lentiviral vectors have also shown promise.
Clinical research
Among the challenges to clinical translation of gene therapy are possible immune responses to the viral vectors and transgenes, and the possibility of insertional mutagenesis, which can be detrimental to patient safety. Scaling up from the volume needed for animal trials to that needed for effective human transfection is an area of difficulty, although it has been overcome for other diseases. With its size of less than 20 nm, AAV in part addresses these problems, allowing its passage through the extracellular space and leading to widespread transfection. Although lentivectors can integrate into the genome of the host, this may not represent a risk for treatment of neurological diseases, because adult neurons do not divide and so are less prone to insertional mutagenesis.
Viral approaches in preclinical development
In finding a method for treating epilepsy, the pathophysiology of epilepsy is considered. As the seizures that characterize epilepsy typically result from excessive and synchronous discharges of excitatory neurons, the logical goal for gene therapy treatment is to reduce excitation or enhance inhibition. Out of the viral approaches, neuropeptide transgenes being researched are somatostatin, galanin, and neuropeptide Y (NPY). However, adenosine and gamma-aminobutyric acid (GABA) and GABA receptors are gaining more momentum as well. Other transgenes being studied are potassium channels and tools for on-demand suppression of excitability (optogenetics and chemogenetics).
Adenosine
Adenosine is an inhibitory nucleoside that doubles up as a neuromodulator, aiding in the modulation of brain function. It has anti-inflammatory properties, in addition to neuroprotective and anti-epileptic properties. The most prevalent theory is that upon brain injury there is an increased expression of the adenosine kinase (ADK). The increase in adenosine kinase results in an increased metabolic rate for adenosine nucleosides. Due to the decrease in these nucleosides that possess anti-epileptic properties and the overexpression of the ADK, seizures are triggered, potentially resulting in the development of epileptogenesis. Studies have shown that ADK overexpression results from astrogliosis following a brain injury, which can lead to the development of epileptogenesis. While ADK overexpression leads to increased susceptibility to seizures, the effects can be counteracted and moderated by adenosine. Based on the properties afforded by adenosine in preventing seizures, in addition to its FDA approval in the treatment of other ailments such as tachycardia and chronic pain, adenosine is an ideal target for the development of anti-epileptic gene therapies.
Galanin
Galanin, found primarily within the central nervous system (limbic system, piriform cortex, and amygdala), plays a role in the reduction of long term potentiation (LTP), regulating consumption habits, and inhibiting seizure activity. Introduced in the 1990s by Mazarati et al., galanin has been shown to have neuroprotective and inhibitory properties. Through the use of mice deficient in GalR1 receptors, a picrotoxin-kindled model was utilized to show that galanin plays a role in modulating and preventing hilar cell loss as well as decreasing the duration of induced seizures. Subsequent studies confirmed these findings, showing prevention of hilar cell loss, a decrease in the number and duration of induced seizures, an increase in the stimulation threshold required to induce seizures, and suppression of the glutamate release that would increase susceptibility to seizure activity. Galanin expression can thus be utilized to significantly moderate and reduce seizure activity and limit seizure-related cell death.
Neuropeptide Y
Neuropeptide Y (NPY), which is found in the autonomic nervous system, helps modulate the hypothalamus, and therefore, consumption habits. Experiments have been conducted to determine the effect of NPY on animal models before and after induced seizures. To evaluate the effect prior to seizures, one study inserted vectors 8 weeks prior to kindling, showing an increase in seizure threshold. In order to evaluate the effects after epileptogenesis was present, the vectors were injected into the hippocampus of rats after seizures were induced. This resulted in a reduction of seizure activity. These studies established that NPY increased the seizure threshold in rats, arrested disease progression, and reduced seizure duration. After examining the effects of NPY on behavioral and physiological responses, it was discovered that it had no effect on LTP, learning, or memory. A protocol for NPY gene transfer is being reviewed by the FDA.
Somatostatin
Somatostatin is a neuropeptide and neuromodulator that plays a role in the regulation of hormones as well as aiding in sleep and motor activity. It is primarily found in interneurons that modulate the firing rates of pyramidal cells at a local level, providing feed-forward inhibition. In a series of studies where somatostatin was expressed in a rodent kindling model, it was concluded that somatostatin decreased the average duration of seizures, increasing its potential as an anti-seizure agent. The theory behind utilizing somatostatin is that if the somatostatin interneurons are eliminated, the feed-forward inhibition of pyramidal cells is lost. Somatostatin-containing interneurons carry the neurotransmitter GABA, which primarily hyperpolarizes target cells; this is the basis of the feed-forward inhibition. The hope of gene therapy is that by overexpressing somatostatin in specific cells and increasing the GABAergic tone, it is possible to restore balance between inhibition and excitation.
Potassium channels
Kv1.1 is a voltage-gated potassium channel encoded by the KCNA1 gene. It is widely expressed in the brain and peripheral nerves, and plays a role in controlling the excitability of neurons and the amount of neurotransmitter released from axon terminals. Successful gene therapy using lentiviral delivery of KCNA1 has been reported in a rodent model of focal motor cortex epilepsy. The treatment was well tolerated, with no detectable effect on sensorimotor coordination. Gene therapy with a modified potassium channel delivered using either a non-integrating lentivector that avoids the risk of insertional mutagenesis or an AAV has also been shown to be effective in other models of epilepsy.
Optogenetics
A potential obstacle to clinical translation of gene therapy is that viral vector-mediated manipulation of the genetic make-up of neurons is irreversible. An alternative approach is to use tools for on-demand suppression of neuronal and circuit excitability. The first such approach was to use optogenetics. Several laboratories have shown that the inhibitory light-sensitive protein halorhodopsin can suppress seizure-like discharges in vitro, as well as epileptic activity in vivo. A drawback of optogenetics is that light needs to be delivered to the area of the brain expressing the opsin. This can be achieved with laser-coupled fiber optics or light-emitting diodes, but these are invasive.
Chemogenetics
An alternative approach for on-demand control of circuit excitability that does not require light delivery to the brain is to use chemogenetics. This relies on expressing a mutated receptor in the seizure focus, which does not respond to endogenous neurotransmitters but can be activated by an exogenous drug. G-protein coupled receptors mutated in this way are called Designer Receptors Exclusively Activated by Designer Drugs (DREADDs). Success in treating epilepsy has been reported using the inhibitory DREADD hM4D(Gi), which is derived from the M4 muscarinic receptor. AAV-mediated expression of hM4D(Gi) in a rodent model of focal epilepsy on its own had no effect, but when activated by the drug clozapine N-oxide it suppressed seizures. The treatment had no detectable side effects and is, in principle, suited for clinical translation. Olanzapine has been identified as a full and potent activator of hM4D(Gi). A 'closed-loop' variant of chemogenetics to stop seizures, which avoids the need for an exogenous ligand, relies on a glutamate-gated chloride channel which inhibits neurons whenever the extracellular concentration of the excitatory neurotransmitter glutamate rises.
CRISPR
A mouse model of Dravet syndrome has been treated using a variant of CRISPR that relies on a guide RNA and a dead Cas9 (dCas9) protein to recruit transcriptional activators to the promoter region of the sodium channel gene Scn1a in interneurons.
Non-viral approaches
Magnetofection is done through the use of superparamagnetic iron oxide nanoparticles coated with polyethylenimine. Iron oxide nanoparticles are ideal for biomedical applications in the body due to their biodegradable, cationic, non-toxic, and FDA-approved nature. Under gene transfer conditions, the vectors of interest are coated with the nanoparticles, which then home in on and travel to the target of interest. Once the particle docks, the DNA is delivered to the cell via pinocytosis or endocytosis. Upon delivery, the temperature is increased slightly, lysing the iron oxide nanoparticle and releasing the DNA. Overall, the technique is useful for combatting slow vector accumulation and low vector concentration at target areas. The technique can also be tailored to the physical and biochemical properties of the vectors by modifying the characteristics of the iron oxide nanoparticles.
Future implications
The use of gene therapy in treating neurological disorders such as epilepsy has presented itself as an increasingly viable area of ongoing research, with the primary targets being somatostatin, galanin, neuropeptide Y, potassium channels, optogenetics, and chemogenetics. As the field of gene therapy continues to grow and show promising results for the treatment of epilepsy among other diseases, additional research is needed to ensure patient safety, develop alternative methods for DNA delivery, and find feasible methods for scaling up delivery volumes.
References
Epilepsy
Gene therapy | Gene therapy for epilepsy | [
"Engineering",
"Biology"
] | 2,606 | [
"Gene therapy",
"Genetic engineering"
] |
52,635,018 | https://en.wikipedia.org/wiki/UVS%20%28Juno%29 | UVS, known as the Ultraviolet Spectrograph or Ultraviolet Imaging Spectrometer, is the name of an instrument on the Juno orbiter for Jupiter. The instrument is an imaging spectrometer that observes the ultraviolet range of light, at wavelengths shorter than visible light but longer than X-rays. Specifically, it is focused on making remote observations of the aurora, detecting the emissions of gases such as hydrogen in the far-ultraviolet. UVS observes light at wavelengths from 70 nm up to 200 nm, which spans the extreme and far ultraviolet. The source of the aurora emissions of Jupiter is one of the goals of the instrument. UVS is one of many instruments on Juno, but it is in particular designed to operate in conjunction with JADE, which observes high-energy particles. With both instruments operating together, the UV emissions and high-energy particles at the same place and time can be synthesized. This supports the goal of determining the source of the Jovian magnetic field. There has been a problem understanding the Jovian aurora ever since Chandra determined that X-rays were coming not from Io's orbit, as previously thought, but from the polar regions. Every 45 minutes an X-ray hot-spot pulsates, corroborating a similar earlier detection in radio emissions by the Galileo and Cassini spacecraft. One theory is that it is related to the solar wind. The mystery is not that X-rays come from Jupiter, which had been known for decades from previous X-ray observatories, but rather why, in the Chandra observation, the pulse was coming from the north polar region.
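For a sense of the photon energies involved, the 70–200 nm passband can be converted with E = hc/λ; the snippet below is just this arithmetic with standard physical constants.

```python
# Photon energy across the UVS passband, E = h*c / lambda.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt

for wl_nm in (70, 200):
    energy_eV = h * c / (wl_nm * 1e-9) / eV
    print(f"{wl_nm} nm -> {energy_eV:.1f} eV")  # ~17.7 eV and ~6.2 eV
```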
There are two main parts to UVS: the optical section and an electronics box. It has a small reflecting telescope along with a scan mirror, and it can perform long-slit spectrography. UVS uses a Rowland circle spectrograph and a toroidal holographic grating. The detector is a micro-channel plate with a CsI photocathode as the sensor for detecting the UV light.
UVS was launched aboard the Juno spacecraft on August 5, 2011 (UTC) from Cape Canaveral, USA, as part of the New Frontiers program, and after an interplanetary journey that included a swingby of Earth, entered a polar orbit of Jupiter on July 5, 2016 (UTC).
UVS detects the following gases in the far UV:
Hydrogen (H)
Molecular hydrogen (H2)
Methane (CH4)
Acetylene (C2H2)
UVS is similar to ultraviolet spectrometers flown on New Horizons (Pluto probe), Rosetta (comet probe) and the Lunar Reconnaissance Orbiter. One of the changes is shielding to help the instrument endure Jupiter's radiation environment.
The electronics are located inside the Juno Radiation Vault, which uses titanium to protect them and other spacecraft electronics. The UVS electronics include two power supplies and data-processing circuitry. The UVS electronics box uses an Actel 8051 microcontroller.
UVS was developed at the Space Science Department at Southwest Research Institute.
UVS data in concert with JEDI observations detected electrical potentials of 400,000 electron volts (400 keV), 20–30 times higher than at Earth, driving charged particles into the polar regions of Jupiter.
There was a proposal to use Juno's UVS (and JIRAM) in collaboration with the Hubble Space Telescope instruments STIS and ACS to study Jupiter's aurora in the UV.
Observations
UVS has been used to observe the aurora of Jupiter. Since UVS is on the Juno spacecraft as it orbits Jupiter, it has been able to observe both the day-side and night-side aurora, from distances ranging from seven down to 0.05 Jupiter radii. One result is that some auroral emissions are related to the local magnetic time.
Observations with the instrument have suggested that a mechanism different from the one understood to create Earth's aurora may be occurring.
UVS observed Jupiter's moon Io, along with several other instruments. The moon's polar regions were observed, and there was evidence of a volcanic plume.
See also
Imaging spectrometer
Ultraviolet astronomy
Jovian Auroral Distributions Experiment
Cosmic Origins Spectrograph
Microwave Radiometer (Juno)
Europa Ultraviolet Spectrograph
Gravity Science
MAVEN (also has ultraviolet instrument, used at planet Mars)
Ralph (New Horizons) (Visible and near infrared imaging spectrometer on New Horizons)
Alice (spacecraft instrument) (UV imaging spectrometer on New Horizons and Rosetta space probes)
References
External links
NASA Juno Spacecraft and Instruments
In-flight characterization and calibration of the Juno-Ultraviolet Spectrograph (Juno-UVS)
Northern lights of Jupiter by UVS
Spacecraft instruments
Juno (spacecraft)
Spectrographs
Ultraviolet telescopes
Extreme ultraviolet telescopes | UVS (Juno) | [
"Physics",
"Chemistry"
] | 965 | [
"Spectrographs",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
52,637,818 | https://en.wikipedia.org/wiki/Transient%20grating%20spectroscopy |
Overview
Transient grating spectroscopy is an optical technique used to measure quasiparticle propagation. It can track changes in metallic materials as they are irradiated.
It is a pump-probe method in which short-lived standing waves are generated upon a sample surface. This is performed by combining two simultaneous pump laser beams with an angle θ between them, which creates an interference pattern on the sample, similar to the interference pattern generated by the well-known double-slit experiment. The spacing between the regions of constructive interference is given by the following equation:

Λ = λ / (2 sin(θ/2))

where Λ is the distance between the interference stripes, λ is the wavelength of the pump pulse, and θ is the angle between the two incident overlapping beams. The regions of the sample in the constructive interference fringes become thermally/vibrationally excited, and in combination with the unexcited fringes, this creates a standing wave of wavelength Λ, also known as a surface acoustic wave. The surface acoustic waves act as transient absorption or reflection gratings that can be probed with a continuous laser that is pulsed immediately after the pump beams. The probe beam is either diffracted through or reflected from the surface, depending on the nature of the sample, toward a detector. The surface acoustic wave fluctuations modulate the diffraction or reflection of the probe beam at the surface of the sample. Its intensity is monitored by the detector as a function of time. The intensity of the diffracted or reflected probe beam converges to a baseline level when no surface acoustic wave remains to modulate the probe.
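For a sense of scale, with an assumed 515 nm pump and a 10° crossing angle (example values only, not parameters from a specific experiment), the fringe spacing works out to about 3 µm:

```python
# Fringe spacing of the transient grating; wavelength and angle are assumed.
import math

wavelength = 515e-9                   # pump wavelength in metres
theta = math.radians(10)              # angle between the two pump beams
spacing = wavelength / (2 * math.sin(theta / 2))
print(f"fringe spacing = {spacing * 1e6:.2f} um")   # ~2.95 um
```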
References
Time-resolved spectroscopy | Transient grating spectroscopy | [
"Physics",
"Chemistry",
"Astronomy"
] | 329 | [
"Spectroscopy stubs",
"Spectrum (physical sciences)",
"Astronomy stubs",
"Time-resolved spectroscopy",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
52,642,074 | https://en.wikipedia.org/wiki/Basis%20expansion%20time-frequency%20analysis | Linear expansions in a single basis, whether it is a Fourier series, a wavelet basis, or any other basis, are often not sufficient. A Fourier basis provides a poor representation of functions well localized in time, and wavelet bases are not well adapted to represent functions whose Fourier transforms have a narrow high-frequency support. In both cases, it is difficult to detect and identify the signal patterns from their expansion coefficients, because the information is diluted across the whole basis. Therefore, large numbers of Fourier basis functions or wavelets are needed to represent a whole signal with small approximation error. Matching pursuit algorithms have been proposed in the reference papers to minimize the approximation error for a given number of basis functions.
Properties
For Fourier series
Some time-frequency analysis methods likewise attempt to represent a signal as a weighted sum of atoms drawn from a dictionary,

x(t) ≈ Σ_{m=1}^{M} a_m φ_m(t),

and, given the number of atoms M, to minimize the approximation error ∫ |x(t) − Σ_{m=1}^{M} a_m φ_m(t)|² dt in the mean-square sense.
Examples
Three-parameter atoms
Since the atoms φ_m are not orthogonal, the coefficients should be determined by a matching pursuit process (a sketch follows the parameter list).
Three parameters:
t0 controls the central time.
f0 controls the central frequency.
σ controls the scaling factor.
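A minimal sketch of the matching pursuit process mentioned above, run over a small discretized dictionary of Gaussian-windowed sinusoids; all parameter values are illustrative:

```python
# Greedy matching pursuit over a non-orthogonal dictionary (illustrative).
import numpy as np

def matching_pursuit(x, D, M):
    """Pick M atoms from D (unit-norm columns) to approximate x."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(M):
        inner = D.T @ residual             # correlate residual with every atom
        k = int(np.argmax(np.abs(inner)))  # best-matching atom
        coeffs[k] += inner[k]
        residual -= inner[k] * D[:, k]     # remove the selected component
    return coeffs, residual

# Toy three-parameter atoms: central time t0, frequency f, scale s.
t = np.arange(64)
atoms = [np.exp(-((t - t0) / s) ** 2) * np.cos(2 * np.pi * f * t)
         for t0 in (16, 48) for f in (0.05, 0.1, 0.2) for s in (4, 8)]
D = np.array(atoms).T
D /= np.linalg.norm(D, axis=0)

x = 2.0 * D[:, 3] + 0.5 * D[:, 7]          # signal built from two atoms
coeffs, res = matching_pursuit(x, D, M=2)
print(np.nonzero(coeffs)[0], np.linalg.norm(res))
```

Because the atoms are not orthogonal the residual need not vanish after M steps, which is why the greedy selection is iterated rather than solved in a single projection.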
Four-parameter atoms (chirplet)
Four parameters:
t0 controls the central time
f0 controls the central frequency
σ controls the scaling factor
c controls the chirp rate
Short-time Fourier transform with different bases
References
S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process., vol. 41, no. 12, pp. 3397–3415, Dec. 1993.
A. Bultan, “A four-parameter atomic decomposition of chirplets,” IEEE Trans. Signal Process., vol. 47, no. 3, pp. 731–745, Mar. 1999.
C. Capus, and K. Brown. "Short-time fractional Fourier methods for the time-frequency representation of chirp signals," J. Acoust. Soc. Am. vol. 113, issue 6, pp. 3253–3263, 2003.
Jian-Jiun Ding, Time frequency analysis and wavelet transform class note, Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2016
Time–frequency analysis | Basis expansion time-frequency analysis | [
"Physics"
] | 438 | [
"Frequency-domain analysis",
"Spectrum (physical sciences)",
"Time–frequency analysis"
] |
51,121,181 | https://en.wikipedia.org/wiki/Topological%20Galois%20theory | In mathematics, topological Galois theory is a mathematical theory which originated from a topological proof of Abel's impossibility theorem found by Vladimir Arnold and concerns the applications of some topological concepts to some problems in the field of Galois theory. It connects many ideas from algebra to ideas in topology. As described in Askold Khovanskii's book: "According to this theory, the way the Riemann surface of an analytic function covers the plane of complex numbers can obstruct the representability of this function by explicit formulas. The strongest known results on the unexpressibility of functions by explicit formulas have been obtained in this way."
References
Galois theory
Topology | Topological Galois theory | [
"Physics",
"Mathematics"
] | 140 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
36,853,181 | https://en.wikipedia.org/wiki/Hydration%20energy | In chemistry, hydration energy (also hydration enthalpy) is the amount of energy released when one mole of ions undergoes solvation. Hydration energy is one component in the quantitative analysis of solvation; it is the special case in which the solvent is water. Predicting the value of hydration energies is one of the most challenging aspects of structural prediction. Upon dissolving a salt in water, the cations and anions interact with the negative and positive poles of the water dipoles. The trade-off between these interactions and those within the crystalline solid determines the energetics of dissolution.
Examples
If the hydration energy is greater than the lattice energy, then the enthalpy of solution is negative (heat is released), otherwise it is positive (heat is absorbed).
The hydration energy should not be confused with solvation energy, which is the change in Gibbs free energy (not enthalpy) as solute in the gaseous state is dissolved. If the solvation energy is positive, then the solvation process is endergonic; otherwise, it is exergonic.
For instance, water warms when treated with CaCl2 (anhydrous calcium chloride) as a consequence of the large heat of hydration. However, the hexahydrate, CaCl2·6H2O cools the water upon dissolution. The latter happens because the hydration energy does not completely overcome the lattice energy, and the remainder has to be taken from the water in order to compensate the energy loss.
The hydration energies of the gaseous Li+, Na+, and Cs+ ions are 520, 405, and 265 kJ/mol, respectively.
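As a worked illustration of the sign convention, using approximate textbook values for NaCl (assumed here only for the example):

```python
# Enthalpy of solution = lattice dissociation enthalpy + hydration enthalpy.
# Approximate textbook values for NaCl, used purely as an illustration.
lattice_dissociation = +787.0   # kJ/mol, needed to pull the lattice apart
hydration = -783.0              # kJ/mol, released on hydrating Na+ and Cl-
delta_h_solution = lattice_dissociation + hydration
print(f"dH_solution = {delta_h_solution:+.0f} kJ/mol")  # +4: slightly endothermic
```

The small positive result is why dissolving NaCl barely changes the water temperature, whereas the large exothermic imbalance for anhydrous CaCl2 warms it noticeably.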
See also
Enthalpy of solution
Heat of dilution
Hydrate
Hydrational fluid
Ionization energy
References
Physical chemistry | Hydration energy | [
"Physics",
"Chemistry"
] | 367 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
36,854,652 | https://en.wikipedia.org/wiki/Eulerian%20matroid | In matroid theory, an Eulerian matroid is a matroid whose elements can be partitioned into a collection of disjoint circuits.
Examples
In a uniform matroid U(r,n), the circuits are the sets of exactly r + 1 elements. Therefore, a uniform matroid is Eulerian if and only if r + 1 is a divisor of n. For instance, the n-point lines U(2,n) are Eulerian if and only if n is divisible by three.
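A small illustrative check of this divisibility criterion (the function names are ours):

```python
# U(r, n): circuits are the (r+1)-element subsets, so the ground set
# partitions into circuits exactly when (r + 1) divides n.
def eulerian_uniform(r: int, n: int) -> bool:
    return n % (r + 1) == 0

def circuit_partition(r: int, n: int):
    """One explicit partition of {0..n-1} into circuits of U(r, n), if any."""
    if not eulerian_uniform(r, n):
        return None
    k = r + 1
    return [list(range(i, i + k)) for i in range(0, n, k)]

print(eulerian_uniform(2, 6), circuit_partition(2, 6))  # True [[0,1,2],[3,4,5]]
print(eulerian_uniform(2, 7), circuit_partition(2, 7))  # False None
```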
The Fano plane has two kinds of circuits: sets of three collinear points, and sets of four points that do not contain any line. The three-point circuits are the complements of the four-point circuits, so it is possible to partition the seven points of the plane into two circuits, one of each kind. Thus, the Fano plane is also Eulerian.
Relation to Eulerian graphs
Eulerian matroids were defined by Welsh as a generalization of the Eulerian graphs, graphs in which every vertex has even degree. By Veblen's theorem the edges of every such graph may be partitioned into simple cycles, from which it follows that the graphic matroids of Eulerian graphs are examples of Eulerian matroids.
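Per Veblen's theorem, the partition into cycles exists exactly when every vertex degree is even, which is cheap to test on an edge list (a minimal sketch, with names of our choosing):

```python
# Veblen's condition: edges partition into simple cycles iff all degrees are
# even. Works for disconnected graphs, matching the definition used here.
from collections import Counter

def is_eulerian_graph(edges):
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

square = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a single 4-cycle
path = [(0, 1), (1, 2)]                     # endpoints have odd degree
print(is_eulerian_graph(square), is_eulerian_graph(path))  # True False
```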
The definition of an Eulerian graph above allows graphs that are disconnected, so not every such graph has an Euler tour. Wilde observes that the graphs that have Euler tours can be characterized in an alternative way that generalizes to matroids: a graph G has an Euler tour if and only if it can be formed from some other graph H, and a cycle C in H, by contracting the edges of H that do not belong to C. In the contracted graph, C generally stops being a simple cycle and becomes instead an Euler tour. Analogously, Wilde considers the matroids that can be formed from a larger matroid by contracting the elements that do not belong to some particular circuit. He shows that this property is trivial for general matroids (it implies only that each element belongs to at least one circuit) but can be used to characterize the Eulerian matroids among the binary matroids, matroids representable over GF(2):
a binary matroid is Eulerian if and only if it is the contraction of another binary matroid onto a circuit.
Duality with bipartite matroids
For planar graphs, the properties of being Eulerian and bipartite are dual: a planar graph is Eulerian if and only if its dual graph is bipartite. As Welsh showed, this duality extends to binary matroids: a binary matroid is Eulerian if and only if its dual matroid is a bipartite matroid, a matroid in which every circuit has even cardinality.
For matroids that are not binary, the duality between Eulerian and bipartite matroids may break down. For instance, the uniform matroid U(2,6) is Eulerian but its dual U(4,6) is not bipartite, as its circuits have size five. The self-dual uniform matroid U(3,6) is bipartite but not Eulerian.
Alternative characterizations
Because of the correspondence between Eulerian and bipartite matroids among the binary matroids, the binary matroids that are Eulerian may be characterized in alternative ways. The characterization by Wilde described above is one example; two more are that a binary matroid is Eulerian if and only if every element belongs to an odd number of circuits, and if and only if the whole matroid has an odd number of partitions into circuits. Later work collects several additional characterizations of Eulerian binary matroids, from which a polynomial time algorithm for testing whether a binary matroid is Eulerian is derived.
Computational complexity
Any algorithm that tests whether a given matroid is Eulerian, given access to the matroid via an independence oracle, must perform an exponential number of oracle queries, and therefore cannot take polynomial time. In particular, it is difficult to distinguish a uniform matroid U(k,2k) on a set of 2k elements, with all circuits of size k + 1, from a paving matroid that differs from the uniform matroid in having two complementary circuits of size k. The paving matroid is Eulerian but the uniform matroid is not. Any oracle algorithm, applied to the uniform matroid, must make at least one query for each of the exponentially many complementary pairs of k-element sets to verify that the input is not instead an instance of the paving matroid.
Eulerian extension
If M is a binary matroid that is not Eulerian, then it has a unique Eulerian extension, a binary matroid M′ whose elements are the elements of M together with one additional element e, such that the restriction of M′ to the elements of M is isomorphic to M. The dual of M′ is a bipartite matroid formed from the dual of M by adding e to every odd circuit.
References
Matroid theory | Eulerian matroid | [
"Mathematics"
] | 973 | [
"Matroid theory",
"Combinatorics"
] |
39,716,485 | https://en.wikipedia.org/wiki/Dynamical%20heterogeneity | Dynamical heterogeneity describes the behavior of glass-forming materials when undergoing a phase transition from the liquid state to the glassy state. In dynamical heterogeneity, the dynamics of cooling to a glassy state show variation within the material.
Polymers
Polymers may be synthetic or natural, and their properties include viscoelasticity. When a polymeric liquid is cooled below its freezing temperature without crystallizing, it becomes a supercooled liquid. When the supercooled liquid is further cooled, it becomes a glass.
The temperature at which a polymer becomes a glass by fast cooling is called the glass transition temperature Tg. At this temperature, the viscosity reaches up to 10¹³ poise, depending upon the cooling rate.
Phase transitions
It is possible for a phase transition from the polymeric liquid to the glassy state to take place. Polymer glass transitions have many determinants, including relaxation time, viscosity, and cage size. At low temperatures the dynamics become very slow (sluggish), and the relaxation time increases from picoseconds to seconds, minutes, or more. At high temperatures, the correlation function has a ballistic regime for very short times (when particles do not interact) and a microscopic regime; in the microscopic regime, the correlation functions decay exponentially. At low temperatures, the correlation functions have an intermediate regime in which particles show both slow and fast relaxations. The slow relaxation is an indication of cages in the glassy system. In the glassy state, the density is not homogeneous, i.e. particles are localized in regions of differing density, which means that density fluctuations are present in the system. Particle dynamics become very slow because temperature is directly proportional to kinetic energy, causing the particles to become trapped in local regions by their neighbours. The particles perform a rattling motion inside these regions, called cages, and cooperate with each other. In the intermediate regime, each particle has its own, different relaxation time.
The dynamics in all these cases are different, so at a small scale there are a large number of cages in the system relative to the size of the whole system. This is known as dynamical heterogeneity in the glassy state of the system. Dynamical heterogeneity can be measured by calculating correlation functions such as the non-Gaussian parameter, four-point correlation functions (dynamic susceptibility), and three-time correlation functions.
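As an illustrative sketch (not from the sources above; the random-walk data and function name are hypothetical stand-ins), the widely used three-dimensional non-Gaussian parameter α₂(t) = 3⟨Δr⁴(t)⟩/(5⟨Δr²(t)⟩²) − 1 can be estimated directly from particle displacements; it is near zero for homogeneous (Gaussian) dynamics and clearly positive when fast and slow populations coexist:

```python
import numpy as np

# Hypothetical sketch: estimate the non-Gaussian parameter alpha_2(t)
# from particle displacements over a lag time t. For a homogeneous
# (Gaussian) system alpha_2 ~ 0; dynamically heterogeneous glass formers
# show a pronounced positive peak near the structural relaxation time.
def non_gaussian_parameter(displacements):
    # displacements: (n_particles, 3) array of vector displacements over lag t
    r2 = np.sum(displacements ** 2, axis=1)   # squared displacement per particle
    return 3.0 * np.mean(r2 ** 2) / (5.0 * np.mean(r2) ** 2) - 1.0

rng = np.random.default_rng(0)
gaussian_moves = rng.normal(size=(10000, 3))   # homogeneous dynamics
print(non_gaussian_parameter(gaussian_moves))  # close to 0

# A mixture of "fast" and "slow" particles mimics heterogeneity: alpha_2 > 0
mixed = np.concatenate([rng.normal(scale=0.2, size=(5000, 3)),
                        rng.normal(scale=1.0, size=(5000, 3))])
print(non_gaussian_parameter(mixed))           # clearly positive
```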
References
Further reading
Materials science | Dynamical heterogeneity | [
"Physics",
"Materials_science",
"Engineering"
] | 495 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
39,719,118 | https://en.wikipedia.org/wiki/Argillipedoturbation | Argillipedoturbation, sometimes referred to as self-mulching, is a process of soil mixing caused by the shrinking and swelling of the smectite clays contained in soil. It is an effect specific to soils of the vertisolic variety, and is triggered by the constant cycles of wetting and drying. It is characterized by wide, deep vertical cracks in the solum that contain materials differing from the rest of the soil layer in which they are found, as well as sloughed-in surface materials. In order for argillipedoturbation to occur, the soil must have a clay content of at least 30%. The expression of argillipedoturbation depends to a large degree on the exact clay content of the soil, as well as on what other minerals make up the soil composition.
Argillipedoturbation can be strong enough to affect the soil horizons by mixing them together, making them difficult to distinguish. It can also produce a gently rolling surface referred to as gilgai topography, and the dramatic polished and grooved shear surfaces known as slickensides. In addition, argillipedoturbation sometimes results in a chernozemic-like A-type horizon, or one resembling a gleysolic order soil. This process can also affect the distribution of rock fragments, by moving fragments at the surface to lower soil layers and vice versa.
The effects of this process are useful in agriculture, as the organic surface materials fertilize the soil, making it very productive when irrigated. However, such soils are very difficult to plow and manage due to their high, thoroughly mixed clay content.
References
Soil mechanics | Argillipedoturbation | [
"Physics"
] | 351 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
39,720,117 | https://en.wikipedia.org/wiki/Collision-induced%20absorption%20and%20emission | In spectroscopy, collision-induced absorption and emission refers to spectral features generated by inelastic collisions of molecules in a gas. Such inelastic collisions (along with the absorption or emission of photons) may induce quantum transitions in the molecules, or the molecules may form transient supramolecular complexes with spectral features different from the underlying molecules. Collision-induced absorption and emission is particularly important in dense gases, such as hydrogen and helium clouds found in astronomical systems.
Collision-induced absorption and emission is distinguished from collisional broadening in spectroscopy in that collisional broadening comes from elastic collisions of molecules, whereas collision-induced absorption and emission is an inherently inelastic process.
Collision-induced spectra of gases
Ordinary spectroscopy is concerned with the spectra of single atoms or molecules. Here we outline the very different spectra of complexes consisting of two or more interacting atoms or molecules: the "interaction-induced" or "collision-induced" spectroscopy. Both ordinary and collision-induced spectra may be observed in emission and absorption and require an electric or magnetic multipole moment - in most cases an electric dipole moment - to exist for an optical transition to take place from an initial to a final quantum state of a molecule or a molecular complex. (For brevity of expression we will use here the term "molecule" interchangeably for atoms as well as molecules). A complex of interacting molecules may consist of two or more molecules in a collisional encounter, or else of a weakly bound van der Waals molecule. At first sight, it may seem strange to treat optical transitions of a collisional complex, which may exist just momentarily, for the duration of a fly-by encounter (roughly 10⁻¹³ seconds), in much the same way as this was long done for molecules in ordinary spectroscopy. But even transient complexes of molecules may be viewed as a new, "supermolecular" system which is subject to the same spectroscopic rules as ordinary molecules. Ordinary molecules may be viewed as complexes of atoms that have new and possibly quite different spectroscopic properties than the individual atoms the molecule consists of, when the atoms are not bound together as a molecule (or are not "interacting"). Similarly, complexes of interacting molecules may (and usually do) acquire new optical properties, which often are absent in the non-interacting, well separated individual molecules.
Collision-induced absorption (CIA) and emission (CIE) spectra are well known in the microwave and infrared regions of the electromagnetic spectrum, but they occur in special cases also in the visible and near ultraviolet regions. Collision-induced spectra have been observed in nearly all dense gases, and also in many liquids and solids. CIA and CIE are due to the intermolecular interactions, which generate electric dipole moments. We note that an analogous collision-induced light scattering (CILS) or Raman process also exists, which is well studied and is in many ways completely analogous to CIA and CIE. CILS arises from interaction-induced polarizability increments of molecular complexes; the excess polarizability of a complex, relative to the sum of polarizabilities of the noninteracting molecules.
Interaction-induced dipoles
Molecules interact at close range through intermolecular forces (the "van der Waals forces"), which cause minute shifts of the electron density distributions (relative to the distributions of electrons when the molecules are not interacting). Intermolecular forces are repulsive at near range, where electron exchange forces dominate the interaction, and attractive at somewhat greater separations, where the dispersion forces are active. (If separations are further increased, all intermolecular forces fall off rapidly and may be totally neglected.) Repulsion and attraction are due, respectively, to the small defects or excesses of electron densities of molecular complexes in the space between the interacting molecules, which often result in interaction-induced electric dipole moments that contribute to interaction-induced emission and absorption intensities. The resulting dipoles are referred to as exchange force-induced dipoles and dispersion force-induced dipoles, respectively.
Other dipole induction mechanisms also exist in molecular (as opposed to monatomic) gases and in mixtures of gases, when molecular gases are present. Molecules have centers of positive charge (the nuclei), which are surrounded by a cloud of electrons. Molecules thus may be thought of as being surrounded by various electric multipolar fields which will polarize any collisional partner momentarily in a fly-by encounter, generating the so-called multipole-induced dipoles. In diatomic molecules such as H2 and N2, the lowest-order multipole moment is the quadrupole, followed by a hexadecapole, etc., hence the quadrupole-induced, hexadecapole-induced, ... dipoles. The quadrupole-induced dipole is often the strongest and most significant of the induced dipoles contributing to CIA and CIE.
Other induced dipole mechanisms exist. In collisional systems involving molecules of three or more atoms (CO2, CH4, ...), collisional frame distortion may be an important induction mechanism. Collision-induced emission and absorption by simultaneous collisions of three or more particles generally do involve pairwise-additive dipole components, as well as important irreducible dipole contributions and their spectra.
Historical sketch
Collision-induced absorption was first reported in compressed oxygen gas in 1949 by Harry Welsh and associates, at frequencies of the fundamental band of the O2 molecule. (Note that an unperturbed O2 molecule, like all other homonuclear diatomic molecules, is infrared inactive on account of the inversion symmetry and thus does not possess a "dipole allowed" rotovibrational spectrum at any frequency.)
Collision-induced spectra
Molecular fly-by collisions take little time, something like 10⁻¹³ s. Optical transitions of collisional complexes of molecules generate spectral "lines" that are very broad - roughly five orders of magnitude broader than the most familiar "ordinary" spectral lines (Heisenberg's uncertainty relation). The resulting spectral "lines" usually strongly overlap, so that collision-induced spectral bands typically appear as continua (as opposed to the bands of often discernible lines of ordinary molecules).
Collision-induced spectra appear at the frequencies of the rotovibrational and electronic transition bands of the unperturbed molecules, and also at sums and differences of such transition frequencies: simultaneous transitions in two (or more) interacting molecules are well known to generate optical transitions of molecular complexes.
Virial expansions of spectral intensities
Intensities of spectra of individual atoms or molecules typically vary linearly with the numerical gas density. However, if gas densities are sufficiently increased, quite generally contributions may also be observed that vary as density squared, cubed, ... These are the collision-induced spectra of two-body (and quite possibly three-body, ...) collisional complexes. The collision-induced spectra have sometimes been separated from the continua of individual atoms and molecules, based on the characteristic density dependences. In other words, a virial expansion in terms of powers of the numerical gas density is often observable, just as is widely known for the virial expansion of the equation of state of compressed gases. The first term of the expansion, which is linear in density, represents the ideal gas (or "ordinary") spectra where these exist. (This first term vanishes for infrared-inactive gases.) The quadratic, cubic, ... terms of the virial expansions arise from optical transitions of binary, ternary, ... intermolecular complexes, which are (often unjustifiably) neglected in the ideal gas approximation of spectroscopy.
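Schematically (a notational sketch consistent with the description above, not a formula quoted from the sources), the virial expansion of the absorption coefficient α at frequency ν in powers of the gas number density ρ reads:

```latex
\alpha(\nu; \rho) = \alpha_1(\nu)\,\rho + \alpha_2(\nu)\,\rho^{2} + \alpha_3(\nu)\,\rho^{3} + \cdots
```

Here the linear term α₁ vanishes for infrared-inactive gases such as H₂ or N₂, so that the quadratic (binary collision-induced) term dominates at moderate densities.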
Spectra of van der Waals molecules
Two kinds of complexes of molecules exist: the short-lived collisional complexes discussed above, and bound (i.e. relatively stable) complexes of two or more molecules, the so-called van der Waals molecules. The latter usually exist for much longer times than the collisional complexes and, under carefully chosen experimental conditions (low temperature, moderate gas density), their rotovibrational band spectra show "sharp" (or resolvable) lines (Heisenberg uncertainty principle), much like ordinary molecules. If the parent molecules are nonpolar, the same induced dipole mechanisms discussed above are responsible for the observable spectra of van der Waals molecules.
An example of CIA spectra
Figure 1 shows an example of collision-induced absorption spectra of H2-He complexes at a variety of temperatures. The spectra were computed from the fundamental theory, using quantum chemical methods, and were shown to be in close agreement with laboratory measurements at temperatures where such measurements exist (for temperatures around 300 K and lower).
The intensity scale of the figure is highly compressed. At the lowest temperature (300 K), a series of six striking maxima is seen, with deep minima between them. The broad maxima roughly coincide with the H2 vibrational bands. With increasing temperature, the minima become less striking and disappear at the highest temperature (curve at the top, for the temperature of 9000 K).
A similar picture is to be expected for the CIA spectra of pure hydrogen gas (i.e. without admixed gases) and, in fact, for the CIA spectra of many other gases. The main difference, say if nitrogen CIA spectra are considered instead of those of hydrogen gas, would be a much closer spacing, if not a total overlapping, of the diverse CIA bands, which appear roughly at the frequencies of the vibrational bands of the N2 molecule.
Significance
The significance of CIA for astrophysics was recognized early-on, especially where dense atmospheres of mixtures of molecular hydrogen and helium gas exist.
Planets
Herzberg pointed out direct evidence of H2 molecules in the atmospheres of the outer planets. The atmospheres of the inner planets and of Saturn's big moon Titan also show significant CIA in the infrared due to concentrations of nitrogen, oxygen, carbon dioxide and other molecular gases. However, the total CIA contribution of Earth's major gases, N2 and O2, to the atmosphere's natural greenhouse effect is relatively minor except near the poles. Extrasolar planets have been discovered with hot atmospheres (a thousand kelvin or more) which otherwise resemble Jupiter's atmosphere (mixtures of mostly H2 and He) where relatively strong CIA exists.
Cool white dwarf stars
Stars that burn hydrogen are called main sequence (MS) stars - these are by far the most common objects in the night sky. When the hydrogen fuel is exhausted and temperatures begin to fall, the object undergoes various transformations and a white dwarf star is eventually born, the ember of the expired MS star. Temperatures of a new-born white dwarf may be in the hundreds of thousands of kelvin, but if the mass of the white dwarf is less than just a few solar masses, burning of 4He to 12C and 16O is not possible and the star will slowly cool down forever. The coolest white dwarfs observed have temperatures of roughly 4000 K, which suggests that the universe is not yet old enough for cooler white dwarfs to exist. The emission spectra of "cool" white dwarfs do not at all look like a Planck blackbody spectrum. Instead, nearly the whole infrared is attenuated or missing altogether from the star's emission, owing to CIA in the hydrogen-helium atmospheres surrounding their cores. The impact of CIA on the observed spectral energy distribution is well understood and accurately modeled for most cool white dwarfs. For white dwarfs with a mixed H/He atmosphere, the intensity of the H2-He CIA can be used to infer the hydrogen abundance at the white dwarf photosphere. However, predicting CIA in the atmospheres of the coolest white dwarfs is more challenging, in part because of the formation of many-body collisional complexes.
Other cool stars
The atmospheres of low-metallicity cool stars are composed primarily of hydrogen and helium. Collision-induced absorption by H2-H2 and H2-He transient complexes will be a more or less important opacity source of their atmospheres. For example, CIA in the H2 fundamental band, which falls on top of an opacity window between H2O/CH4 or H2O/CO (depending on the temperature), plays an important role in shaping brown dwarf spectra. Higher-gravity brown dwarfs often show even stronger CIA, owing to the density-squared dependence of CIA intensities, while other "ordinary" opacity sources depend linearly on density. CIA is also important in low-metallicity brown dwarfs, since "low metallicity" means reduced CNO (and other) elemental abundances compared to H2 and He, and thus stronger CIA compared to H2O, CO, and CH4 absorption. CIA absorption of H2-X collisional complexes is thus an important diagnostic of high-gravity and low-metallicity brown dwarfs. All of this is also true of the M dwarfs, but to a lesser extent: M dwarf atmospheres are hotter, so that an increased portion of the H2 molecules is in the dissociated state, which weakens CIA by H2-X complexes. The significance of CIA for cool astronomical objects was long suspected or known to some degree.
First stars
Attempts to model the formation of the "first" stars from pure hydrogen and helium gas clouds below about 10,000 K show that the heat generated in the gravitational contraction phase must somehow be radiatively released for further cooling to be possible. This is no problem as long as temperatures are still high enough that free electrons exist: electrons are efficient emitters when interacting with neutrals (bremsstrahlung). However, at the lower temperatures in neutral gases, the recombination of hydrogen atoms to H2 molecules is a process that generates enormous amounts of heat that must somehow be radiated away in CIE processes; if CIE were non-existent, molecule formation could not take place and temperatures could not fall further. Only CIE processes permit further cooling, so that molecular hydrogen can accumulate. A dense, cool environment thus develops, in which gravitational collapse and star formation can actually proceed.
Database
Because of the great importance of many types of CIA spectra in planetary and astrophysical research, a well known spectroscopy database (HITRAN) has been expanded to include a number of CIA spectra in various frequency bands and for a variety of temperatures.
References
Spectroscopy
Astrophysics | Collision-induced absorption and emission | [
"Physics",
"Chemistry",
"Astronomy"
] | 2,988 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Astrophysics",
"Spectroscopy",
"Astronomical sub-disciplines"
] |
46,230,112 | https://en.wikipedia.org/wiki/Cauchy%E2%80%93Kovalevskaya%20theorem | In mathematics, the Cauchy–Kovalevskaya theorem (also written as the Cauchy–Kowalevski theorem) is the main local existence and uniqueness theorem for analytic partial differential equations associated with Cauchy initial value problems. A special case was proven by Augustin-Louis Cauchy (1842), and the full result by Sophie Kovalevskaya (1875).
First order Cauchy–Kovalevskaya theorem
This theorem is about the existence of solutions to a system of m differential equations in n dimensions when the coefficients are analytic functions. The theorem and its proof are valid for analytic functions of either real or complex variables.
Let K denote either the field of real or complex numbers, and let V = K^m and W = K^n. Let A1, ..., An−1 be analytic functions defined on some neighbourhood of (0, 0) in W × V and taking values in the m × m matrices, and let b be an analytic function with values in V defined on the same neighbourhood. Then there is a neighbourhood of 0 in W on which the quasilinear Cauchy problem
$$\partial_{x_n} f = A_1(x, f)\, \partial_{x_1} f + \cdots + A_{n-1}(x, f)\, \partial_{x_{n-1}} f + b(x, f)$$
with initial condition
$$f(x) = 0$$
on the hypersurface
$$x_n = 0$$
has a unique analytic solution ƒ : W → V near 0.
Lewy's example shows that the theorem is not more generally valid for all smooth functions.
The theorem can also be stated in abstract (real or complex) vector spaces. Let V and W be finite-dimensional real or complex vector spaces, with n = dim W. Let A1, ..., An−1 be analytic functions with values in End (V) and b an analytic function with values in V, defined on some neighbourhood of (0, 0) in W × V. In this case, the same result holds.
Proof by analytic majorization
Both sides of the partial differential equation can be expanded as formal power series and give recurrence relations for the coefficients of the formal power series for f that uniquely determine the coefficients. The Taylor series coefficients of the Ai's and b are majorized in matrix and vector norm by a simple scalar rational analytic function. The corresponding scalar Cauchy problem involving this function instead of the Ai's and b has an explicit local analytic solution. The absolute values of its coefficients majorize the norms of those of the original problem; so the formal power series solution must converge
where the scalar solution converges.
Higher-order Cauchy–Kovalevskaya theorem
If F and fj are analytic functions near 0, then the non-linear Cauchy problem
$$\partial_t^k h = F\bigl(x, t, \partial_t^j \partial_x^\alpha h\bigr), \qquad \text{where } j < k \text{ and } |\alpha| + j \le k,$$
with initial conditions
$$\partial_t^j h(x, 0) = f_j(x), \qquad 0 \le j < k,$$
has a unique analytic solution near 0.
This follows from the first order problem by considering the derivatives of h appearing on the right hand side as components of a vector-valued function.
Example
The heat equation
$$\partial_t h = \partial_x^2 h$$
with the condition
$$h(0, x) = \frac{1}{1 + x^2}$$
has a unique formal power series solution (expanded around (0, 0)). However this formal power series does not converge for any non-zero values of t, so there are no analytic solutions in a neighborhood of the origin. This shows that the condition |α| + j ≤ k above cannot be dropped. (This example is due to Kowalevski.)
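To see the divergence concretely (a standard computation sketched here, not text from the cited sources), iterating the equation gives ∂_t^k h = ∂_x^{2k} h, so the formal solution and the growth of its coefficients are:

```latex
h(t, x) = \sum_{k \ge 0} \frac{t^k}{k!}\, \partial_x^{2k} h(0, x),
\qquad
\bigl|\partial_x^{2k} h(0, 0)\bigr| = (2k)! \,,
\qquad
\frac{(2k)!}{k!}\, |t|^k \xrightarrow[k \to \infty]{} \infty
\quad \text{for every } t \ne 0,
```

so the formal power series has zero radius of convergence in t, exactly as stated above.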
Cauchy–Kovalevskaya–Kashiwara theorem
There is a wide generalization of the Cauchy–Kovalevskaya theorem for systems of linear partial differential equations with analytic coefficients, the Cauchy–Kovalevskaya–Kashiwara theorem, due to Masaki Kashiwara. This theorem involves a cohomological formulation, presented in the language of D-modules. The existence condition involves a compatibility condition among the non-homogeneous parts of each equation and the vanishing of a derived functor.
Example
Let m ≤ n. Set Y = {x_1 = ⋯ = x_m = 0}. The system $\partial_{x_i} f = g_i$, i = 1, …, m, has a solution $f$ if and only if the compatibility conditions $\partial_{x_i} g_j = \partial_{x_j} g_i$ are verified for all i, j. In order to have a unique solution we must include an initial condition $f|_Y = h$, where $h$ is an analytic function defined on Y.
References
Reprinted in Oeuvres completes, 1 serie, Tome VII, pages 17–58.
(linear case)
(German spelling of her surname used at that time.)
External links
PlanetMath
Augustin-Louis Cauchy
Partial differential equations
Theorems in analysis
Uniqueness theorems | Cauchy–Kovalevskaya theorem | [
"Mathematics"
] | 819 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Mathematical problems",
"Uniqueness theorems"
] |
47,070,537 | https://en.wikipedia.org/wiki/K-convex%20function | K-convex functions, first introduced by Scarf, are a special weakening of the concept of convex function which is crucial in the proof of the optimality of the (s, S) policy in inventory control theory. The (s, S) policy is characterized by two numbers s and S, with s ≤ S, such that when the inventory level falls below level s, an order is issued for a quantity that brings the inventory up to level S, and nothing is ordered otherwise. Gallego and Sethi have generalized the concept of K-convexity to higher-dimensional Euclidean spaces.
Definition
Two equivalent definitions are as follows:
Definition 1 (The original definition)
Let K be a non-negative real number. A function $g: \mathbb{R} \rightarrow \mathbb{R}$ is K-convex if
$$g(u) + z \left[ \frac{g(u) - g(u - b)}{b} \right] \le g(u + z) + K$$
for any $u$, $z \ge 0$, and $b > 0$.
Definition 2 (Definition with geometric interpretation)
A function $g: \mathbb{R} \rightarrow \mathbb{R}$ is K-convex if
$$g(\lambda x + \bar{\lambda} y) \le \lambda g(x) + \bar{\lambda} \left[ g(y) + K \right]$$
for all $x \le y$ and $\lambda \in [0, 1]$, where $\bar{\lambda} = 1 - \lambda$.
This definition admits a simple geometric interpretation related to the concept of visibility. Let $a \ge 0$. A point $(x, g(x))$ is said to be visible from $(y, g(y) + a)$ if all intermediate points $(z, g(z))$, $x \le z \le y$, lie below the line segment joining these two points. Then the geometric characterization of K-convexity can be obtained as:
A function $g$ is K-convex if and only if $(x, g(x))$ is visible from $(y, g(y) + K)$ for all $y \ge x$.
Proof of Equivalence
It is sufficient to prove that the above definitions can be transformed to each other. This can be seen by using the transformation
$$\lambda = \frac{z}{b + z}, \qquad x = u - b, \qquad y = u + z,$$
so that $\bar{\lambda} = b/(b + z)$ and $\lambda x + \bar{\lambda} y = u$.
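For concreteness (a worked check of the substitution just stated, not text from the sources), Definition 2 evaluated at these points rearranges into Definition 1 after multiplying through by $(b + z)/b$:

```latex
g(u) \le \frac{z}{b+z}\, g(u-b) + \frac{b}{b+z} \left[ g(u+z) + K \right]
\;\Longleftrightarrow\;
g(u) + z \left[ \frac{g(u) - g(u-b)}{b} \right] \le g(u+z) + K .
```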
Properties
Property 1
If $g$ is K-convex, then it is L-convex for any $L \ge K$. In particular, if $g$ is convex, then it is also K-convex for any $K \ge 0$.
Property 2
If $g_1$ is K-convex and $g_2$ is L-convex, then for $\alpha, \beta \ge 0$, $\alpha g_1 + \beta g_2$ is $(\alpha K + \beta L)$-convex.
Property 3
If $g$ is K-convex and $\xi$ is a random variable such that $E|g(x - \xi)| < \infty$ for all $x$, then $E[g(x - \xi)]$ is also K-convex.
Property 4
If $g$ is K-convex, the restriction of $g$ to any convex set $D \subseteq \mathbb{R}$ is K-convex.
Property 5
If $g$ is a continuous K-convex function and $g(y) \to \infty$ as $|y| \to \infty$, then there exist scalars $s$ and $S$ with $s \le S$ such that
$g(S) \le g(y)$, for all $y \in \mathbb{R}$;
$g(S) + K = g(s) < g(y)$, for all $y < s$;
$g$ is a decreasing function on $(-\infty, s)$;
$g(y) \le g(z) + K$ for all $y$ and $z$ with $s \le y \le z$.
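Property 5 is the structural fact behind the optimality of the (s, S) ordering policy mentioned in the introduction. As a purely illustrative sketch (the cost function, grid, and variable names are hypothetical), the pair (s, S) can be located numerically for a sampled function satisfying the hypotheses:

```python
import numpy as np

# Hypothetical illustration: locate the (s, S) pair of Property 5 for a
# sampled continuous K-convex function g that grows at both infinities.
K = 5.0
y = np.linspace(-10.0, 10.0, 2001)   # grid over inventory positions
g = 0.5 * (y - 2.0) ** 2             # a convex (hence K-convex) example cost

S = y[np.argmin(g)]                  # S: a global minimizer of g
g_S = g.min()

# s: the smallest point at or below S where g(s) <= g(S) + K,
# i.e. where ordering up to S stops being worthwhile.
below = y[(y <= S) & (g <= g_S + K)]
s = below.min()

print(f"order-up-to level S = {S:.2f}, reorder point s = {s:.2f}")
```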
References
Further reading
Convex analysis
Types of functions | K-convex function | [
"Mathematics"
] | 397 | [
"Functions and mappings",
"Types of functions",
"Mathematical objects",
"Mathematical relations"
] |
47,074,767 | https://en.wikipedia.org/wiki/Lifshitz%20theory%20of%20van%20der%20Waals%20force | In condensed matter physics and physical chemistry, the Lifshitz theory of van der Waals forces, sometimes called the macroscopic theory of van der Waals forces, is a method proposed by Evgeny Mikhailovich Lifshitz in 1954 for treating van der Waals forces between bodies which does not assume pairwise additivity of the individual intermolecular forces; that is to say, the theory takes into account the influence of neighboring molecules on the interaction between every pair of molecules located in the two bodies, rather than treating each pair independently.
Need for a non-pairwise additive theory
The van der Waals force between two molecules, in this context, is the sum of the attractive or repulsive forces between them; these forces are primarily electrostatic in nature, and in their simplest form might consist of a force between two charges, two dipoles, or between a charge and a dipole. Thus, the strength of the force may often depend on the net charge, electric dipole moment, or the electric polarizability (α) (see for example London force) of the molecules, with highly polarizable molecules contributing to stronger forces, and so on.
The total force between two bodies, each consisting of many molecules in the van der Waals theory is simply the sum of the intermolecular van der Waals forces, where pairwise additivity is assumed. That is to say, the forces are summed as though each pair of molecules interacts completely independently of their surroundings (See Van der Waals forces between Macroscopic Objects for an example of such a treatment). This assumption is usually correct for gasses, but presents a problem for many condensed materials, as it is known that the molecular interactions may depend strongly on their environment and neighbors. For example, in a conductor, a point-like charge might be screened by the electrons in the conductance band, and the polarizability of a condensed material may be vastly different from that of an individual molecule. In order to correctly predict the van der Waals forces of condensed materials, a theory that takes into account their total electrostatic response is needed.
General principle
The problem of pairwise additivity is completely avoided in the Lifshitz theory, where the molecular structure is ignored and the bodies are treated as continuous media. The forces between the bodies are now derived in terms of their bulk properties, such as dielectric constant and refractive index, which already contain all the necessary information from the original molecular structure.
The original Lifshitz 1955 paper proposed this method relying on quantum field theory principles, and is, in essence, a generalization of the Casimir effect, from two parallel, flat, ideally conducting surfaces, to two surfaces of any material. Later papers by Langbein, Ninham, Parsegian and Van Kampen showed that the essential equations could be derived using much simpler theoretical techniques, an example of which is presented here.
Hamaker constant
The Lifshitz theory can be expressed as an effective Hamaker constant in the van der Waals theory.
Consider, for example, the interaction between an ion of charge $Q$ and a nonpolar molecule with polarizability $\alpha_2$ at distance $r$. In a medium with dielectric constant $\varepsilon_3$, the interaction energy between a charge and an induced electric dipole is given by
$$w(r) = -\tfrac{1}{2}\, p E = -\tfrac{1}{2}\, \alpha_2 E^2,$$
with the dipole moment of the polarizable molecule given by $p = \alpha_2 E$, where $E$ is the strength of the electric field at distance $r$ from the ion. According to Coulomb's law:
$$E = \frac{Q}{4 \pi \varepsilon_0 \varepsilon_3 r^2},$$
so we may write the interaction energy as
$$w(r) = -\frac{Q^2 \alpha_2}{2 \left( 4 \pi \varepsilon_0 \varepsilon_3 \right)^2 r^4}.$$
Consider now how the interaction energy will change if the right hand molecule is replaced with a medium of density $\rho_2$ of such molecules. According to the "classical" van der Waals theory, the total force will simply be the summation over individual molecules. Integrating over the volume of the medium (see the third figure), we might expect the total interaction energy with the charge at distance $D$ from the surface to be
$$W(D) = -\rho_2 \int \frac{Q^2 \alpha_2}{2 \left( 4 \pi \varepsilon_0 \varepsilon_3 \right)^2 r^4}\, dV = -\frac{\pi \rho_2 \alpha_2 Q^2}{2 \left( 4 \pi \varepsilon_0 \varepsilon_3 \right)^2 D}.$$
But this result cannot be correct, since it is well known that a charge $Q$ in a medium of dielectric constant $\varepsilon_3$ at a distance $D$ from the plane surface of a second medium of dielectric constant $\varepsilon_2$ experiences a force as if there were an 'image' charge of strength $Q' = -Q \left( \varepsilon_2 - \varepsilon_3 \right) / \left( \varepsilon_2 + \varepsilon_3 \right)$ at distance D on the other side of the boundary. The force between the real and image charges must then be
$$F(D) = -\frac{Q^2}{4 \pi \varepsilon_0 \varepsilon_3 (2D)^2} \left( \frac{\varepsilon_2 - \varepsilon_3}{\varepsilon_2 + \varepsilon_3} \right),$$
and the energy, therefore,
$$W(D) = -\frac{Q^2}{16 \pi \varepsilon_0 \varepsilon_3 D} \left( \frac{\varepsilon_2 - \varepsilon_3}{\varepsilon_2 + \varepsilon_3} \right).$$
Equating the two expressions for the energy, we define a new effective polarizability that must obey
$$\rho_2 \alpha_2 = 2 \varepsilon_0 \varepsilon_3 \left( \frac{\varepsilon_2 - \varepsilon_3}{\varepsilon_2 + \varepsilon_3} \right).$$
Similarly, replacing the real charge with a medium of density $\rho_1$ and polarizability $\alpha_1$ gives an expression for $\rho_1 \alpha_1$. Using these two relations, we may restate our theory in terms of an effective Hamaker constant. Specifically, using McLachlan's generalized theory of VDW forces, the Hamaker constant for an interaction potential of the form $w(r) = -C/r^6$ between two bodies at temperature $T$ is
$$A = \pi^2 C \rho_1 \rho_2 = \frac{3}{2} k T \sum_{n=0}^{\infty}{}' \, \frac{\rho_1 \alpha_1(i \nu_n)\, \rho_2 \alpha_2(i \nu_n)}{\left( 2 \varepsilon_0 \varepsilon_3(i \nu_n) \right)^2}$$
with $\nu_n = 2 \pi n k T / h$, where $k$ and $h$ are Boltzmann's and Planck's constants correspondingly (the prime indicates that the $n = 0$ term is taken with half weight). Inserting our relations for $\rho_1 \alpha_1$ and $\rho_2 \alpha_2$ and approximating the sum as an integral $\sum_n{}' \to \tfrac{1}{2} + \tfrac{h}{2 \pi k T} \int_{\nu_1}^{\infty} d\nu$, the effective Hamaker constant in the Lifshitz theory may be approximated as
$$A \approx \frac{3}{4} k T \left( \frac{\varepsilon_1 - \varepsilon_3}{\varepsilon_1 + \varepsilon_3} \right) \left( \frac{\varepsilon_2 - \varepsilon_3}{\varepsilon_2 + \varepsilon_3} \right) + \frac{3 h}{4 \pi} \int_{\nu_1}^{\infty} \left( \frac{\varepsilon_1(i \nu) - \varepsilon_3(i \nu)}{\varepsilon_1(i \nu) + \varepsilon_3(i \nu)} \right) \left( \frac{\varepsilon_2(i \nu) - \varepsilon_3(i \nu)}{\varepsilon_2(i \nu) + \varepsilon_3(i \nu)} \right) d\nu.$$
We note that the $\varepsilon(i \nu)$ are real functions of real $\nu$, and are related to measurable properties of the medium; thus, the Hamaker constant in the Lifshitz theory can be expressed in terms of observable properties of the physical system.
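As an illustration (a hypothetical sketch, not code from the sources; it uses a further textbook simplification of the integral above, in which both media share one main electronic absorption frequency ν_e and ε_i(iν) is represented through the optical refractive indices n_i; all parameter values are illustrative assumptions), the two-term approximation can be evaluated numerically:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s

def hamaker_constant(eps1, eps2, eps3, n1, n2, n3, T=300.0, nu_e=3.0e15):
    # Zero-frequency (entropic) term: (3/4) k T times the product of the
    # two (eps_i - eps3)/(eps_i + eps3) factors, using static dielectric constants.
    zero_freq = (0.75 * k_B * T
                 * (eps1 - eps3) / (eps1 + eps3)
                 * (eps2 - eps3) / (eps2 + eps3))
    # Dispersion term: closed form of the frequency integral under the
    # single-absorption-frequency assumption, with eps_i(i nu) ~ n_i**2.
    s1 = math.sqrt(n1**2 + n3**2)
    s2 = math.sqrt(n2**2 + n3**2)
    dispersion = (3.0 * h * nu_e / (8.0 * math.sqrt(2.0))
                  * (n1**2 - n3**2) * (n2**2 - n3**2)
                  / (s1 * s2 * (s1 + s2)))
    return zero_freq + dispersion

# Example: two hydrocarbon-like bodies (eps ~ 2, n ~ 1.45) interacting
# across water (eps ~ 80, n ~ 1.33); the result is of order 1e-21 to 1e-20 J.
print(hamaker_constant(2.0, 2.0, 80.0, 1.45, 1.45, 1.33))
```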
Experimental validation
The macroscopic theory of van der Waals theory has many experimental validations. Among which, some of the most notable ones are Derjaguin (1960); Derjaguin, Abrikosova and Lifshitz (1956) and Israelachvili and Tabor (1973), who measured the balance of forces between macroscopic bodies of glass, or glass and mica; Haydon and Taylor (1968), who measured the forces across bilayers by measuring their contact angle; and lastly Shih and Parsegian (1975), who investigated van der Waals potentials between heavy alkali-metal atoms and gold surfaces using atomic-beam-deflection.
References
Physical chemistry
Condensed matter physics | Lifshitz theory of van der Waals force | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,216 | [
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"nan",
"Physical chemistry",
"Matter"
] |
47,075,289 | https://en.wikipedia.org/wiki/Fecal%20sludge%20management | Fecal sludge management (FSM) (or faecal sludge management in British English) is the storage, collection, transport, treatment and safe end use or disposal of fecal sludge. Together, the collection, transport, treatment and end use of fecal sludge constitute the "value chain" or "service chain" of fecal sludge management. Fecal sludge is defined very broadly as what accumulates in onsite sanitation systems (e.g. pit latrines, septic tanks and container-based solutions) and specifically is not transported through a sewer. It is composed of human excreta, but also anything else that may go into an onsite containment technology, such as flushwater, cleansing materials (e.g. toilet paper and anal cleansing materials), menstrual hygiene products, grey water (i.e. bathing or kitchen water, including fats, oils and grease), and solid waste. Fecal sludge that is removed from septic tanks is called septage.
It is estimated that one-third of the world's population is served by onsite sanitation, and that in low-income countries less than 10% of urban areas are served by sewers. In low-income countries, the majority of fecal sludge is discharged untreated into the urban environment, placing a huge burden on public and environmental health. Hence, FSM plays a critical role in safely managed sanitation and the protection of public health. FSM services are provided by a range of formal and informal private sector services providers, local governments, water authorities, and public utilities. This can also result in unreliable services with relatively high costs at the household level.
Although new technology now allows for fecal sludge to be treated onsite (see Mobile Treatment Units below) the majority of fecal sludge is collected and either disposed of into the environment or treated offsite. Fecal sludge collection can be arranged on a scheduled basis or on a call-for-service basis (also known as on-demand, on-request, or non-scheduled services). The collected fecal sludge may be manually or mechanically emptied, and then transported to treatment plants with a vacuum truck, a tank and pump mounted on a flatbed truck, a small tank pulled by a motorcycle, or in containers on a handcart. The wider use of multiple decentralized sludge treatment facilities within cities (to avoid long haulage distances) is currently being researched and piloted.
Fecal sludge is different from wastewater and cannot simply be co-treated at sewage treatment plants. Small additions of fecal sludge are possible if plants are underutilized and able to take the additional load, and facilities to separate liquids and solids are available. A variety of mechanized and non-mechanized processing technologies may be used, including settling tanks, planted and unplanted drying beds, and waste stabilization ponds. The treatment process can produce resource recovery end-products such as treated effluent that can be used for irrigation, compost (from co-composting) as a soil conditioner, biogas from anaerobic digestion, forms of dry-combustion fuel such as pellets or biochar, charcoal, biodiesel, and sludge, as well as plants or protein production as animal fodder.
Definitions
Fecal sludge management refers to the storage, collection, transport, treatment, and safe end use or disposal of fecal sludge. Collectively, the collection, transport, treatment and end use or reuse of excreta constitute the "value chain" of fecal sludge management.
Fecal sludge
Fecal sludge is defined very broadly as what accumulates in onsite sanitation technologies and specifically is not transported through a sewer. It is composed of human excreta, but also anything else that may go into an onsite containment technology, such as flushwater, cleansing materials and menstrual hygiene products, grey water (i.e. bathing or kitchen water, including fats, oils and grease), and solid waste. Hence, fecal sludge is highly variable, with a very wide range of quantities (i.e. produced and accumulated volumes) and qualities (i.e. characteristics). Fecal sludge is stored onsite, and is periodically collected and transported to a fecal sludge treatment plant, followed by safe disposal or end use. When safely managed, fecal sludge that is collected from pit latrines can also be called "pit latrine sludge", whereas fecal sludge collected from septic tanks can also be called "septic tank sludge" or "septage".
Septage
Septage or "septic tank sludge" is fecal sludge that is accumulated and stored in a septic tank. Septage tends to be more dilute, as septic tanks are typically used with flush toilets (blackwater) and can also include grey water. Septic tanks also tend to have less solid waste, as they only receive things that can be flushed down a toilet (e.g. toilet paper). When operating as designed, a sludge blanket layer accumulates on the bottom of the tank, a scum layer that contains fats, oil and grease accumulates at the top, and the effluent or supernatant contains less solids.
Septage is periodically removed (with a frequency depending on tank capacity, system efficiency, and usage level, but typically less often than annually) from the septic tanks by specialized vehicles known as vacuum trucks. They pump the septage out of the tank, and transport it to a local fecal sludge treatment plant. It can also be used by farmers for fertilizer, or stored in large septage waste storage facilities for later treatment or use on crops.
The term "septage" has been used in the United States since at least 1992. It has also been used in projects by the United States Agency for International Development in Asia. Another definition of septage is: "A historical term to define sludge removed from septic tanks."
In India some government policy documents are using the term FSSM for "Fecal sludge and septage management".
Purposes and benefits
The overall goal of FSM is the protection of public and environmental health. FSM forms a key component of city-wide inclusive sanitation (CWIS), which considers all types of sanitation technologies in order to provide equitable, safe, and sustainable sanitation for everyone. CWIS employs a service delivery approach along the entire service chain, rather than just infrastructure provision.
Adequately and safely managed fecal sludge has the following benefits:
Reduce the potential for human contact with fecal-borne pathogens by improving the functioning of onsite sanitation systems;
Minimize odors and nuisances, and the uncontrolled discharge of organic matter from overflowing tanks or pits;
Reduce indiscriminate disposal of collected fecal sludge;
Production and sale of the end-products of the sludge treatment process. These products may include recycled water for agriculture and industry, soil conditioners from composting or co-composting materials, and energy products such as biogas, biodiesel, charcoal pellets, industrial powdered fuel, or electricity.
Stimulate economic development, and job creation and livelihood opportunities, while addressing the issues of the social stigma and operator health and safety that continue to impact informal workers. This can also include jobs for contractors and equipment installers; for sanitation workers such as sludge collection personnel including drivers and emptiers; and for treatment and reuse systems operators.
Developments in the sector
Since the wider recognition of the importance of sanitation, marked by the UN declaring 2008 as the 'Year of Sanitation', there has been a steady increase in commitment, uptake, implementation, and knowledge generation in non-sewered sanitation. The incorporation of the entire sanitation management service chain in the Sustainable Development Goal (SDG) 6, as opposed to just providing access to toilets, has further established acknowledgement of the importance of FSM. The SDGs were launched in 2015, and SDG 6 calls for clean water and sanitation for all by 2030. There has also been an increase in the incorporation of fecal sludge management in national regulations and development agency agendas, increased funding from foundations and governments, and implementation of infrastructure and service provision.
There has been a rapid increase in evidence-based research and journal publications on the topic (e.g. for Africa and Asia). There are rapidly evolving technology developments along the entire service chain. Some have the potential to alter the existing service chain, such as container-based sanitation, decentralized options, and innovations developed through the Bill & Melinda Gates Foundation 'Reinvent the Toilet Challenge' since at least 2012.
Curriculums have been, and are continuing to be, developed and implemented. Initiatives include the Global Sanitation Graduate School, and freely available online courses, such as the Sandec MOOC series.
Challenges
In many LMICs, fecal sludge is still not properly managed. This may be due to a lack of mandated institutions and low awareness of the impact of poor sanitation; a lack of technical expertise and experience; an inability to source funds to purchase vacuum trucks and build treatment capacity; and a lack of the knowledge necessary to initiate and implement successful FSM programs. Another factor is that transporting fecal sludge has a real cost to vacuum truck operators, so there is an incentive to dispose of the untreated waste into the environment (primarily into waterways, but also directly onto land). Failure to properly manage fecal sludge can result in the poor performance of onsite sanitation facilities (OSSFs), fecal sludge overflowing from containments, and the unsafe emptying and dumping of untreated fecal sludge into the environment.
Fecal sludge contains pathogens, can generate odors and cause surface water pollution, as well as groundwater pollution.
Components
Fecal sludge management (FSM) requires safe and hygienic septic tank and pit latrine emptying services, along with the effective treatment of solids and liquids and the reuse of treated produce where possible. It may include a range of options including on-site and offsite treatment, and the dispersal or capture and further processing of the products of the treatment process into such as biogas, compost and energy.
By type of dwellings
Cities
FSM is a critical sanitation service in cities and towns in all countries where households use onsite sanitation systems. Citywide FSM programs may utilize multiple or one treatment facility, use stationary and mobile transfer stations, and engage with micro, small and medium-sized enterprises that may conduct some or all of the services. Programs may be phased in over time to accommodate growing demand.
Peri urban areas
Peri urban areas are often less densely populated than urban centers. Therefore, they have more space and on-site sanitation systems can be effective for solid and liquid treatment. In most such peri-urban areas, it is less likely that they will be connected to a conventional centralized sewerage system in the short or medium term. Therefore, these areas will rely on a mix of onsite-sanitation systems and services, decentralized wastewater management systems, or by condominial or simplified sewerage connected to decentralized or centralized treatment. In all of these situations, FSM is a necessary service to keep the sanitation systems functioning properly.
Rural areas
Rural areas with low population density may not need formal FSM services if the local practice is to cover and rebuild latrines when they fill up. However, if this is not possible, rural areas often lack treatment facilities within a reasonable (say 30 minutes') drive, are difficult for tankers to access, and often have limited demand for emptying, making transport and treatment uneconomic and unaffordable for most people. Therefore, options such as relocating latrines on-site, or double (alternating) pit or Arborloo toilets, could be considered. Sharing decentralized FSM services and sludge treatment between nearby villages, or the direct safe burial of waste, could also be considered and organized.
Alternatives to fecal sludge producing systems
Most types of dry toilets (except for pit latrines) do not generate fecal sludge but generate instead dried feces (in the case of urine-diverting dry toilets) or compost (in the case of composting toilets). For example, in the case of Arborloo toilets, nothing is ever extracted from the pit and, instead, the lightweight outhouse superstructure is moved to another shallow hole and a tree is planted on top of the filled hole.
Management aspects
Selecting the operator of FSM services
FSM services are usually provided by formal and informal private sector service providers, local governments, water authorities and utilities. Water utilities with a high percentage of water connectivity (homes with piped water connections) are logical operators of FSM programs. If water is sold to customers through a tariff, an additional tariff to cover FSM services may be added. For larger cities, it is usually the water and sewerage service provider that will be the most appropriate operator.
Local governments may choose to provide services by using their own staff and resources for collection, transportation and treatment. This is often the case in smaller cities or municipalities where the water utility may not have a broad reach. In many cases, cooperation between the city government and the water utility may be strategically advantageous. Dumaguete City, Philippines, is one example where the Water District (utility) and Local Government have joint ownership and responsibilities for the FSM program. Organized larger scale FSM programs may be able to provide the service more cheaply and more hygienically than the independent private operators working on an ad hoc basis. Ensuring services are affordable is an important selling point when promoting the program to citizens and encouraging them to participate.
The local private sector is an important player in providing FSM services. In such cases, private sector contractors may work directly for households (under regulation) or bid on desludging contracts let by the city. The private sector can also provide services in operating and maintaining the treatment works, and in processing and selling the commodities resulting from the treatment process. San Fernando City, La Union, Philippines is an example of a local government that has contracted out the treatment facility construction and collection program to the private sector.
Scheduled desludging programs
Scheduled desludging is a planned effort by the local government or utility to ensure regular desludging of septic tanks. In this process, every property is covered along a defined route and the property occupiers are informed in advance about desludging that will take place. The actual desludging (or emptying of septic tanks) can be done through a public private partnership (PPP) arrangement.
In Southeast Asia, there is (in 2016) increasing interest in scheduled desludging programs as a means of providing services. A WSP study recommended that efforts to introduce scheduled emptying should focus first on areas where demand was greatest, moving on to other areas when the success of scheduled emptying had been demonstrated in these areas. Analysis of pit and tank desludging records for Palu in Indonesia revealed that existing demand for desludging services varied between sub-districts, with demand being greatest in well-established areas and least in urban fringe areas.
Scheduled desludging offers multiple benefits in the Indian context: it achieves regulatory norms through regular desludging, reduces the high prices of desludging, removes the need for manual labor, improves environmental and public health outcomes, and can be linked with local taxes rather than with user charges. Scheduled desludging has been initiated in several Asian countries including the Philippines, Malaysia, Vietnam, Indonesia, and India. A program by SNV (Netherlands Development Organisation) developed scheduled emptying services in Indonesia, Nepal and Bangladesh as part of a broader urban sanitation program during 2014–2017.
Elements of successful programs
FSM services can be provided as demand based (often called on-request, on-call, on-demand, ad-hoc or non-scheduled) or scheduled (also known as regular) desludging, or a combination of both. Under either mechanism, OSSFs are desludged on a periodic basis or when the household requests it or due to inspection by a competent authority indicates desludging is needed.
An analysis of 20 FSM Innovation Case Studies and research and advocacy of successful programs carried out by Oxfam Philippines has demonstrated that common elements for successful FSM programs include:
Well formulated and practical policy, rules and regulation: While these are essential they are almost useless, even counterproductive, on their own, and must be supported by complementary factors such as those below;
Local leadership and clearly mandated and resourced institutions to manage services, even where actual services are delivered by the private sector;
Partnerships between stakeholders contributes to developing services at scale, building community confidence and achieving sustainability;
A sustained program of community engagement, marketing and awareness raising is as essential to FSM as sludge treatment – but is frequently under-valued, under-budgeted and sometimes abandoned after an initial period;
Capacity-building for FSM service providers helps ensure that they can effectively meet all segments of demand and achieve long-term viability. This may include training in both technical matters and business management, and the facilitation of capital formation through grants, equipment leasing, loan guarantees and other financial instruments;
Tariffs that are pro-poor and representative of operational costs for providing the service;
Technology that is appropriate to the capacity to operate and maintain the system and the realities of the value chain.
Sanitation workers
Sanitation workers are the people responsible for cleaning, maintaining, operating, or emptying a sanitation technology at any step of the sanitation chain. These workers contribute to safe fecal sludge management.
Transport options
Collection vehicles and equipment
If the fecal sludge is liquid enough, it is usually collected by using vacuum pumps or centrifugal style booster pumps. A variety of manual and motorized devices designed to excavate thick and viscous sludge and accumulated trash are also available in the market.
After sitting for years in septic tanks and pit latrines, the accumulated sludge becomes hardened and is very difficult to remove. It is still common that workers enter pits in order to desludge them, even though this practice is generally unsafe and undesirable (in India, this practice is called "manual scavenging"). A number of low-cost pumping systems exist to remove this hardened sludge hygienically from the ground surface, although many of them are still in the experimental stage (e.g. Excravator, Gulper, e-Vac).
Fecal sludge can also be treated inside the tank or pit as well, by use of the "in-pit lime stabilization process", which treats the waste before it is removed from the tank or pit. Once removed, it is transported to onsite or off site treatment and processing facilities.
Some advanced transfer stations and vacuum trucks can dewater fecal sludge to some extent, and this water may be placed in sewer lines to be treated in wastewater treatment plants. This allows more sludge to be dealt with more efficiently and may constitute one of the best cases of co-treatment of fecal sludge in wastewater treatment plants.
Transfer stations
Transfer stations are intermediary drop off locations often used where treatment facilities are located too far away from population centers to make direct disposal feasible. In other locations, traffic concerns or local truck bans during daylight hours may make transfer stations feasible. In addition, municipalities where a significant percentage of homes cannot be accessed by tanker truck should utilize transfer stations. Transfer stations are used if (a simple decision sketch follows this list):
More than 5% of the homes are inaccessible by a vacuum truck;
The treatment plant is too far away from the homes for transport in one haul to be practical;
Trucks are not permitted on the streets during the day; or
Heavy traffic during daylight hours impedes the movement of vacuum trucks.
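A purely illustrative encoding of the four criteria above (the function and parameter names are hypothetical, not from any cited guideline):

```python
# Illustrative decision rule: apply the four criteria listed above to decide
# whether a municipality should plan for fecal sludge transfer stations.
def needs_transfer_stations(pct_homes_inaccessible: float,
                            one_haul_practical: bool,
                            trucks_banned_daytime: bool,
                            heavy_daytime_traffic: bool) -> bool:
    return (pct_homes_inaccessible > 5.0
            or not one_haul_practical
            or trucks_banned_daytime
            or heavy_daytime_traffic)

# Example: 8% of homes unreachable by vacuum truck, plant within one haul,
# no truck ban, moderate traffic -> transfer stations are warranted.
print(needs_transfer_stations(8.0, True, False, False))  # True
```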
Mobile transfer stations
Mobile transfer stations are nothing more than larger tanker trucks or trailers that are deployed along with small vacuum trucks and motorcycle or hand carts. The smaller vehicles discharge to the larger tanker, which then carries the collected sludge to the treatment plant. These work well in scheduled desludging business models.
Fixed transfer stations
Fixed transfer stations are dedicated facilities installed strategically throughout the municipality that serve as drop off locations for collected fecal sludge. They may include a receiving station with screens, a tank for holding the collected waste, trash storage containers, and wash down facilities. These may be more appropriate for FSM programs using the "call-for-service" business model.
While static transfer stations are fixed tanks, mobile transfer stations are simply tanker trucks or trailers that work alongside the smaller vacuum vehicles and do the longer-haul transfer of the waste from the community to the treatment plant. Mobile transfer stations work best for scheduled desludging programs where there are no traffic restrictions or truck bans, and a relatively large number of homes that are inaccessible to the larger vehicles.
Treatment processes
Characteristics of fecal sludge
Characteristics of fecal sludge may vary widely due to climate, toilet type, diet and other variables. Fecal sludge can be grouped by consistency as "liquid" (total solids or TS <5%), "slurry" (TS 5–15%), "semi-solid" (TS 15–25%), and "solid" (TS >25%). Quantities and qualities of fecal sludge and wastewater are very different, with the range of fecal sludge characteristics being 1–2 orders of magnitude higher than wastewater.
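As a small illustration (a hypothetical helper using the TS thresholds just quoted), such a grouping is easy to encode:

```python
# Illustrative helper: classify fecal sludge consistency from its
# total solids (TS) content in percent, using the grouping above.
def sludge_consistency(ts_percent: float) -> str:
    if ts_percent < 5:
        return "liquid"
    elif ts_percent < 15:
        return "slurry"
    elif ts_percent < 25:
        return "semi-solid"
    else:
        return "solid"

print(sludge_consistency(3.2))   # liquid
print(sludge_consistency(18.0))  # semi-solid
```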
The result of the demographic, environmental, and technical factors that influence characteristics of fecal sludge is a high level of heterogeneity that complicates characterization.
In the absence of actual data, designers often use default values, such as 2,000 mg/L for BOD and 5,000 mg/L of TSS in order to size the treatment system. However, this often results in over-design or under-design of fecal sludge treatment plants. This is because there is often no "standard range of variation" for particular properties, and findings from one study cannot necessarily be used as a base of comparison to another.
Research has shown that correlations to spatially available data can help predict quantities and qualities of fecal sludge. The relevant indicators for the prediction include income level, users, volume, emptying frequency, and truck size. Using these correlations in characteristics could provide a way to reduce analytical costs for fecal sludge analysis.
Performing a waste characterization study helps to understand local conditions and provides data that factors into treatment plant sizing. It can also help to estimate the value of the products that can be derived from the treatment process.
The main physico-chemical parameters commonly measured to characterize fecal sludge include: BOD, total suspended solids, % solids, indication of sand, COD, ammonium, total nitrogen and total phosphorus, Fats, Oil and Grease (FOG), Sludge Volume Index (SVI), pH, alkalinity.
Relatively little data exists on pathogen content in fecal sludge. One study from rural Bangladesh determined 41 helminth eggs per g of fecal sludge from pit latrines.
The characteristics of fecal sludge may be influenced by:
Methods, techniques and the skill levels of personnel conducting the desludging;
The efficiency of the different types of equipment used in desludging;
Seasonality – presence of groundwater or flood water that may infiltrate into tanks and dilute the contents;
The last time the tank was desludged (age of fecal sludge).
Conventional treatment processes
Fecal Sludge is often processed through a series of treatment steps to first separate the liquids from the solids, and then treat both the liquid and solid trains while recovering as much of the energy or nutritive value as possible. Common processes at fecal sludge treatment plants include:
Fecal sludge reception – where the truck interfaces with the treatment plant and sludge is unloaded.
Preliminary treatment – to remove garbage, sand, grit, and FOG (fats, oil and grease)
Primary treatment – simple separation of liquid and solids by physical means (dewatering and thickening), e.g. with drying beds
Liquids treatment – for example by using constructed wetlands, waste stabilization ponds, anaerobic digesters
Solids processing – using the solids resulting from fecal sludge treatment for beneficial use where possible.
Constructed wetlands are gaining attention as a low-cost treatment technology that can be constructed in many instances using local materials and labor. For sites with enough land and a ready supply of gravel and sand, this technology offers low cost, scalability, and simple operation.
Drying beds
Simple sludge drying beds can be used for dewatering and drying, as they are a cheap and simple method to dry fecal sludge (they are also widely used to dry sewage sludge). Drainage water must be captured; drying beds are sometimes covered but usually left uncovered. Drying beds are typically composed of four layers (from top to bottom): sludge, sand, fine gravel, and coarse gravel, with drainage pipes at the base.
Fecal sludges behave differently during dewatering processes than wastewater sludges. The amount of extracellular polymeric substances (EPS) can be an important predictor for fecal sludge dewatering performance. Fecal sludge from public toilets took longer to dewater than sludge from other sources, and had turbid supernatant after settling.
Grasses with adventitious roots may also be planted in drying beds, allowing for reduction of odor, collection over longer periods, production of forage, and more decomposition of the final biosolids by the time they are extracted. The roots introduce oxygen and maintain the permeability of the sludge. Earthworms may also play an important role in such beds.
Emerging technologies
Emerging technologies for fecal sludge treatment include:
Technologies that can produce a dried or carbonized solid fuel from fecal sludge include: drying, pelletizing, hydrothermal carbonization, and slow pyrolysis.
Thermal processes, which can achieve cost-effectiveness by eliminating the need for separate treatment trains; these convert fecal sludge, along with certain fractions of sewage sludge or municipal solid waste, into energy or fuel using certain sewage sludge treatment technologies.
Biodiesel can be manufactured by using fats, oils and grease as feedstocks. Research by RTI International is being conducted to use fecal sludge for biodiesel production.
Electricity can be produced by thermal processes that burn fecal and solid waste together to maintain stable combustion; the heat is used to make steam that drives generators.
Solar thermal dryers
Solar thermal dryers rely on the collection of the solar thermal energy for drying and pasteurization of fecal sludge. In these systems, the sludge is placed inside an enclosure of transparent or opaque walls, with a ventilation system for moisture evacuation. The sludge can be dried by hot air that was heated by a solar thermal collector (indirect solar dryer), by direct exposure to solar radiation (direct solar dryer), or by both modes (mixed solar dryer).
On site treatment using Mobile Treatment Units (MTUs)
The Water Sanitation and Hygiene Institute of India has developed a truck based mobile treatment unit that is able to treat fecal sludge on site. The MTUs were evaluated in a technical paper authored by Aaron Forbis-Stokes. The system was evaluated for operational and treatment performance while processing septage in the field at 108 sites in Tamil Nadu, India. This option is preferable as it does not require transport of the septage and avoids the common practice of illegal disposal of untreated septage into the environment. Six mobile septage treatment units have been built to date using readily available filters and membranes (mesh fabric, sand, granular activated carbon (GAC), microfilter, ultrafilter) and installed on the bed of a small truck. The target application is emptying of septic or sewage holding tanks and concentration of suspended solids while generating a liquid that could be safely discharged. With support from a USAID grant, the WASH Institute is working to scale the MTU solution as the preferred option over traditional vacuum trucks that discharge wastes into the environment.
Co-treatment at wastewater treatment plants
Co-treatment of septage at wastewater treatment plants may be considered where the volume of septage removed from on-site facilities is small, as will be the case in situations where most households have access to sewerage. However, the high strength of septage and fecal sludge means that relatively small volumes of both can have a large impact on the organic, suspended solids, and nitrogen loads on a wastewater treatment plant. Possible consequences include an increase in the volume of screenings and grit requiring removal; increased odour emission at headworks; increased scum and sludge accumulation rates; and increased organic loading, leading to overloading and process failure, and the potential for increased odour and foaming in aeration tanks. Because of their partly digested nature, septage and fecal sludge will usually degrade at a slower rate than municipal wastewater. Therefore, their presence is likely to have an adverse impact on the efficacy of treatment processes. The intermittent nature of fecal sludge and septage loading can also amplify the problems identified above.
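The disproportionate impact of septage on plant loading follows directly from the concentration difference between septage and municipal wastewater. The sketch below illustrates this with assumed round numbers; none of the figures are from a specific plant.

```python
# Why a small septage volume can dominate the organic load:
# all input values are assumed for illustration only.
ww_flow, ww_bod = 10000, 300    # plant influent: m3/day and mg/L BOD
fs_flow, fs_bod = 100, 6000     # added septage:  m3/day and mg/L BOD

ww_load = ww_flow * ww_bod / 1000     # 3,000 kg BOD/day
fs_load = fs_flow * fs_bod / 1000     #   600 kg BOD/day
print(f"flow increase: {100 * fs_flow / ww_flow:.0f}%")      # 1%
print(f"BOD load increase: {100 * fs_load / ww_load:.0f}%")  # 20%
```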
Despite these possible drawbacks, wastewater treatment facilities with spare capacity are a potential resource to be investigated. Even where co-treatment is not an option, existing wastewater treatment plants may provide land in strategic locations, close to areas of demand for septage management services. Separate preliminary treatment and solids-liquid separation facilities should always be provided for septage/fecal sludge. Solids-liquid separation will reduce both the overall load and the proportion of digested material in the liquid fraction and will thus lessen the possibility that it will disrupt wastewater treatment processes. Separated solids can be treated along with the sludge produced in sedimentation tanks during the wastewater treatment process.
Technology selection
A formal process should be used for making an informed technology selection for the treatment of the fecal sludge. It is usually a collaborative process conducted by stakeholders, consultants, the operator and the future owner of the facility. The process is based on a long term vision planning with stakeholders as part of citywide sanitation planning. The expected waste flows (volume), their strength, characteristics, and variability in each area need to be known. A formal and transparent process for developing appropriate plans and designs for wastewater and fecal sludge treatment plants will achieve local buy-in and ownership of technology decisions, which is critical for the long term success and sustainability of the program.
Reuse options
Resource recovery from fecal sludge can take many forms, including as a fuel, soil amendment, building material, protein, animal fodder, and water for irrigation. Some of the by-products from fecal sludge treatment processes have the potential to offset some of the costs of collection and treatment, thereby reducing tariffs for the households. However, value addition all the way to biogas, biodiesel and electricity is difficult to achieve in practice due to technological and operational challenges.
Composting
Composting is a process whereby organic matter is digested in the presence of oxygen, with heat as a byproduct. For fecal sludge, the heat deactivates the pathogens, while the digestion process breaks down the organic matter into a humus-like material that acts as a soil amendment and releases nutrients in a form that is more easily taken up by plants. Properly treated fecal sludge can be reused in agriculture.
Fecal sludge is rich in nitrogen. When fecal sludge is mixed with materials that are rich in carbon, such as shredded crop wastes, the composting process can be maximized. A carbon-to-nitrogen ratio of roughly 20:1 to 30:1 is best.
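Because the C:N ratio of a blend is the nitrogen-weighted average of the feedstock ratios, the required mix can be estimated directly. All feedstock values in the sketch below are assumed illustrative numbers, not measured properties.

```python
# Toy blend calculation: how much carbon-rich crop waste to mix with
# fecal sludge to land in the 20:1-30:1 C:N window quoted above.
def mix_cn(m_sludge, m_waste):
    n_sludge, cn_sludge = 0.03, 8     # assumed: kg N/kg dry sludge, its C:N
    n_waste, cn_waste = 0.005, 60     # assumed: kg N/kg dry crop waste, its C:N
    carbon = m_sludge * n_sludge * cn_sludge + m_waste * n_waste * cn_waste
    nitrogen = m_sludge * n_sludge + m_waste * n_waste
    return carbon / nitrogen

for m_waste in (1.0, 2.0, 3.0, 4.0):          # kg waste per kg sludge
    print(m_waste, round(mix_cn(1.0, m_waste), 1))
# With these assumed feedstocks, roughly 2-4 kg of crop waste per kg
# of sludge lands inside the 20:1-30:1 target window.
```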
Solid fuel
Resource recovery as a solid fuel has been found to have high market potential in Sub-Saharan Africa. The selection of the fuel type will depend on: (1) the intended use of the fuel (e.g. combustion technology, user/handling requirements, and amount required); and (2) the properties of the input fecal sludge (e.g. level of stabilization, sand content, and moisture content). Once suitable technology options are identified, they must subsequently be evaluated for best fit in the local context (e.g. local capacity for electricity, land, and technical (operation and maintenance) requirements).
Others
Biogas is a renewable energy that is a byproduct of the anaerobic digestion process.
Treated effluent can be used for agricultural or landscape irrigation.
Costs and fees
FSM is considered an entry point for sanitation improvement programs that are led by local governments. Such programs may include tariffs or user fees, promotions campaigns to raise the willingness to pay for the service, and local ordinances that define the rules and regulations governing FSM. In the Philippines, tariffs around US$1 per family per month are generally enough to achieve full cost recovery within a period of 3 to 7 years. Promotional campaigns are used to raise the willingness to pay for services, and local procedures and ordinances provide additional incentives for compliance.
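The 3-to-7-year recovery window quoted above is consistent with simple payback arithmetic. Every figure in the sketch below is an assumption chosen for illustration, not data from an actual program.

```python
# Back-of-envelope cost recovery for a city-run FSM program.
households = 20000               # assumed households paying the tariff
tariff_per_year = 1.0 * 12       # US$1 per family per month (from text)
capital_cost = 800000            # US$ for plant and trucks (assumed)
o_and_m_per_year = 120000        # US$ annual operating cost (assumed)

annual_surplus = households * tariff_per_year - o_and_m_per_year
print(round(capital_cost / annual_surplus, 1), "years to full cost recovery")
# -> 6.7 years, inside the 3-7 year range cited above.
```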
Synergies with other sectors
FSM is but one aspect of citywide sanitation that also includes:
Municipal solid waste management;
Drainage and greywater management;
Wastewater collection and treatment, including effluent overflows from on-site systems where soil-based dispersal systems are insufficient to assimilate the volume;
Water safety; and
Food safety.
There are important synergies between many of these services and FSM, and investigating co-management opportunities can yield benefits. Municipal solid waste (MSW) can often be co-managed with fecal waste, especially when thermal treatment technologies are used. Food waste from restaurants and markets can be co-composted with fecal waste to produce a high-value soil amendment. Fats, oil and grease (FOG) from commercial grease traps can be added to biodigesters to increase methane production, or used in conjunction with fecal sludge as a feedstock for biodiesel production. Water supply is also closely linked with FSM, as it is often the water utility that will manage programs and its customers who will pay for services through tariffs.
Examples
Dumaguete, the Philippines
USAID has supported efforts to introduce scheduled desludging services in some countries in Southeast Asia. The first of these was in Dumaguete in the Philippines. The program was run jointly by the city government and the Dumaguete City Water District, with the former operating the treatment plant and the Water District conducting the desludging. The cost of the scheme was covered by adding a tariff of 2 pesos (about 5 US cents) to the water bill for each cubic meter of water consumed (about one US dollar per family per month). This approach was possible because around 95% of residents had a connection to the Water District reticulation system. Trucks were to move from neighborhood to neighborhood, emptying pits on a regular 3–4 year cycle, an approach that requires a database of all pits and septic tanks requiring desludging. However, by 2018 Dumaguete had reverted to an 'on-call' system, the cost of which is still covered by the surcharge on the water tariff. It seems that users prefer this small regular payment to having to make large payments when tanks require desludging.
See also
Sewage Sludge
Biosolids - treated sewage sludge
Ecological Sanitation
Nightsoil - a historical term for a material similar to fecal sludge
Sewage sludge treatment
WASH - water supply, sanitation, hygiene
References
External links
Sustainable Sanitation Alliance library (documents on FSM)
Documents about fecal sludge management in the library of the Sustainable Sanitation Alliance (SuSanA)
Faecal Sludge Management (FSM) book – Systems Approach for Implementation and Operation by EAWAG (2014), in English and several other languages
Sanitation
Sludge management
Sewerage
Environmental engineering | Fecal sludge management | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 7,464 | [
"Chemical engineering",
"Excretion",
"Water pollution",
"Sewerage",
"Animal waste products",
"Civil engineering",
"Feces",
"Environmental engineering"
] |
60,355,556 | https://en.wikipedia.org/wiki/Buffer-gas%20trap | The buffer-gas trap (BGT) is a device used to accumulate positrons (the antiparticles of electrons) efficiently while minimizing positron loss due to annihilation, which occurs when an electron and positron collide and the energy is converted to gamma rays. The BGT is used for a variety of research applications, particularly those that benefit from specially tailored positron gases, plasmas and/or pulsed beams. Examples include use of the BGT to create antihydrogen and the positronium molecule.
Design and operation
The schematic design of a BGT is illustrated in Fig. 1. It consists of a specially designed (Penning or Penning–Malmberg) type electromagnetic trap. Positrons are confined in a vacuum inside an electrode structure consisting of a stack of hollow, cylindrical metal electrodes such as that shown in Fig. 2. A uniform axial magnetic field inhibits positron motion radially, and voltages imposed on end electrodes prevent axial loss. Such traps are renowned for their good confinement properties for particles (such as positrons) of a single sign of charge.
Given a trap designed for good confinement, a remaining challenge is to efficiently fill the device. In the BGT, this is accomplished using a series of inelastic collisions with a molecular gas. In a positron-molecule collision, annihilation is much less probable than energy loss due to electronic or vibrational excitation. The BGT has a stepped potential well (Fig. 1) with regions at successively lower gas pressure. Electronic excitation of molecular nitrogen (N2) in the highest-pressure region is used to trap the positrons. This process is repeated until the particles are in a sufficiently low-pressure environment and the annihilation time is acceptably long. The particles cool to the ambient gas temperature due to inelastic vibrational and rotational collisions.
Trap efficiency is typically 5–30%, but can be as much as 40%. Positronium (Ps) formation via charge exchange (e.g., e+ + N2 → N2+ + Ps) is a major loss process. Molecular nitrogen is used because it is unique in having an electronic energy level below the threshold for Ps formation; hence it is the trapping gas of choice. Similarly, carbon tetrafluoride (CF4) and sulfur hexafluoride (SF6) have very large vibrational excitation cross sections, and so these gases are used for cooling to the ambient temperature (typically ~300 K).
While most positron sources produce positrons with energies ranging from a few kiloelectronvolts (keV) to more than 500 keV, the BGT is only useful for much lower energy particles (i.e., less than or equal to tens of electronvolts). Thus, high-energy positrons from such sources are injected into the surfaces of materials (so-called positron moderators) in which they lose energy, diffuse to the surface, and are re-emitted with electronvolt energies. The moderator of choice for the BGT is solid neon (~1% conversion efficiency), frozen on a cold metal surface.
The lifetime in the final trapping stage is limited by annihilation and is typically less than or equal to 100 seconds, which limits the total number of trapped positrons. If larger particle numbers are desired, the positrons are transferred to an ultra-high vacuum (UHV) Penning–Malmberg trap in a several Tesla magnetic field. Annihilation is negligible in UHV. Positron cooling (necessary to combat heating due to extrinsic effects) is now due to the emission of cyclotron radiation in the large magnetic field. This accumulation and transfer process can then be repeated to build up larger collections of antimatter.
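Because the final-stage lifetime is annihilation-limited, the trapped number obeys a simple fill-versus-loss balance, dN/dt = R − N/τ, which saturates at N = Rτ. The Python sketch below illustrates this; the source strength and efficiencies are assumed round numbers consistent with the ranges quoted in this article.

```python
import math

# Fill-versus-loss balance for the final trapping stage:
# dN/dt = R - N/tau has the solution N(t) = R*tau*(1 - exp(-t/tau)).
source_rate = 1e9        # fast positrons/s from the source (assumed)
moderator_eff = 0.01     # ~1% solid-neon moderation efficiency (from text)
trap_eff = 0.2           # trapping efficiency within the 5-30% range

R = source_rate * moderator_eff * trap_eff   # trapped positrons per second
tau = 100.0                                  # s, annihilation-limited lifetime

def n_trapped(t):
    return R * tau * (1.0 - math.exp(-t / tau))

print(f"saturation: {R * tau:.1e} positrons")
for t in (10, 100, 300):
    print(t, f"{n_trapped(t):.2e}")
```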
History and uses
The BGT was invented in the 1980s, originally intended to study positron transport in tokamak (fusion) plasmas. Subsequently, the technique was refined and is now used in laboratories worldwide for a variety of applications. They include study of positron interactions with atoms and molecules, materials, and material surfaces; the creation of antihydrogen, the positronium molecule (i.e., Ps2, e+e−e+e−), and novel positron and positronium beams. BGTs are also expected to play similarly important roles in efforts to create and study positronium atom Bose–Einstein condensates (BEC) and a classical electron-positron "pair" plasmas.
See also
Penning trap
Non-neutral plasmas
Positron annihilation
References
Particle traps | Buffer-gas trap | [
"Chemistry"
] | 989 | [
"Particle traps",
"Molecular physics"
] |
60,357,139 | https://en.wikipedia.org/wiki/CEE%207%20standard%20AC%20plugs%20and%20sockets | CEE 7 is a standard for alternating-current plugs and sockets. First published in 1951 by the former International Commission on the Rules for the Approval of Electrical Equipment (IECEE), it unified standards produced by several continental European countries. The 2nd edition was published in 1963 and last updated in March 1983. It remains available from the IEC, but they state that "this standard shall not be used alone, it is to be used in addition to IEC 60884-1".
CEE 7 standards
The International Commission on the Rules for the Approval of Electrical Equipment (IECEE) was a standards body which published Specification for plugs and socket-outlets for domestic and similar purposes as CEE Publication 7, known simply as CEE 7. It was originally published in 1951, the 2nd edition was published in May 1963 and was last updated by Modification 4 in March 1983. CEE 7 consists of general specifications, plus a number of standard sheets for specific connectors.
A number of standards based on two round pins with centres spaced at 19 mm are in use in continental Europe and elsewhere, most of which are listed in IEC/TR 60083 "Plugs and socket-outlets for domestic and similar general use standardized in member countries of IEC". There is no European Union regulation of domestic mains plugs and sockets, and the Low Voltage Directive specifically excludes domestic plugs and sockets. EU countries each have their own regulations and national standards; for example, some require child-resistant shutters, while others do not. CE marking is neither applicable nor permitted on plugs and sockets.
CEE 7/1 unearthed socket and CEE 7/2 unearthed plug
CEE 7/1 unearthed sockets are designed to accept CEE 7/2 round plugs without notches in the body and having 4.8 mm pins.
Because they have no earth connections they have been or are being phased out in most countries. The regulations of countries using the CEE 7/3 and CEE 7/5 socket standards vary in whether CEE 7/1 sockets are still permitted in environments where the need for earthing is less critical. Sweden, for example, prohibited them from new installations in 1994. In Germany unearthed sockets are rare, whereas in the Netherlands and Sweden it is still common to find them in "dry areas" such as in bedrooms or living rooms. Some countries prohibit use of unearthed and earthed sockets in the same room, in the "insulated room" concept, so that people cannot touch an earthed object and one that has become live, at the same time.
The depth of the sockets varies between countries and age. Older sockets are so shallow that it is possible to touch the pins of a plug when the plug is inserted only deep enough to get electrical power on the pins, while newer sockets are deep enough to protect from this kind of accident. CEE 7/1 sockets accept CEE 7/4, CEE 7/6 and CEE 7/7 plugs without providing an earth connection. The earthed CEE 7/3 and CEE 7/5 sockets were specifically designed not to allow insertion of CEE 7/2 unearthed round plugs fitted to older appliances which had to be earthed via other means.
CEE 7/3 socket and CEE 7/4 plug (German "Schuko"; Type F)
The CEE 7/3 socket and CEE 7/4 plug are commonly called Schuko. The socket (which is often, in error, also referred to as CEE 7/4) has a deep, predominantly circular recess with two symmetrical round apertures and two earthing clips on the sides of the socket, positioned to ensure that the earth is always engaged before live pin contact is made. The plug pins are 4.8 by 19 mm. The Schuko connection system is symmetrical and unpolarised in its design, allowing line and neutral to be reversed. The socket also accepts Europlugs and CEE 7/17 plugs. It is rated at 16 A. The current German standards are DIN 49441:1972-06 "Two-pole plugs with earthing-contact 10 A 250 V≅ and 10 A 250 V–, 16 A 250 V~" (which also includes the CEE 7/7 plug) and DIN 49440-1:2006-01 "Two-pole socket-outlets with earthing contact, 16 A 250 V a.c."
In addition to Germany, it is used in Albania, Austria, Belarus, Bosnia and Herzegovina, Bulgaria, Chile, Croatia, Denmark, Estonia, Finland, Georgia, Greece, Hungary, Iceland, Indonesia, Iran, Italy (standard CEI 23-50), Kazakhstan, Latvia, Lithuania, Luxembourg, North Macedonia, Moldova, the Netherlands, Norway, Pakistan, Peru, Portugal, Romania, Russia, Serbia, Slovenia, South Korea, Spain, Sweden, Turkey, Ukraine, and Uruguay.
It was widely used in Ireland until 1964, a legacy of Ireland's early electricity grid which was largely built based on design work on the Shannon hydroelectric scheme by Siemens-Schuckert. The British BS1363 (localised as Irish Standard I.S.401) was adopted as the new standard plug to ease import of electrical appliances from the UK. CEE 7/1 and CEE 7/4 are still occasionally found in less used areas of some older homes, particularly in outbuildings or hot presses.
Schuko is an abbreviation for the German word Schutzkontakt, which means "Protective contact" - in this case "protective" refers to the earth.
Some countries, including South Korea, Portugal, Finland, Denmark, Norway and Sweden, require child-proof socket shutters; the German DIN 49440-1:2006-01 standard does not have this requirement.
CEE 7/5 socket and CEE 7/6 plug (French; Type E)
The CEE 7/5 socket and CEE 7/6 plug are defined in French standard NF C 61-314 "Plugs and socket-outlets for household and similar purposes" (which also includes the CEE 7/7, 7/16 and 7/17 plugs). The socket has a deep, predominantly circular recess with two symmetrical round apertures and a round earth pin projecting from the socket such that its tip extends beyond the live contacts, to ensure that the earth is always engaged before live pin contact is made. The earth pin is centred between the apertures, offset by 10 mm. The plug (which is often, in error, also referred to as CEE 7/5) has two round pins measuring 4.8 by 19 mm, spaced 19 mm apart, and an aperture for the socket's projecting earth pin. This standard is also used in Belgium, Denmark, Poland, Czechia, Slovakia and some other countries.
Although the plug is polarised, CEE 7 does not define the placement of the line and neutral, and there is no universally observed standard. The Czech and Slovak standards stated that the earth pin should be at the top unless there was a special reason otherwise, and that the line wire had to be on the left side when facing the socket; those requirements have since been abandoned. The French convention, which changed circa 2002 from having no particular rule, is that if the earth pin is at the top, the line hole in the socket is on the right when looking at the socket. However, the socket may not necessarily be installed with the earth pin at the top. Packaging of sockets in France is normally marked with the correct connection of the cables. Polarised pre-fitted plugs on appliances are therefore connected with the brown line wire to the right pin and the blue neutral wire to the left, the earth being connected to the contact at the "top" of the plug.
CEE 7/2 and 7/4 plugs are not compatible with the CEE 7/5 socket because of the round earthing pin permanently mounted in the socket.
CEE 7/7 plug (compatible with E and F)
To bridge the differences between German and French standards, the CEE 7/7 plug was developed. It is polarised to prevent the line and neutral connections from being reversed when used with a French CEE 7/5 socket, but allows polarity reversal when inserted into a German CEE 7/3 socket. The plug is rated at 16 A.
It has earthing clips on both sides to connect with the CEE 7/3 socket and a female contact to accept the earthing pin of the CEE 7/5 socket. Currently, appliances in many countries are sold with non-rewireable CEE 7/7 plugs attached, enabling use in all countries whose socket standards are based on either CEE 7/3 or CEE 7/5.
This plug can be inserted into a Danish Type K socket, but earthing is not enabled.
CEE 7/16 plugs
The CEE 7/16 standard sheet appears in Supplement 2 (June 1962) to the 1951 edition of CEE 7. The CEE 7/16 unearthed plug is used for low power Class II applications; it has two round 4 by 19 mm (0.157 by 0.748 in) pins and is rated at 2.5 A. There are two variants.
CEE 7/16 Alternative I
Alternative I is a round plug with cutouts to make it compatible with CEE 7/3 and CEE 7/5 sockets. (The similar-appearing CEE 7/17 has larger pins and a higher current rating.) This alternative is seldom used.
CEE 7/16 Alternative II "Europlug" (Type C)
Alternative II, popularly known as the Europlug, is a flat plug defined by Cenelec standard EN 50075 and national equivalents. The Europlug is not rewirable and must be supplied with a flexible cord. It is rated for voltages up to 250 V and currents up to 2.5 A. It can be inserted in either direction, so line and neutral are connected arbitrarily.
There is no socket defined by EN 50075; nor a socket specified in CEE 7 to accept only the Europlug. Instead, the Europlug was designed to be compatible with a range of sockets in common use in Europe. These sockets, including the CEE 7/1, CEE 7/3 (German/"Schuko"), CEE 7/5 (French), and most Israeli, Swiss, Danish and Italian sockets, were designed to accept pins of various diameters, mainly 4.8 mm but also 4.0 mm and 4.5 mm, and are usually fed by final circuits with either 10 A or 16 A overcurrent protection devices. To improve contact with socket parts the Europlug has slightly flexible pins which converge toward their free ends.
UK shaver sockets designed to accept BS 4573 shaver plugs also accept Europlugs for applications requiring less than 200 mA. Other than such personal hygiene applications, UK consumer protection legislation does not permit Europlugs.
The Europlug is also used in the Middle East, Africa, South America, and Asia.
CEE 7/17 unearthed plug
This is a round plug which conforms to a shape compatible with CEE 7/1, CEE 7/3, and CEE 7/5 sockets. It has two round pins measuring 4.8 by 19 mm. It may be rated at either 10 A or 16 A, and may be used for unearthed Class II appliances (and in South Korea for all domestic non-earthed appliances). It is also defined as the Class II plug in Italian standard CEI 23-50. It can be inserted into Israeli SI 32 sockets with some difficulty. The Soviet GOST 7396 standard includes both the CEE 7/17 and the CEE 7/16 variant II plug.
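The plug-to-socket compatibility relationships described in the sections above can be summarised as a small lookup table. The sketch below (Python) is distilled from this article's prose rather than from the standard itself, so it should be read as illustrative only; "yes-unearthed" means the plug inserts but no earth connection is made, while the two-pin Class II plugs (CEE 7/16 and 7/17) have no earth by design.

```python
# Mechanical compatibility of CEE 7 plugs with the three socket
# families described above, as stated in the text of this article.
compat = {
    "CEE 7/2":  {"CEE 7/1": "yes",           "CEE 7/3": "no",  "CEE 7/5": "no"},
    "CEE 7/4":  {"CEE 7/1": "yes-unearthed", "CEE 7/3": "yes", "CEE 7/5": "no"},
    "CEE 7/6":  {"CEE 7/1": "yes-unearthed", "CEE 7/3": "no",  "CEE 7/5": "yes"},
    "CEE 7/7":  {"CEE 7/1": "yes-unearthed", "CEE 7/3": "yes", "CEE 7/5": "yes"},
    "CEE 7/16": {"CEE 7/1": "yes",           "CEE 7/3": "yes", "CEE 7/5": "yes"},
    "CEE 7/17": {"CEE 7/1": "yes",           "CEE 7/3": "yes", "CEE 7/5": "yes"},
}
print(compat["CEE 7/7"]["CEE 7/5"])   # -> 'yes'
```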
See also
Mains electricity by country
AC power plugs and sockets: British and related types
NEMA connector
References
Electrical standards | CEE 7 standard AC plugs and sockets | [
"Physics"
] | 2,436 | [
"Physical systems",
"Electrical standards",
"Electrical systems"
] |
60,360,425 | https://en.wikipedia.org/wiki/Hartle%E2%80%93Thorne%20metric | The Hartle–Thorne metric is an approximate solution of the vacuum Einstein field equations of general relativity that describes the exterior of a slowly and rigidly rotating, stationary and axially symmetric body.
The metric was found by James Hartle and Kip Thorne in the 1960s to study the spacetime outside neutron stars, white dwarfs and supermassive stars. It can be shown that it is an approximation to the Kerr metric (which describes a rotating black hole) when the quadrupole moment is set as $q = J^2/M$, which is the correct value for a black hole but not, in general, for other astrophysical objects.
Metric
Up to second order in the angular momentum $J$, mass $M$ and quadrupole moment $Q$, the metric in spherical coordinates is given by

$$
\begin{aligned}
ds^2 ={}& -\left(1-\frac{2M}{r}+\frac{2Q}{r^3}\,P_2(\cos\theta)+\frac{2MQ}{r^4}\,P_2(\cos\theta)\right)dt^2 \\
&+\left(1+\frac{2M}{r}+\frac{4M^2}{r^2}-\frac{2Q}{r^3}\,P_2(\cos\theta)\right)dr^2 \\
&+r^2\left(1-\frac{2Q}{r^3}\,P_2(\cos\theta)\right)\left(d\theta^2+\sin^2\theta\,d\phi^2\right)-\frac{4J}{r}\sin^2\theta\,dt\,d\phi
\end{aligned}
$$

where $P_2(\cos\theta)=\tfrac{1}{2}\left(3\cos^2\theta-1\right)$ is the Legendre polynomial of degree 2, in geometric units with $G=c=1$.
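As a worked numerical check, the time-time component of the expansion above can be evaluated directly. The sketch below (Python, geometric units $G=c=1$, with arbitrary illustrative parameter values) recovers the Schwarzschild value $-(1-2M/r)$ far from the body, where the quadrupole corrections fade.

```python
import math

# g_tt of the second-order Hartle-Thorne expansion written above.
def p2(x):
    """Legendre polynomial of degree 2."""
    return 0.5 * (3.0 * x * x - 1.0)

def g_tt(r, theta, M, Q):
    c = math.cos(theta)
    return -(1.0 - 2.0 * M / r
             + 2.0 * Q * p2(c) / r**3
             + 2.0 * M * Q * p2(c) / r**4)

# Illustrative values: at large r the result approaches -(1 - 2M/r).
print(g_tt(r=100.0, theta=math.pi / 2, M=1.0, Q=0.1))
print(-(1.0 - 2.0 / 100.0))
```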
See also
Kerr metric
References
General relativity
Metric tensors | Hartle–Thorne metric | [
"Physics",
"Engineering"
] | 160 | [
"Tensors",
"General relativity",
"Metric tensors",
"Relativity stubs",
"Theory of relativity"
] |
60,361,376 | https://en.wikipedia.org/wiki/Microbial%20DNA%20barcoding | Microbial DNA barcoding is the use of DNA metabarcoding to characterize a mixture of microorganisms. DNA metabarcoding is a method of DNA barcoding that uses universal genetic markers to identify DNA of a mixture of organisms.
History
Using metabarcoding to assess microbial communities has a long history. In 1972, Carl Woese, Mitchell Sogin and Stephen Sogin first tried to detect several families within bacteria using the 5S rRNA gene. Only a few years later, a new tree of life with three domains was proposed, again by Woese and colleagues, who were the first to use the small subunit of the ribosomal RNA (SSU rRNA) gene to distinguish between bacteria, archaea and eukaryotes. Out of this approach, the SSU rRNA gene became the most frequently used genetic marker for both prokaryotes (16S rRNA) and eukaryotes (18S rRNA). The tedious process of cloning these DNA fragments for sequencing was accelerated by the steady improvement of sequencing technologies. With the development of high-throughput sequencing (HTS) in the early 2000s and the ability to handle this massive data volume using modern bioinformatics and clustering algorithms, investigating microbial life became much easier.
Genetic markers
Genetic diversity varies from species to species. It is therefore possible to identify distinct species by recovering a short DNA sequence from a standard part of the genome. This short sequence is defined as the barcode sequence. A part of the genome suitable as a barcode should show high variation between two different species but little difference between two individuals of the same species, making it easier to differentiate individual species. For both bacteria and archaea, the 16S rRNA/rDNA gene is used. It is a common housekeeping gene in all prokaryotic organisms and therefore serves as a standard barcode to assess prokaryotic diversity. For protists, the corresponding 18S rRNA/rDNA gene is used. To distinguish different species of fungi, the ITS (Internal Transcribed Spacer) region of the ribosomal cistron is used.
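The marker criteria just described (high interspecific but low intraspecific variation) can be made concrete with a toy "barcode gap" check. All sequences in the sketch below are invented for illustration.

```python
# A barcode works when between-species distances clearly exceed
# within-species distances (the "barcode gap"). Toy example only.
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

species_a = ["ACGTACGTAC", "ACGTACGTAT"]   # two individuals, species A
species_b = ["ACGTTCGAAC"]                 # one individual, species B

within = p_distance(species_a[0], species_a[1])                 # 0.1
between = min(p_distance(s, species_b[0]) for s in species_a)   # 0.2
print(within, between)   # a usable barcode needs between >> within
```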
Advantages
The existing diversity of the microbial world is not yet completely unraveled, although we know that it is mainly composed of bacteria, fungi and unicellular eukaryotes. Taxonomic identification of microbial eukaryotes requires considerable expertise and is often difficult due to the small sizes of the organisms, fragmented individuals, hidden diversity and cryptic species. Furthermore, prokaryotes simply cannot be taxonomically assigned using traditional methods like microscopy, because they are too small and morphologically indistinguishable. Via the use of DNA metabarcoding, it is therefore possible to identify organisms without taxonomic expertise by matching short high-throughput sequencing (HTS)-derived gene fragments to a reference sequence database, e.g. NCBI. These qualities make DNA barcoding a cost-effective, reliable and less time-consuming method, compared to traditional ones, for meeting the increasing need for large-scale environmental assessments.
Applications
Many studies have followed the first application by Woese et al., and they now cover a variety of applications. Metabarcoding is used not only in biological and ecological research. In medicine and human biology, bacterial barcodes are used, e.g., to investigate the microbiome and bacterial colonization of the human gut in normal and obese twins, or in comparison studies of newborn, child and adult gut bacteria composition. Additionally, barcoding plays a major role in the biomonitoring of, e.g., rivers and streams and in grassland restoration. Conservation parasitology, environmental parasitology and paleoparasitology rely on barcoding as a useful tool in disease investigation and management, too.
Cyanobacteria
Cyanobacteria are a group of photosynthetic prokaryotes. As in other prokaryotes, the taxonomy of cyanobacteria based on DNA sequences mostly relies on similarity within the 16S ribosomal gene. Thus, the most common barcode used for identification of cyanobacteria is the 16S rDNA marker. While it is difficult to define species within prokaryotic organisms, the 16S marker can be used to determine individual operational taxonomic units (OTUs). In some cases, these OTUs can also be linked to traditionally defined species and can therefore be considered a reliable representation of evolutionary relationships.
However, when analyzing the taxonomic structure or biodiversity of a whole cyanobacterial community (see DNA metabarcoding), it is more informative to use markers specific for cyanobacteria. Universal 16S bacterial primers have been used successfully to isolate cyanobacterial rDNA from environmental samples, but they also recover many bacterial sequences. Cyanobacteria-specific or phyto-specific 16S markers are commonly used to focus on cyanobacteria only. A few sets of such primers have been tested for barcoding or metabarcoding of environmental samples and gave good results, screening out the majority of non-photosynthetic or non-cyanobacterial organisms.
The number of sequenced cyanobacterial genomes available in databases is increasing. Besides the 16S marker, phylogenetic studies could therefore also include more variable sequences, such as sequences of protein-coding genes (gyrB, rpoC, rpoD, rbcL, hetR, psbA, rnpB, nifH, nifD), the internal transcribed spacer of ribosomal RNA genes (16S-23S rRNA-ITS) or the phycocyanin intergenic spacer (PC-IGS). However, nifD and nifH can only be used for identification of nitrogen-fixing cyanobacterial strains.
DNA barcoding of cyanobacteria can be applied in various ecological, evolutionary and taxonomic studies. Some examples include assessment of cyanobacterial diversity and community structure, identification of harmful cyanobacteria in ecologically and economically important waterbodies, and assessment of cyanobacterial symbionts in marine invertebrates. It has the potential to serve as part of routine monitoring programs for the occurrence of cyanobacteria, as well as for early detection of potentially toxic species in waterbodies. This might help us detect harmful species before they start to form blooms and thus improve our water management strategies. Species identification based on environmental DNA could be particularly useful for cyanobacteria, as traditional identification using microscopy is challenging: the morphological characteristics that are the basis for species delimitation vary under different growth conditions, and identification under the microscope is time-consuming and therefore relatively costly. Molecular methods can detect much lower concentrations of cyanobacterial cells in a sample than traditional identification methods.
Reference databases
A reference database is a collection of DNA sequences, each assigned to either a species or a function. It can be used to link molecularly obtained sequences of an organism to pre-existing taxonomy. General databases like the NCBI platform include all kinds of sequences, either whole genomes or specific marker genes of all organisms. There are also different platforms where only sequences from a distinct group of organisms are stored, e.g. the UNITE database exclusively for fungal sequences or the PR2 database solely for protist ribosomal sequences. Some databases are curated, which allows taxonomic assignment with higher accuracy than using uncurated databases as a reference.
See also
Consortium for the Barcode of Life
Algae DNA barcoding
DNA Barcoding
DNA barcoding in diet assessment
Fish DNA barcoding
References
Authentication methods
Bioinformatics
Biometrics
Molecular genetics
Taxonomy (biology)
DNA barcoding | Microbial DNA barcoding | [
"Chemistry",
"Engineering",
"Biology"
] | 1,611 | [
"Genetics techniques",
"Biological engineering",
"Taxonomy (biology)",
"DNA barcoding",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Phylogenetics"
] |
60,361,757 | https://en.wikipedia.org/wiki/Fish%20DNA%20barcoding | DNA barcoding methods for fish are used to identify groups of fish based on DNA sequences within selected regions of a genome. These methods can be used to study fish, as genetic material, in the form of environmental DNA (eDNA) or cells, is freely diffused in the water. This allows researchers to identify which species are present in a body of water by collecting a water sample, extracting DNA from the sample and isolating DNA sequences that are specific for the species of interest. Barcoding methods can also be used for biomonitoring and food safety validation, animal diet assessment, assessment of food webs and species distribution, and for detection of invasive species.
In fish research, barcoding can be used as an alternative to traditional sampling methods. Barcoding methods can often provide information without damage to the studied animal.
Aquatic environments have unique properties that affect how genetic material from organisms is distributed. DNA material diffuses rapidly in aquatic environments, which makes it possible to detect organisms from a large area when sampling a specific spot. Due to rapid degradation of DNA in aquatic environments, detected species represent contemporary presence, without confounding signals from the past.
DNA-based identification is fast, reliable and accurate in its characterization across life stages and species. Reference libraries are used to connect barcode sequences to single species and can be used to identify the species present in DNA samples. Libraries of reference sequences are also useful in identifying species in cases of morphological ambiguity, such as with larval stages.
eDNA samples and barcoding methods are used in water management, as species composition can be used as an indicator of ecosystem health. Barcoding and metabarcoding methods are particularly useful in studying endangered or elusive fish, as species can be detected without catching or harming the animals.
Applications
Ecological monitoring
Biomonitoring of aquatic ecosystems is required by national and international legislation (e.g. the Water Framework Directive and the Marine Strategy Framework Directive). Traditional methods are time-consuming and include destructive practices that can harm individuals of rare or protected species. DNA barcoding is a relatively cost-effective and quick method for identifying fish species in aquatic environments. Presence or absence of key fish species can be established using eDNA from water samples, and the spatio-temporal distribution of fish species (e.g. timing and location of spawning) can be studied. This can help discover, e.g., impacts of physical barriers such as dam construction and other human disturbances. DNA tools are also used in dietary studies of fish and the construction of aquatic food webs. Metabarcoding of fish gut contents or feces identifies recently consumed prey species. However, secondary predation must be taken into consideration.
Invasive species
Early detection is vital for the control and removal of non-indigenous, ecologically harmful species (e.g. lionfish (Pterois sp.) in the Atlantic and Caribbean). Metabarcoding of eDNA can be used to detect cryptic or invasive species in aquatic ecosystems.
Fisheries management
Barcoding and metabarcoding approaches yield rigorous and extensive data on recruitment, ecology and geographic ranges of fisheries resources. The methods also improve knowledge of nursery areas and spawning grounds, with benefits for fisheries management. Traditional methods for fishery assessment can be highly destructive, such as gillnet sampling or trawling. Molecular methods offer an alternative for non-invasive sampling. For example, barcoding and metabarcoding can help identify fish eggs to species to ensure reliable data for stock assessment, as they have proven more reliable than identification via phenotypic characters. Barcoding and metabarcoding are also powerful tools in the monitoring of fisheries quotas and by-catch.
eDNA can detect and quantify the abundance of some anadromous species as well as their temporal distribution. This approach can be used to develop appropriate management measures, of particular importance for commercial fisheries.
Food safety
Globalisation of food supply chains has led to an increased uncertainty of the origin and safety of fish-based products. Barcoding can be used to validate the labelling of products and to trace their origin. “Fish fraud” has been discovered across the globe. A recent study from supermarkets in the state of New York found that 26.92% of seafood purchases with an identifiable barcode were mislabelled.
Barcoding can also trace fish species, as there can be human health hazards related to the consumption of fish. Further, biotoxins can occasionally be concentrated as they move up the food chain. One example relates to coral reef species, where predatory fish such as barracuda have been found to cause ciguatera fish poisoning. Such new associations of fish poisoning can be detected by the use of fish barcoding.
Protection of endangered species
Barcoding can be used in the conservation of endangered species through the prevention of illegal trading of CITES-listed species. There is a large black market for fish-based products and also in the aquarium and pet trades. To protect sharks from overexploitation, illegal use can be detected by barcoding shark fin soup and traditional medicines.
Methodology
Sampling in aquatic environments
Aquatic environments have special attributes that need to be considered when sampling for fish eDNA metabarcoding. Seawater sampling is of particular interest for assessment of health of marine ecosystems and their biodiversity. Although the dispersion of eDNA in seawater is large and salinity negatively influences DNA preservation, a water sample can contain high amounts of eDNA from fish up to one week after sampling. Free molecules, intestinal lining and skin cell debris are the main sources of fish eDNA.
In comparison to marine environments, ponds have biological and chemical properties that can alter eDNA detection. The small size of ponds compared to other water bodies makes them more sensitive to environmental conditions such as exposure to UV light and changes in temperature and pH. These factors can affect the amount of eDNA. Moreover, trees and dense vegetation around ponds represent a barrier that prevents water aeration by wind. Such barriers can also promote the accumulation of chemical substances that damage eDNA integrity. Heterogeneous distribution of eDNA in ponds may affect detection of fishes. Availability of fish eDNA is also dependent on life stage, activity, seasonality and behavior. The largest amounts of eDNA are obtained from spawning, larval stages and breeding activity.
Target regions
Primer design is crucial for metabarcoding success. Some studies on primer development have described cytochrome b and 16S as suitable target regions for fish metabarcoding. Evans et al. (2016) described that the Ac16S and L2513/H2714 primer sets are able to detect fish species accurately in different mesocosms. Another study, performed by Valentini et al. (2016), showed that the L1848/H1913 primer pair, which amplifies a region of the 12S rRNA locus, was able to reach high taxonomic coverage and discrimination even with a short target fragment. This research also showed that in 89% of sampling sites the metabarcoding approach performed similarly to or better than traditional methods (e.g. electrofishing and netting methods). Hänfling et al. (2016) performed metabarcoding experiments focused on lake fish communities using the 12S_F1/12S_R1 and CytB_L14841/CytB_H15149 primer pairs, whose targets are located in the mitochondrial 12S and cytochrome b regions respectively. The results demonstrate that detection of fish species was higher when using 12S primers than CytB. This was due to the persistence of shorter 12S fragments (~100 bp) in comparison to the larger CytB amplicon (~460 bp). In general, these studies show that primer design and selection must be made according to the objectives and nature of the experiment.
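The amplicon-length argument above can be illustrated with a minimal in-silico PCR check: locate a forward and a reverse primer site on a reference sequence and report the span between them. The primer and template sequences below are made up for the example; only the ~100 bp (12S) versus ~460 bp (CytB) contrast comes from the studies cited above.

```python
# Minimal in-silico PCR: find primer binding sites by exact string
# match and report the amplicon length (None if the pair cannot bind).
def amplicon_length(template, fwd, rev_site):
    f = template.find(fwd)
    r = template.find(rev_site, f + len(fwd)) if f != -1 else -1
    if f == -1 or r == -1:
        return None
    return (r + len(rev_site)) - f

# Hypothetical 12S-like template: 100 bp spanned by the primer pair.
template = "GG" + "ACGTTAGC" + "A" * 84 + "TTGCAAGT" + "CC"
print(amplicon_length(template, "ACGTTAGC", "TTGCAAGT"))   # -> 100
```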
Fish reference databases
There are a number of open access databases available to researchers worldwide. The proper identification of fish specimens with DNA barcoding methods relies heavily on the quality and species coverage of available sequence databases. A fish reference database is an electronic database that typically contains DNA barcodes, images, and geospatial coordinates of examined fish specimens. The database can also contain linkages to voucher specimens, information on species distributions, nomenclature, authoritative taxonomic information, collateral natural history information and literature citations. Reference databases may be curated, meaning that the entries are subjected to expert assessment before being included, or uncurated, in which case they may include a large number of reference sequences but with less reliable identification of species.
FISH-BOL
Launched in 2005, The Fish Barcode of Life Initiative (FISH-BOL) www.fishbol.org is an international research collaboration that is assembling a standardized reference DNA sequence library for all fish species. It is a concerted global research project with the goal to collect and assemble standardized DNA barcode sequences and associated voucher provenance data in a curated reference sequence library to aid the molecular identification of all fish species.
If researchers wish to contribute to the FISH-BOL reference library, clear guidelines are provided for specimen collection, imaging, preservation, and archival, as well as meta-data collection and submission protocols. The Fish-BOL database functions as a portal to the Barcode of Life Data Systems (BOLD).
French Polynesia Fish Barcoding Base
The French Polynesia Fish Barcoding Database contains all the specimens captured during several field trips organised or participated in by CRIOBE (Centre for Island Research and Environmental Observatory) since 2006 in the archipelagos of French Polynesia. For each classified specimen, the following information can be available: scientific name, picture, date, GPS coordinates, depth and method of capture, size, and cytochrome c oxidase subunit 1 (CO1) DNA sequence. The database can be searched using a name (genus or species) or a part of the CO1 DNA sequence.
Aquagene
A collaborative product developed by several German institutions, Aquagene provides free access to curated genetic information of marine fish species. The database allows species identification by DNA sequence comparisons. All species are characterized by multiple gene sequences, presently including the standard CO1 barcoding gene together with CYTB, MYH6 and (coming shortly) RHOD, facilitating unambiguous species determination even for closely related species or those with high intraspecific diversity. The genetic data is complemented online with additional data of the sampled specimen, such as digital images, voucher number and geographic origin.
Additional resources
Other reference databases that are more general, but may also be useful for barcoding fish are the Barcode of Life Datasystem and Genbank.
Advantages
Barcoding/metabarcoding provides quick and usually reliable species identification, meaning that morphological identification, i.e. taxonomic expertise, is not needed. Metabarcoding also makes it possible to identify species when organisms are degraded or only part of an organism is available. It is a powerful tool for detection of rare and/or invasive species, which can be detected despite low abundance. Traditional methods to assess fish biodiversity, abundance and density include the use of gear such as nets, electrofishing equipment, trawls, cages and fyke-nets, which show reliable presence results only for abundant species. In contrast, rare native species, as well as newly established alien species, are less likely to be detected via traditional methods, leading to incorrect absence/presence assumptions. Barcoding/metabarcoding is also in some cases a non-invasive sampling method, as it provides the opportunity to analyze DNA from eDNA or by sampling living organisms.
For fish parasites, metabarcoding allows for detection of cryptic or microscopic parasites from aquatic environments, which is difficult with more direct methods (e.g. identifying species from samples with microscopy). Some parasites exhibit cryptic variation, and metabarcoding can be a helpful method in revealing this.
The application of eDNA metabarcoding is cost-effective in large surveys or when many samples are required. eDNA can reduce the costs of fishing, transport of samples and time invested by taxonomists, and in most cases requires only small amounts of DNA from target species to achieve reliable detection. Constantly decreasing prices for barcoding/metabarcoding due to technical development are another advantage. The eDNA approach is also suitable for the monitoring of inaccessible environments.
Challenges
The results obtained from metabarcoding are limited to, or biased towards, frequency of occurrence. It is also problematic that reference barcodes exist for far from all species.
Even though metabarcoding may overcome some practical limitations of conventional sampling methods, there is still no consensus regarding experimental design and the bioinformatic criteria for application of eDNA metabarcoding. The lack of criteria is due to the heterogeneity of experiments and studies conducted so far, which dealt with different fish diversities and abundances, types of aquatic ecosystems, numbers of markers and marker specificities.
Another significant challenge for the method is how to quantify fish abundance from molecular data. Although there are some cases in which quantification has been possible there appears to be no consensus on how, or to what extent, molecular data can meet this aim for fish monitoring.
See also
DNA barcoding
DNA barcoding in diet assessment
Algae DNA barcoding
Microbial DNA barcoding
Aquatic macroinvertebrate DNA barcoding
References
Authentication methods
Bioinformatics
Biometrics
Molecular genetics
DNA barcoding | Fish DNA barcoding | [
"Chemistry",
"Engineering",
"Biology"
] | 2,739 | [
"Genetics techniques",
"Biological engineering",
"DNA barcoding",
"Bioinformatics",
"Molecular genetics",
"Molecular biology",
"Phylogenetics"
] |
48,781,047 | https://en.wikipedia.org/wiki/Glossary%20of%20algebraic%20topology | This is a glossary of properties and concepts in algebraic topology in mathematics.
See also: glossary of topology, list of algebraic topology topics, glossary of category theory, glossary of differential geometry and topology, Timeline of manifolds.
Convention: Throughout the article, I denotes the unit interval, S^n the n-sphere and D^n the n-disk. Also, throughout the article, spaces are assumed to be reasonable; this can be taken to mean, for example, that a space is a CW complex or a compactly generated weakly Hausdorff space. Similarly, no attempt is made to be definitive about the definition of a spectrum. A simplicial set is not thought of as a space; i.e., we generally distinguish between simplicial sets and their geometric realizations.
Inclusion criterion: As there is no glossary of homological algebra in Wikipedia right now, this glossary also includes a few concepts in homological algebra (e.g., chain homotopy); some concepts in geometric topology and differential topology are also fair game. On the other hand, the items that appear in glossary of topology are generally omitted. Abstract homotopy theory and motivic homotopy theory are also outside the scope. Glossary of category theory covers (or will cover) concepts in theory of model categories. See the glossary of symplectic geometry for the topics in symplectic topology such as quantization.
!$@
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Notes
References
Lectures delivered by Michael Hopkins and Notes by Akhil Mathew, Harvard.
Further reading
José I. Burgos Gil, The Regulators of Beilinson and Borel
Lectures on groups of homotopy spheres by JP Levine
B. I. Dundas, M. Levine, P. A. Østvær, O. Röndigs, and V. Voevodsky. Motivic homotopy theory. Universitext. Springer-Verlag, Berlin, 2007. Lectures from the Summer School held in Nordfjordeid, August 2002.
External links
Algebraic Topology: A guide to literature
Algebraic topology
Algebraic topology
Wikipedia glossaries using description lists | Glossary of algebraic topology | [
"Mathematics"
] | 475 | [
"Fields of abstract algebra",
"Topology",
"Algebraic topology"
] |
48,782,005 | https://en.wikipedia.org/wiki/Astrophysical%20fluid%20dynamics | Astrophysical fluid dynamics is a branch of modern astronomy which deals with the motion of fluids in outer space using fluid mechanics, such as those that make up the Sun and other stars. The subject covers the fundamentals of fluid mechanics using various equations, such as continuity equations, the Navier–Stokes equations, and Euler's equations of collisional fluids. Some of the applications of astrophysical fluid dynamics include dynamics of stellar systems, accretion disks, astrophysical jets, Newtonian fluids, and the fluid dynamics of galaxies.
Introduction
Astrophysical fluid dynamics applies fluid dynamics and its equations to the movement of the fluids in space. The applications are different from regular fluid mechanics in that nearly all calculations take place in a vacuum with zero gravity.
Most of the interstellar medium is not at rest, but is in supersonic motion due to supernova explosions, stellar winds, radiation fields and a time-dependent gravitational field caused by spiral density waves in the stellar discs of galaxies. Since supersonic motions almost always involve shock waves, shock waves must be accounted for in calculations. The galaxy also contains a dynamically significant magnetic field, meaning that the dynamics are governed by the equations of compressible magnetohydrodynamics. In many cases, the electrical conductivity is large enough for the ideal MHD equations to be a good approximation, but this is not true in star forming regions where the gas density is high and the degree of ionization is low.
Star formation
An example problem is that of star formation. Stars form out of the interstellar medium, with this formation mostly occurring in giant molecular clouds such as the Rosette Nebula. An interstellar cloud can collapse due to its self-gravity if it is large enough; however, in the ordinary interstellar medium this can only happen if the cloud has a mass of several thousands of solar masses—much larger than that of any star. Stars may still form, however, from processes that occur if the magnetic pressure is much larger than the thermal pressure, which is the case in giant molecular clouds. These processes rely on the interaction of magnetohydrodynamic waves with a thermal instability. A magnetohydrodynamic wave in a medium in which the magnetic pressure is much larger than the thermal pressure can produce dense regions, but they cannot by themselves make the density high enough for self-gravity to act. However, the gas in star forming regions is heated by cosmic rays and is cooled by radiative processes. The net result is that a gas in a thermal equilibrium state in which heating balances cooling can exist in three different phases at the same pressure: a warm phase with a low density, an unstable phase with intermediate density and a cold phase at low temperature. An increase in pressure due to a supernova or a spiral density wave can shift the gas from the warm phase to the unstable phase, with a magnetohydrodynamic wave then being able to produce dense fragments in the cold phase whose self-gravity is strong enough for them to collapse into stars.
Basic concepts
Concepts of fluid dynamics
Many regular fluid dynamics equations are used in astrophysical fluid dynamics. Some of these equations are:
Continuity equations
The Navier–Stokes equations
Euler's equations
Conservation of mass
The continuity equation is an extension of conservation of mass to fluid flow. Consider a fluid flowing through a fixed volume tank having one inlet and one outlet. If the flow is steady (no accumulation of fluid within the tank), then the rate of fluid flow at entry must be equal to the rate of fluid flow at the exit for mass conservation. If, at an entry (or exit) having a cross-sectional area $A$ (m²), a fluid parcel travels a distance $\mathrm{d}x$ in time $\mathrm{d}t$, then the volume flow rate $Q$ (m³ s⁻¹) is given by:

$Q = A \, \mathrm{d}x/\mathrm{d}t$

but since $\mathrm{d}x/\mathrm{d}t$ is the fluid velocity $v$ (m s⁻¹) we can write:

$Q = A v$

The mass flow rate $\dot{m}$ (kg s⁻¹) is given by the product of density and volume flow rate:

$\dot{m} = \rho Q = \rho A v$

Because of conservation of mass, between two points in a flowing fluid we can write $\dot{m}_1 = \dot{m}_2$. This is equivalent to:

$\rho_1 A_1 v_1 = \rho_2 A_2 v_2$

If the fluid is incompressible ($\rho_1 = \rho_2$) then:

$A_1 v_1 = A_2 v_2$
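The same conservation law in differential (Eulerian) form, which underlies the compressible magnetohydrodynamic equations mentioned above, is the standard continuity equation

    \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0

where $\rho$ is the mass density and $\mathbf{v}$ is the fluid velocity field.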
This result can be applied to many areas in astrophysical fluid dynamics, such as neutron stars.
References
Further reading
Clarke, C.J. & Carswell, R.F. Principles of Astrophysical Fluid Dynamics, Cambridge University Press (2014)
Introduction to Magnetohydrodynamics by P. A. Davidson, Cambridge University Press
Astrophysics
Fluid dynamics
Astronomical sub-disciplines | Astrophysical fluid dynamics | [
"Physics",
"Chemistry",
"Astronomy",
"Engineering"
] | 905 | [
"Chemical engineering",
"Astrophysics",
"Piping",
"Astronomical sub-disciplines",
"Fluid dynamics"
] |
48,782,942 | https://en.wikipedia.org/wiki/Sebelipase%20alfa | Sebelipase alfa, sold under the brand name Kanuma, is a recombinant form of the enzyme lysosomal acid lipase (LAL) that is used as a medication for the treatment of lysosomal acid lipase deficiency (LAL-D). It is administered via intravenous infusion. It was approved for medical use in the European Union and in the United States in 2015.
Medical uses
Sebelipase alfa is indicated for long-term enzyme replacement therapy (ERT) in people of all ages with lysosomal acid lipase (LAL) deficiency.
History
Sebelipase was developed by Synageva, which became part of Alexion Pharmaceuticals in 2015. For its production, chickens are genetically modified to produce the recombinant form of LAL (rhLAL) in their egg white. After extraction and purification, the protein is formulated as the medication. On 8 December 2015 the FDA announced that its approval came from two centers: the Center for Drug Evaluation and Research (CDER) approved the human therapeutic application of the medication, while the Center for Veterinary Medicine (CVM) approved the application for a recombinant DNA construct in genetically engineered chickens to produce rhLAL in their egg whites. At the time it gained FDA approval, Kanuma was the first and only drug manufactured in chicken eggs and intended for use in humans.
Sebelipase alfa is an orphan drug; its effectiveness was established in a phase 3 trial published in 2015. LAL deficiency affects fewer than 0.2 in 10,000 people in the EU.
References
Drugs developed by AstraZeneca
Biopharmaceuticals
Orphan drugs | Sebelipase alfa | [
"Chemistry",
"Biology"
] | 335 | [
"Pharmacology",
"Biotechnology products",
"Biopharmaceuticals"
] |
48,786,616 | https://en.wikipedia.org/wiki/Mother%27s%20curse | In biology, mother's curse is an evolutionary phenomenon in which males inherit deleterious mitochondrial genome (mtDNA) mutations from their mother, while the same mutations are beneficial, neutral, or less deleterious in females.
As mtDNA is usually maternally inherited, mtDNA mutations that are deleterious to males but beneficial, neutral, or less deleterious to females are not selected against, which results in a sex-biased selective sieve. Male-specific deleterious mtDNA mutations can therefore be maintained and reach high frequencies in populations, decreasing male fitness and population viability. In addition, the effect of mtDNA mutations on fitness shows a threshold effect: only when the number of mutations reaches a threshold do they decrease individual fitness.
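The sieve can be illustrated with a short simulation. The following is a minimal, hypothetical Python sketch (the function name and all parameter values are illustrative, not drawn from any study): because mtDNA passes only from mother to offspring, selection acts only through female fitness, and a mutation that is neutral in females drifts like a neutral allele no matter how costly it is to males.

    import random

    def mothers_curse_sim(n_mothers=10_000, generations=200, p0=0.05,
                          female_cost=0.0, male_cost=0.5):
        """Toy Wright-Fisher model of a male-harming mtDNA mutation."""
        p = p0  # mutant haplotype frequency among mothers
        for _ in range(generations):
            # Selection enters only through female fitness, because only
            # mothers transmit mtDNA; male_cost never affects this step.
            w_mut = 1.0 - female_cost
            p = p * w_mut / (p * w_mut + (1.0 - p))
            # Random drift: resample the next generation of mothers.
            p = sum(random.random() < p for _ in range(n_mothers)) / n_mothers
        mean_male_fitness = 1.0 - p * male_cost  # males pay the cost but cannot purge it
        return p, mean_male_fitness

With female_cost = 0 the mutant frequency performs a random walk and can reach high values, while mean male fitness declines in proportion to that frequency.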
Males are more susceptible to mtDNA defects, not only because of the lack of selection on mtDNA in males but also because of sperm's higher energy requirements for motility. There is evidence showing that mtDNA mutations are more likely to affect males. In humans, Leber's hereditary optic neuropathy (LHON) is caused by one or several point mutations in mtDNA, and LHON affects more males than females. In mice, a deletion in mtDNA causes oligospermia and asthenozoospermia, resulting in infertility. Taken together, mtDNA mutations pose a greater threat to males than to females.
Evidence
Mother's curse predicts that mtDNA mutations pose a greater threat to males and that male-specific detrimental mtDNA mutations can be maintained and reach high frequencies. Several studies support these predictions. In humans, an mtDNA haplogroup that reduces sperm motility reaches a frequency of 20%. A 2017 study found that the mother's curse preserved a mutation causing Leber's hereditary optic neuropathy in a population of French Canadians for over 290 years.
In Drosophila melanogaster, mtDNA polymorphism mainly affects nuclear gene expression in males but not in females, and the affected genes are predominantly male-biased. Moreover, Camus et al. constructed 13 D. melanogaster lines with an isogenic nuclear genome and different mtDNA haplotypes, and demonstrated that mtDNA polymorphism affects male aging while having no significant effect on female longevity. Smith et al. analyzed two different mtDNA haplotypes in hares and found that males of the two haplogroups vary in their reproductive success. In addition, the mitochondrial genome is associated with sperm viability and length in seed beetles (Callosobruchus maculatus).
How to counteract the effects of male-specific deleterious mtDNA mutations
If mtDNA mutations deleterious to male fitness cannot be selected against, they will reach high frequencies despite the high fitness cost for males. Eventually, detrimental mutations would become fixed and could lead to species extinction. However, extinction has not been observed in spite of the high mtDNA mutation load, so there must be mechanisms by which species reduce the effects of male-specific deleterious mtDNA mutations.
Paternal leakage – Although mtDNA is thought to be exclusively maternally inherited, paternal inheritance occurs at a low frequency of about 10−4 relative to maternal inheritance in mice. Hence, selection can act on male-specific deleterious mutations when they are paternally inherited, decreasing their frequency.
Interaction between mtDNA and nuclear genome – A good example is cytoplasmic male sterility (CMS). CMS occurs in many plants, and mtDNA mutations are responsible for this phenomenon. However, nuclear restorer genes (Rf or Fr) can restore male fertility. Therefore, interaction between mtDNA and nuclear genes is one mechanism that counteracts the deleterious effects of male-specific mtDNA mutations.
Positive assortative mating and kin selection could also relieve the fitness cost of male-specific deleterious mtDNA mutations, while negative assortative mating has the opposite effect.
Significance for evolution
Mitochondria play a pivotal role in eukaryotic respiration. Because of maternal inheritance, mtDNA is not subject to selection in males. Instead, mutations deleterious only to males can be maintained and reach higher frequencies through selection or genetic drift in females. As a consequence, the asymmetric effects of mtDNA mutations result in sexual conflict. On the other hand, to alleviate the effect of the mother's curse, interaction between mtDNA and nuclear genes promotes coevolution of the mitochondrial and nuclear genomes.
See also
Sexual conflict
References
Genetics concepts
Evolutionary biology concepts
Mitochondria | Mother's curse | [
"Chemistry",
"Biology"
] | 937 | [
"Mitochondria",
"Genetics concepts",
"Metabolism",
"Evolutionary biology concepts"
] |
48,786,651 | https://en.wikipedia.org/wiki/Communication-avoiding%20algorithm | Communication-avoiding algorithms minimize the movement of data within a memory hierarchy in order to improve running time and energy consumption. They minimize the total of two costs (in terms of time and energy): arithmetic and communication. Communication, in this context, refers to moving data, either between levels of memory or between multiple processors over a network. It is much more expensive than arithmetic.
Formal theory
Two-level memory model
A common computational model in analyzing communication-avoiding algorithms is the two-level memory model:
There is one processor and two levels of memory.
Level 1 memory is infinitely large. Level 0 memory ("cache") has size M.
In the beginning, input resides in level 1. In the end, the output resides in level 1.
Processor can only operate on data in cache.
The goal is to minimize data transfers between the two levels of memory.
Matrix multiplication
Corollary 6.2: Any conventional algorithm that computes matrix multiplication using the usual O(n³) arithmetic on a machine with a fast memory of size M must move Ω(n³/√M) words between fast and slow memory.
More general results for other numerical linear algebra operations can be found in the literature.
Motivation
Consider the following running-time model:
Measure of computation = Time per FLOP = γ
Measure of communication = Time per word of data moved = β
⇒ Total running time = γ·(no. of FLOPs) + β·(no. of words)
From the fact that β ≫ γ as measured in time and energy, communication cost dominates computation cost. Technological trends indicate that the relative cost of communication is increasing on a variety of platforms, from cloud computing to supercomputers to mobile devices. Forecasts also predict that the gap between DRAM access time and FLOPs will increase 100× over the coming decade to balance power usage between processors and DRAM.
Energy consumption increases by orders of magnitude as we go higher in the memory hierarchy.
United States president Barack Obama cited communication-avoiding algorithms in the FY 2012 Department of Energy budget request to Congress.
Objectives
Communication-avoiding algorithms are designed with the following objectives:
Reorganize algorithms to reduce communication across all memory hierarchies.
Attain the lower-bound on communication when possible.
The following simple example demonstrates how these are achieved.
Matrix multiplication example
Let A, B and C be square matrices of order n × n. The following naive algorithm implements C = C + A * B:
for i = 1 to n
for j = 1 to n
for k = 1 to n
C(i,j) = C(i,j) + A(i,k) * B(k,j)
Arithmetic cost (time-complexity): n²(2n − 1) for sufficiently large n, or O(n³).
Rewriting this algorithm with communication cost labelled at each step
for i = 1 to n
    {read row i of A into fast memory} - n² reads
    for j = 1 to n
        {read C(i,j) into fast memory} - n² reads
        {read column j of B into fast memory} - n³ reads
        for k = 1 to n
            C(i,j) = C(i,j) + A(i,k) * B(k,j)
        {write C(i,j) back to slow memory} - n² writes
Fast memory may be defined as the local processor memory (CPU cache) of size M and slow memory may be defined as the DRAM.
Communication cost (reads/writes): n³ + 3n² or O(n³)
Since total running time = γ·O(n³) + β·O(n³) and β ≫ γ, the communication cost is dominant. The blocked (tiled) matrix multiplication algorithm reduces this dominant term:
Blocked (tiled) matrix multiplication
Consider A, B and C to be n/b-by-n/b matrices of b-by-b sub-blocks where b is called the block size; assume three b-by-b blocks fit in fast memory.
for i = 1 to n/b
    for j = 1 to n/b
        {read block C(i,j) into fast memory} - b² × (n/b)² = n² reads
        for k = 1 to n/b
            {read block A(i,k) into fast memory} - b² × (n/b)³ = n³/b reads
            {read block B(k,j) into fast memory} - b² × (n/b)³ = n³/b reads
            C(i,j) = C(i,j) + A(i,k) * B(k,j) - {do a matrix multiply on blocks}
        {write block C(i,j) back to slow memory} - b² × (n/b)² = n² writes
Communication cost: 2n³/b + 2n² reads/writes ≪ 2n³ arithmetic cost
Making b as large as possible:
3b² ≤ M
we achieve the following communication lower bound:
√3·n³/√M + 2n², or Ω(no. of FLOPs / √M)
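As an illustration (not part of the original analysis), the blocked algorithm can be sketched in Python with NumPy; the block size b is a tuning parameter assumed to be chosen so that three b-by-b blocks fit in fast memory, and NumPy manages memory itself, so the sketch demonstrates the loop structure rather than real cache control.

    import numpy as np

    def blocked_matmul(A, B, b):
        """Compute C = A @ B one b-by-b tile at a time.

        Only three b-by-b tiles (one each of A, B and C) need to be
        resident in fast memory at once, giving the 2n³/b + 2n²
        communication cost derived above. Assumes b divides n.
        """
        n = A.shape[0]
        C = np.zeros((n, n), dtype=A.dtype)
        for i in range(0, n, b):
            for j in range(0, n, b):
                for k in range(0, n, b):
                    # Block multiply-accumulate: C(i,j) += A(i,k) * B(k,j)
                    C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
        return C

With illustrative costs of γ = 1 ps per FLOP and β = 100 ps per word, the formulas above give roughly 7 s for the naive loop versus roughly 0.2 s for the blocked version at n = 4096 and b = 512.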
Previous approaches for reducing communication
Most of the approaches investigated in the past to address this problem rely on scheduling or tuning techniques that aim to overlap communication with computation. However, this approach can lead to an improvement of at most a factor of two. Ghosting is a different technique for reducing communication, in which a processor stores and redundantly computes data from neighboring processors for future computations. Cache-oblivious algorithms represent a different approach, introduced in 1999 for fast Fourier transforms and then extended to graph algorithms, dynamic programming, etc. They were also applied to several operations in linear algebra, such as dense LU and QR factorizations. The design of architecture-specific algorithms is another approach that can be used for reducing the communication in parallel algorithms, and there are many examples in the literature of algorithms that are adapted to a given communication topology.
See also
Data locality
References
Parallel computing
Algorithms
Optimization algorithms and methods | Communication-avoiding algorithm | [
"Mathematics"
] | 1,207 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic"
] |
48,786,735 | https://en.wikipedia.org/wiki/Microtubule%20plus-end%20tracking%20protein | Microtubule plus-end (positive-end) tracking proteins, or +TIPs, are a type of microtubule-associated protein (MAP) that accumulates at the plus ends of microtubules. +TIPs fall into diverse groups classified by their structural components; however, all are distinguished by their specific accumulation at the plus ends of microtubules and by their ability to maintain interactions with other +TIPs regardless of type. +TIPs can be either membrane-bound or cytoplasmic, depending on the type of +TIP. Most +TIPs track the ends of extending microtubules in a non-autonomous manner.
Regulation of microtubule dynamics
+TIPs' localization at the plus end of microtubules is a highly relevant aspect of microtubule regulation. A +TIP may promote microtubule growth by catalyzing the addition of tubulin at the plus end, or it may balance microtubules at the cell cortex. Many mechanisms of regulation are not fully understood.
In mitosis, +TIPs mediate microtubule addition and promote dynamic regulation at mitotic kinetochores. They also contribute to the extension of endoplasmic reticulum tubules at expanding microtubule ends. Furthermore, +TIPs aid in the organization of specialized microtubule arrays (an oft-cited example being the discrete arrangement of bipolar microtubule bundles in fission yeast).
In addition to these basic functions, +TIPs are crucial for the linkages between microtubule ends and other cellular structures. +TIPs can bind microtubule ends to the cell cortex by coupling to plasma membrane-associated proteins or, in the case of some +TIPs, directly to actin fibers. Moreover, +TIP complexes in budding yeast are utilized for myosin-based transport of microtubule ends. Microtubule plus-end tracking proteins also engage in microtubule–actin crosstalk; for example, the +TIP CLIP-170 controls actin polymerization, a necessity in mammalian phagocytosis.
+TIPs are known to accumulate markedly at centrosomes and other microtubule-organizing centers of cells. This has led to the assumption that +TIPs may aid in microtubule nucleation and anchoring; however, their distinct role at centrosomes still awaits experimental confirmation. Overall, +TIPs play a critical part in morphogenesis, cell division, and motility.
Classifications of +TIPs based on structural domains
About 20 different families of microtubule plus-end tracking proteins (+TIPs) have been discovered since the first finding of the +TIP CLIP-170 (CLIP1) in 1999, and +TIPs have since been studied thoroughly. The largest group of +TIPs contains large, complex proteins with low-complexity sequence regions that are rich in proline and serine residues. These proteins share a basic structural motif, Ser-X-Ile-Pro (SxIP, where X can be any amino acid). This motif allows them to be recognized by another family of +TIPs, the EB proteins. The end-binding (EB) proteins have a distinct N-terminal domain that is responsible for microtubule binding. The C-terminus, however, contains an alpha-helical coiled-coil region that mediates parallel dimerization of EB monomers and comprises an acidic tail (with an EEY/F motif) along with an EB homology (EBH) domain. The EBH domain and/or the EEY/F motif allow the EB proteins to interact physically with an array of +TIPs in order to recruit them to microtubule ends.
Other classes of +TIPs include the cytoskeleton-associated proteins, which are known for their glycine-rich domain and a conserved hydrophobic cavity that permits interactions with microtubules and EB proteins.
There is also a class of +TIPs that contains a TOG domain. TOG domains mediate tubulin binding and are important for microtubule-growth-related activity. In brief, +TIPs can be classified according to the specific domains and functions of the particular protein; many more +TIPs exist, but the following correspond to the main and most highly studied +TIPs.
EB Proteins
EB1 and other proteins
SxlP proteins
APC
MACF
STIM1
TOG proteins
XMAP215
CLASP
Motor proteins
Tea2
MCAK
Dynein HC
Other proteins
Dam1
Lis1
Kar9
Types of +TIPs related to specific functions
Used in Catastrophe:
MCAK
Used in Rescue:
CLIP-170
CLASP
Stabilization:
CLASP
APC
MACF
Polymerization:
EB1
XMAP215
Depolymerization:
MCAK
Communication with cellular components (+TIPs that interact with the specified structure):
Centrosomes
XMAP215
EB1
CLASP
APC
LIS1
FOP
Dynein
Dynactin
CDK5RAP2
Microtubules
Ncd
Klp2
Endoplasmic Reticulum
EB1
STIM1
F-actin
MACF
APC
CLASP
CLIP-170
Kar9
RhoGEF2
p140Cap
Vesicles
Dynein
CLIP-170
Dynactin
Melanophilin
Kinetochores
Dam1
CLASP
CLIP-170
APC
EB1
MCAK
LIS1
Dynein
Dynactin
Cortex of the cell
CLASP
APC
MACF
CLIP-170
EB1
LIS1
Dynein
Dynactin
Expanding the study of +TIPs
Scientists continue to further their understanding of the mechanisms employed by +TIPs and of the range of these proteins. Understanding of microtubule plus-end tracking proteins has greatly expanded since the discovery of CLIP-170 (CLIP1), and many researchers and cytologists expect it to continue to expand. +TIPs may play critical roles beyond the general aspects currently known, involving other cell structures in addition to those already identified: the endoplasmic reticulum, F-actin, vesicles, microtubules, kinetochores, the cell cortex, and centrosomes.
See also
Microtubules
Microtubule associated protein
Proteins
Mitotic spindle
Tubulin
Endoplasmic Reticulum
Kinetochores
Centrosomes
F-actin
References
Proteins
Articles containing video clips | Microtubule plus-end tracking protein | [
"Chemistry"
] | 1,320 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
48,788,092 | https://en.wikipedia.org/wiki/Gene%20therapy%20for%20blood%20diseases | Gene therapy for blood diseases is a novel field of research investigating ways in which components of blood can be genetically modified to treat hematologic diseases.
Current clinical applications
CAR-T therapy targeting leukemia
CAR T-cell therapy is a type of personalized cancer immunotherapy designed to strengthen the patient’s own immune system to better fight cancer. The process begins by extracting T-cells, a type of immune cell, from an individual patient’s blood. The surface of cancer cells contains unique markers called antigens. The patient’s T-cells are genetically modified in laboratories to include chimeric antigen receptors (CARs). The CARs are designed to recognize the specific cancer antigens and bind to them, allowing T-cells to target and attack the cancer cells. The genetically modified T-cells are administered back to the patients as a treatment.
Leukemia is a group of blood cancers most commonly found in children younger than 15 and adults older than 55. In 2017, tisagenlecleucel (Kymriah™), the first CAR-T cell therapy approved by the FDA, became available to anyone up to the age of 25 with acute lymphoblastic leukemia (ALL). As of 2022, a total of six CAR-T therapies had been approved by the FDA, all of which target blood cancers. These CAR-T therapies have shown high efficacy in eradicating leukemia, including in patients with advanced-stage, treatment-resistant (refractory) or returned (relapsed) disease. They also have a high remission rate in comparison to other traditional cancer treatments.
Hematopoietic stem cell therapy targeting sickle cell disease
Patients in the U.S. suffering from sickle cell disease can now receive targeted gene therapies using hematopoietic stem cells, the stem cells that differentiate to give rise to red blood cells, white blood cells, and platelets. These therapies involve removing hematopoietic stem cells from the patient and making specific edits to their genome that reverse the effects of sickle cell disease. The cells are then re-administered to the patient, where they produce red blood cells carrying the factors that promote proper red blood cell shape, reducing the effects of sickle cell disease.
Gene editing therapy for Beta thalassemia
Beta thalassemia is a heritable disorder characterized by the inability to make beta globin protein, and in turn reduced functioning of hemoglobin (of which beta globin is a part). In December 2023, the European Medicines Agency recommended approval for a cell-based gene therapy that works through the CRISPR/Cas9 system. The therapy, known as Casgevy, works by editing the gene BCL11A, whose protein product normally suppresses fetal hemoglobin production; people with beta thalassemia cannot make enough adult hemoglobin. Casgevy uses precise gene editing of stem cells to reduce the activity of BCL11A. With BCL11A suppressed, fetal hemoglobin (HbF) genes are turned back on, allowing the cells to produce enough hemoglobin. Typically, the body stops making fetal hemoglobin around 6 months of age and starts making adult hemoglobin. These serve similar functions: fetal hemoglobin has a higher binding affinity for oxygen than adult hemoglobin, but both are functional at transporting oxygen in the body. Stem cells edited by Casgevy are then transfused back into the body, where they create more HbF and therefore more functional red blood cells carrying this edit. With this therapy, patients who would regularly need blood transfusions can now produce enough hemoglobin for themselves.
Genome editing for HIV resistance
Human immunodeficiency virus (HIV) is a disease that, once contracted, attacks cells that are necessary to fight off infections. It can be transmitted in many ways, including through sexual contact, blood contamination, the sharing of needles, or from mother to infant. If left untreated, HIV can result in acquired immunodeficiency syndrome (AIDS). HIV weakens an individual's immune system, leading to increased risk of fatal infections and cancers. In 2023, around 40 million people globally were living with HIV. Despite options available for the treatment and management of HIV (e.g., highly active antiretroviral therapy; HAART), they come with limitations, including the need for indefinite daily treatment. Attempts to generate a long-term HIV-resistant immune system have been promising, with results from a case report of a patient who developed acute myeloid leukemia after HIV infection. Researchers had previously identified a version of a gene (the CCR5-Δ32 allele) that confers resistance to HIV. They therefore found a donor who carried two copies of this allele (homozygous) and extracted the donor's stem cells in an attempt to produce HIV resistance in the patient with acute myeloid leukemia. After stem cell transplantation from this donor, the patient tested and remained HIV-negative at 20 months post-transplantation and was able to discontinue use of antiviral therapies.
References
Genetic engineering
Blood | Gene therapy for blood diseases | [
"Chemistry",
"Engineering",
"Biology"
] | 1,134 | [
"Biological engineering",
"Bioengineering stubs",
"Biotechnology stubs",
"Genetic engineering",
"Molecular biology"
] |
38,273,081 | https://en.wikipedia.org/wiki/Fluorescence%20image-guided%20surgery | Fluorescence guided surgery (FGS), also called fluorescence image-guided surgery, or in the specific case of tumor resection, fluorescence guided resection, is a medical imaging technique used to detect fluorescently labelled structures during surgery. Similarly to standard image-guided surgery, FGS has the purpose of guiding the surgical procedure and providing the surgeon with real-time visualization of the operating field. When compared to other medical imaging modalities, FGS is cheaper and superior in terms of resolution and number of detectable molecules. As a drawback, penetration depth is usually very poor (100 μm) at visible wavelengths, but it can reach up to 1–2 cm when excitation wavelengths in the near infrared are used.
Imaging devices
FGS is performed using imaging devices whose purpose is to provide real-time, simultaneous information from color reflectance images (bright field) and fluorescence emission. One or more light sources are used to excite and illuminate the sample. Light is collected using optical filters that match the emission spectrum of the fluorophore. Imaging lenses and digital cameras (CCD or CMOS) are used to produce the final image. Live video processing can also be performed to enhance contrast during fluorescence detection and improve the signal-to-background ratio. In recent years a number of commercial companies have emerged to offer devices specializing in fluorescence at NIR wavelengths, with the goal of capitalizing upon the growth in off-label use of indocyanine green (ICG). However, commercial systems with multiple fluorescence channels also exist, for use with fluorescein and protoporphyrin IX (PpIX).
Excitation sources
Fluorescence excitation is accomplished using various kind of light sources. Halogen lamps have the advantage of delivering high power for a relatively low cost. Using different band-pass filters, the same source can be used to produce several excitation channels from the UV to the near infrared. Light-emitting diodes (LEDs) have become very popular for low cost broad band illumination and narrow band excitation in FGS. Because of their characteristic light emission spectrum, a narrow range of wavelengths that matches the absorption spectrum of a given fluorophore can be selected without using a filter, further reducing the complexity of the optical system. Both halogen lamps and LEDs are suitable for white light illumination of the sample. Excitation can also be performed using laser diodes, particularly when high power over a short wavelength range (typically 5-10 nm) is needed. In this case the system has to account for the limits of exposure to laser radiation.
Detection techniques
Live images from the fluorescent dye and the surgical field are obtained using a combination of filters, lenses and cameras. During open surgery, hand-held devices are usually preferred for their ease of use and mobility. A stand or arm can be used to hold the system above the operating field, particularly when the weight and complexity of the device are high (e.g. when multiple cameras are used). The main disadvantage of such devices is that operating theater lights can interfere with the fluorescence emission channel, with a consequent decrease in signal-to-background ratio. This issue is usually solved by dimming or switching off the theater lights during fluorescence detection.
FGS can also be performed using minimally invasive devices such as laparoscopes or endoscopes. In this case, a system of filters, lenses and cameras is attached to the end of the probe. Unlike open surgery, the background from external light sources is reduced. Nevertheless, the excitation power density at the sample is limited by the low light transmission of the fiber optics in endoscopes and laparoscopes, particularly in the near infrared. Moreover, the ability of collecting light is much reduced compared to standard imaging lenses used for open surgery devices.
FGS devices can also be implemented for robotic surgery (for example in the da Vinci Surgical System).
Clinical applications
The major limitation in FGS is the availability of clinically approved fluorescent dyes which have a novel biological indication. Indocyanine green (ICG) has been widely used as a non-specific agent to detect sentinel lymph nodes during surgery. ICG has the main advantage of absorbing and emitting light in the near infrared, allowing detection of nodes under several centimeters of tissue. Methylene blue can also be used for the same purpose, with an excitation peak in the red portion of the spectrum. First clinical applications using tumor-specific agents that detect deposits of ovarian cancer during surgery have been carried out.
History
The first uses of FGS date back to the 1940s, when fluorescein was first used in humans to enhance the imaging of brain tumors, cysts, edema and blood flow in vivo. Use subsequently declined until a multicenter trial in Germany concluded that FGS based upon fluorescence from PpIX provided significant short-term benefit in guiding glioma resection.
See also
Endoscopy
Fluorescence
Image-guided surgery
Laparoscopy
Near infrared
Near-infrared window in biological tissue
Surgery
References
Biomedical engineering
Surgical procedures and techniques
Fluorescence
Fluorescence techniques
Fluorescent dyes
Medical equipment
Medical imaging | Fluorescence image-guided surgery | [
"Chemistry",
"Engineering",
"Biology"
] | 1,072 | [
"Luminescence",
"Fluorescence",
"Biological engineering",
"Biomedical engineering",
"Medical equipment",
"Fluorescence techniques",
"Medical technology"
] |
38,279,132 | https://en.wikipedia.org/wiki/Convergent%20encryption | Convergent encryption, also known as content hash keying, is a cryptosystem that produces identical ciphertext from identical plaintext files. This has applications in cloud computing to remove duplicate files from storage without the provider having access to the encryption keys. The combination of deduplication and convergent encryption was described in a backup system patent filed by Stac Electronics in 1995. This combination has been used by Farsite, Permabit, Freenet, MojoNation, GNUnet, flud, and the Tahoe Least-Authority File Store.
The system gained additional visibility in 2011 when cloud storage provider Bitcasa announced they were using convergent encryption to enable de-duplication of data in their cloud storage service.
Overview
The system computes a cryptographic hash of the plaintext in question.
The system then encrypts the plaintext by using the hash as a key.
Finally, the hash itself is stored, encrypted with a key chosen by the user.
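A minimal Python sketch of these three steps (assuming the third-party cryptography package; deriving the nonce from the key is an illustrative way to make encryption deterministic, not a detail of any particular product):

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def convergent_encrypt(plaintext: bytes):
        # Step 1: hash the plaintext; the digest doubles as an AES-256 key.
        key = hashlib.sha256(plaintext).digest()
        # Step 2: encrypt with a nonce derived from the key, so identical
        # plaintexts always yield identical ciphertexts (the key is only
        # ever reused for identical plaintexts, which give identical output).
        nonce = hashlib.sha256(b"nonce" + key).digest()[:12]
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        # Step 3: the caller stores the key itself encrypted under a
        # user-chosen key (not shown here).
        return ciphertext, key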
Known Attacks
Convergent encryption is open to a "confirmation of a file" attack, in which an attacker can effectively confirm whether a target possesses a certain file by encrypting an unencrypted (plain-text) version and then simply comparing the output with files possessed by the target. This attack poses a problem for a user storing information that is non-unique, i.e. either publicly available or already held by the adversary: for example, banned books or files that cause copyright infringement. An argument could be made that a confirmation-of-a-file attack is rendered less effective by adding a unique piece of data, such as a few random characters, to the plain text before encryption; this makes the uploaded file unique and therefore results in a unique encrypted file. However, some implementations of convergent encryption break the plain text into blocks based on file content and convergently encrypt each block independently, which may inadvertently defeat attempts at making the file unique by adding bytes at the beginning or end.
Even more alarming than the confirmation attack is the "learn the remaining information" attack described by Drew Perttula in 2008. This type of attack applies to the encryption of files that are only slight variations of a public document. For example, if the defender encrypts a bank form including a ten-digit bank account number, an attacker who is aware of the generic bank form format may extract the defender's bank account number by producing bank forms for all possible account numbers, encrypting them, and comparing those encryptions with the defender's encrypted file. Note that this attack can be extended to attack a large number of targets at once (all spelling variations of a target bank customer in the example above, or even all potential bank customers), and the presence of this problem extends to any type of form document: tax returns, financial documents, healthcare forms, employment forms, etc. Also note that there is no known method for decreasing the severity of this attack: adding a few random bytes to files as they are stored does not help, since those bytes can likewise be attacked with the "learn the remaining information" approach. The only effective approach to mitigating this attack is to encrypt the contents of files with a non-convergent secret before storing (negating any benefit from convergent encryption), or to simply not use convergent encryption in the first place.
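A hypothetical attacker-side sketch of this attack, reusing convergent_encrypt from the overview above (the template string, field name and ten-digit search space are illustrative; the exhaustive loop is shown for clarity, not practicality):

    def learn_remaining_information(target_ciphertext: bytes, template: str):
        """Brute-force the unknown field of a known form template by
        comparing convergent ciphertexts against the target's stored file."""
        for account in range(10**10):  # every possible ten-digit number
            candidate = template.format(account=f"{account:010d}").encode()
            ciphertext, _ = convergent_encrypt(candidate)
            if ciphertext == target_ciphertext:
                return f"{account:010d}"  # the defender's account number
        return None

Because the key is derived deterministically from the plaintext, the attacker needs no secret material at all; only a high-entropy, non-convergent secret in the encryption step defeats the comparison.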
See also
Salt (cryptography)
Deterministic encryption
References
Cryptography | Convergent encryption | [
"Mathematics",
"Engineering"
] | 722 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |