| text | source |
|---|---|
Gauss notation (also known as a Gauss code or Gauss words [ 1 ] ) is a notation for mathematical knots . [ 2 ] [ 3 ] It is created by enumerating and classifying the crossings of a projection of the knot onto a plane (a knot diagram). [ 2 ] [ 4 ] [ 5 ] It is named after the German mathematician Carl Friedrich Gauss (1777–1855).
Gauss code represents a knot with a sequence of integers. Each crossing is labelled with a single number, but because the knot passes through every crossing twice, each label appears twice in the sequence. When the strand passes over at a crossing, the label is listed as a positive number; when it passes under, as a negative number. [ 6 ]
For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3. [ 7 ]
Gauss code has a few limitations as a means of identifying knots. The starting point on the knot at which to begin tracing the crossings is arbitrary, and there is no canonical direction in which to trace. In addition, the Gauss code does not indicate the handedness of each crossing, which is necessary to distinguish a knot from its mirror image. For example, the Gauss code for the trefoil knot does not specify whether it is the right-handed or left-handed trefoil. [ 8 ]
This last issue is often solved by using the extended Gauss code . In this modification, the positive/negative sign on the second instance of every number is chosen to represent the handedness of that crossing, rather than the over/under sign of the crossing, which is made clear in the first instance of the number. A right-handed crossing is given a positive number, and a left-handed crossing is given a negative number. [ 6 ]
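As an informal illustration of the labelling rule just described, the following Python sketch checks whether a sequence of signed integers is a plausible classical Gauss code: every label must appear exactly twice, once with a positive sign (over) and once with a negative sign (under). The function name and the second test sequence are invented for this example, and the check says nothing about whether the code is realizable by an actual knot diagram.

```python
from collections import defaultdict

def is_plausible_gauss_code(code):
    """Illustrative check: each crossing label occurs exactly twice,
    once as an overcrossing (+) and once as an undercrossing (-)."""
    signs = defaultdict(list)
    for entry in code:
        signs[abs(entry)].append(entry > 0)
    return all(len(v) == 2 and v[0] != v[1] for v in signs.values())

trefoil = [1, -2, 3, -1, 2, -3]                # the trefoil example from the text
print(is_plausible_gauss_code(trefoil))        # True
print(is_plausible_gauss_code([1, -2, 1, 2]))  # False: label 1 passes over twice
```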
| https://en.wikipedia.org/wiki/Gauss_notation |
Carl Friedrich Gauss , in his treatise Allgemeine Theorie des Erdmagnetismus , [ 1 ] presented a method, the Gauss separation algorithm , of partitioning the magnetic field vector $\mathbf{B}(r,\theta,\phi)$, measured over the surface of a sphere, into two components, internal and external, arising from electric currents (per the Biot–Savart law ) flowing in the volumes interior and exterior to the spherical surface, respectively. The method employs spherical harmonics . When radial currents flow through the surface of interest, the decomposition is more complex, involving the decomposition of the field into poloidal and toroidal components. In this case, an additional term (the toroidal component) accounts for the contribution of the radial current to the magnetic field on the surface. [ 2 ]
The method is commonly used in studies of terrestrial and planetary magnetism, to relate measurements of magnetic fields either at the planetary surface or in orbit above the planet to currents flowing in the planet's interior (internal currents) and its magnetosphere (external currents). Ionospheric currents would be exterior to the planet's surface, but might be internal currents from the vantage point of a satellite orbiting the planet.
| https://en.wikipedia.org/wiki/Gauss_separation_algorithm |
Gaussian / ˈ ɡ aʊ s . i . ə n / is a general purpose computational chemistry software package initially released in 1970 by John Pople [ 1 ] [ 2 ] and his research group at Carnegie Mellon University as Gaussian 70. [ 3 ] It has been continuously updated since then. [ 4 ] The name originates from Pople's use of Gaussian orbitals to speed up molecular electronic structure calculations as opposed to using Slater-type orbitals , a choice made to improve performance on the limited computing capacities of then-current computer hardware for Hartree–Fock calculations. The current version of the program is Gaussian 16. [ 5 ] Originally available through the Quantum Chemistry Program Exchange, it was later licensed out of Carnegie Mellon University , and since 1987 has been developed and licensed by Gaussian, Inc.
According to the most recent Gaussian manual, the package can do: [ 6 ]
Gaussian 70, Gaussian 76, Gaussian 80, Gaussian 82, Gaussian 86, Gaussian 88, Gaussian 90, Gaussian 92, Gaussian 92/DFT, Gaussian 94, Gaussian 98, Gaussian 03, Gaussian 09, and Gaussian 16.
Other programs named 'Gaussian XX' were placed among the holdings of the Quantum Chemistry Program Exchange . These were unofficial, unverified ports of the program to other computer platforms.
In the past, Gaussian, Inc. has attracted controversy for its licensing terms that stipulate that researchers who develop competing software packages are not permitted to use the software. Some scientists consider these terms overly restrictive. The anonymous group bannedbygaussian.org [ 11 ] has published a list of scientists whom it claims are not permitted to use GAUSSIAN software. These assertions were repeated by Jim Giles in 2004 in Nature . [ 12 ] The controversy was also noted in 1999 by Chemical and Engineering News [ 13 ] [ 14 ] (repeated without additional content in 2004), and in 2000, the World Association of Theoretically Oriented Chemists Scientific Board held a referendum of its executive board members on this issue with a majority (23 of 28) approving the resolution opposing the restrictive licenses. [ 15 ]
Gaussian, Inc. disputes the accuracy of these descriptions of its policy and actions, [ 16 ] noting that all of the listed institutions do in fact have licenses for everyone but directly competing researchers. They also claim that not licensing competitors is standard practice in the software industry and members of the Gaussian collaboration community have been refused licenses from competing institutions. | https://en.wikipedia.org/wiki/Gaussian_(software) |
The Gaussian correlation inequality ( GCI ), formerly known as the Gaussian correlation conjecture ( GCC ), is a mathematical theorem in the fields of mathematical statistics and convex geometry .
The Gaussian correlation inequality states:
Let $\mu$ be an $n$-dimensional Gaussian probability measure on $\mathbb{R}^{n}$, i.e. a multivariate normal distribution , centered at the origin. Then for all convex sets $E, F \subset \mathbb{R}^{n}$ that are symmetric about the origin , $$\mu(E\cap F)\geq \mu(E)\,\mu(F).$$
As a simple example for n =2, one can think of darts being thrown at a board, with their landing spots in the plane distributed according to a 2-variable normal distribution centered at the origin. (This is a reasonable assumption for any given darts player, with different players being described by different normal distributions.) If we now consider a circle and a rectangle in the plane, both centered at the origin, then the proportion of the darts landing in the intersection of both shapes is no less than the product of the proportions of the darts landing in each shape. This can also be formulated in terms of conditional probabilities : if you're informed that your last dart hit the rectangle, then this information will increase your estimate of the probability that the dart hit the circle.
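The dart analogy can be checked numerically. The following Python sketch (an illustration, not part of the theorem's statement; the covariance, disc radius and rectangle dimensions are arbitrary choices) samples a centered bivariate normal and compares the empirical probability of landing in the intersection with the product of the individual probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
cov = [[1.0, 0.6], [0.6, 1.0]]                   # any centered normal will do
pts = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000)

in_disc = pts[:, 0]**2 + pts[:, 1]**2 <= 1.5**2  # centered disc of radius 1.5
in_rect = (np.abs(pts[:, 0]) <= 1.0) & (np.abs(pts[:, 1]) <= 2.0)

p_both = np.mean(in_disc & in_rect)
p_product = np.mean(in_disc) * np.mean(in_rect)
print(p_both, p_product)                         # the inequality predicts p_both >= p_product
```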
A special case of the inequality was conjectured in 1955; [ 1 ] further development was given by Olive Jean Dunn in 1958. [ 2 ] [ 3 ] The general case was stated in 1972, also as a conjecture. [ 4 ] The case of dimension n =2 was proved in 1977 [ 5 ] and certain special cases of higher dimension have also been proven in subsequent years. [ 6 ]
The general case of the inequality remained open until 2014, when Thomas Royen , a retired German statistician, proved it using relatively elementary tools. [ 7 ] In fact, Royen generalized the conjecture and proved it for multivariate gamma distributions . The proof did not gain attention when it was published in 2014, due to Royen's relative anonymity and the fact that the proof was published in a predatory journal . [ 2 ] [ 8 ] Another reason was a history of false proofs (by others) and many failed attempts to prove the conjecture, causing skepticism among mathematicians in the field. [ 2 ]
The conjecture, and its solution, came to public attention in 2017, when other mathematicians described Royen's proof in a mainstream publication [ 9 ] and popular media reported on the story. [ 2 ] [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Gaussian_correlation_conjecture |
A Gaussian fixed point is a fixed point of the renormalization group flow which is noninteracting in the sense that it is described by a free field theory . [ 1 ] The word Gaussian comes from the fact that the probability distribution is Gaussian at the Gaussian fixed point. This means that Gaussian fixed points are exactly solvable ( trivially solvable in fact). Slight deviations from the Gaussian fixed point can be described by perturbation theory.
| https://en.wikipedia.org/wiki/Gaussian_fixed_point |
In probability theory and statistical mechanics , the Gaussian free field ( GFF ) is a Gaussian random field , a central model of random surfaces (random height functions).
The discrete version can be defined on any graph , usually a lattice in d -dimensional Euclidean space. The continuum version is defined on R d or on a bounded subdomain of R d . It can be thought of as a natural generalization of one-dimensional Brownian motion to d time (but still one space) dimensions: it is a random (generalized) function from R d to R . In particular, the one-dimensional continuum GFF is just the standard one-dimensional Brownian motion or Brownian bridge on an interval.
In the theory of random surfaces, it is also called the harmonic crystal . It is also the starting point for many constructions in quantum field theory , where it is called the Euclidean bosonic massless free field . A key property of the 2-dimensional GFF is conformal invariance , which relates it in several ways to the Schramm–Loewner evolution , see Sheffield (2005) and Dubédat (2009) .
Similarly to Brownian motion, which is the scaling limit of a wide range of discrete random walk models (see Donsker's theorem ), the continuum GFF is the scaling limit of not only the discrete GFF on lattices, but of many random height function models, such as the height function of uniform random planar domino tilings , see Kenyon (2001) . The planar GFF is also the limit of the fluctuations of the characteristic polynomial of a random matrix model, the Ginibre ensemble, see Rider & Virág (2007) .
The structure of the discrete GFF on any graph is closely related to the behaviour of the simple random walk on the graph . For instance, the discrete GFF plays a key role in the proof by Ding, Lee & Peres (2012) of several conjectures about the cover time of graphs (the expected number of steps it takes for the random walk to visit all the vertices).
Let P ( x , y ) be the transition kernel of the Markov chain given by a random walk on a finite graph G ( V , E ). Let U be a fixed non-empty subset of the vertices V , and take the set of all real-valued functions $\varphi$ with some prescribed values on U . We then define a Hamiltonian by $$H(\varphi)=\frac{1}{2}\sum_{x,y}P(x,y)\,\bigl(\varphi(x)-\varphi(y)\bigr)^{2}.$$
Then, the random function with probability density proportional to $\exp(-H(\varphi))$ with respect to the Lebesgue measure on $\mathbb{R}^{V\setminus U}$ is called the discrete GFF with boundary U .
It is not hard to show that the expected value $\mathbb{E}[\varphi(x)]$ is the discrete harmonic extension of the boundary values from U (harmonic with respect to the transition kernel P ), and the covariances $\mathrm{Cov}[\varphi(x),\varphi(y)]$ are equal to the discrete Green's function G ( x , y ).
So, in one sentence, the discrete GFF is the Gaussian random field on V with covariance structure given by the Green's function associated to the transition kernel P .
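As a minimal numerical sketch of this characterization (illustrative only, and using the combinatorial-Laplacian convention with unit edge weights and γ = 1 rather than the transition-kernel normalization used above), one can sample a discrete GFF on a path graph with zero boundary values at both endpoints: the covariance of the interior values is the inverse of the interior block of the graph Laplacian, i.e. the discrete Green's function.

```python
import numpy as np

n = 100                                    # interior vertices of a path graph
L = np.zeros((n, n))
for i in range(n):
    L[i, i] = 2.0                          # degree; the two boundary vertices are fixed to 0
    if i > 0:
        L[i, i - 1] = L[i - 1, i] = -1.0   # off-diagonal entries for neighbouring vertices

green = np.linalg.inv(L)                   # discrete Green's function = covariance matrix
rng = np.random.default_rng(1)
phi = rng.multivariate_normal(np.zeros(n), green)   # one sample of the discrete GFF

# the variance of phi at a vertex is the diagonal Green's function entry:
# largest in the middle of the path, smallest near the boundary
print(green[n // 2, n // 2], green[0, 0])
```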
The definition of the continuum field necessarily uses some abstract machinery, since it does not exist as a random height function. Instead, it is a random generalized function, or in other words, a probability distribution on distributions (with two different meanings of the word "distribution").
Given a domain Ω ⊆ R n , consider the Dirichlet inner product $$\langle f,g\rangle :=\int _{\Omega }\bigl(Df(x),Dg(x)\bigr)\,dx$$
for smooth functions ƒ and g on Ω, coinciding with some prescribed boundary function on $\partial\Omega$, where $Df(x)$ is the gradient vector at $x\in\Omega$. Then take the Hilbert space closure with respect to this inner product; this is the Sobolev space $H^{1}(\Omega)$.
The continuum GFF $\varphi$ on $\Omega$ is a Gaussian random field indexed by $H^{1}(\Omega)$, i.e., a collection of Gaussian random variables, one for each $f\in H^{1}(\Omega)$, denoted by $\langle\varphi,f\rangle$, such that the covariance structure is $\mathrm{Cov}[\langle\varphi,f\rangle,\langle\varphi,g\rangle]=\langle f,g\rangle$ for all $f,g\in H^{1}(\Omega)$.
Such a random field indeed exists, and its distribution is unique. Given any orthonormal basis $\psi_1, \psi_2, \dots$ of $H^{1}(\Omega)$ (with the given boundary condition), we can form the formal infinite sum $$\varphi :=\sum _{k=1}^{\infty }\xi _{k}\psi _{k},$$
where the $\xi_k$ are i.i.d. standard normal variables . This random sum almost surely will not exist as an element of $H^{1}(\Omega)$, since if it did, then $\sum_k \xi_k^2$ would have to be finite, which fails almost surely for i.i.d. standard normals.
However, it exists as a random generalized function , since for any $f\in H^{1}(\Omega)$ we can write $f=\sum_k c_k\psi_k$ with $\sum_k c_k^2<\infty$, hence $$\langle \varphi ,f\rangle :=\sum _{k}\xi _{k}c_{k}$$
is a centered Gaussian random variable with finite variance $\sum_k c_k^2$.
Although the above argument shows that $\varphi$ does not exist as a random element of $H^{1}(\Omega)$, it still could be that it is a random function on $\Omega$ in some larger function space. In fact, in dimension $n=1$, an orthonormal basis of $H^{1}[0,1]$ is given by $\psi_k(t):=\int_0^t \varphi_k(s)\,ds$, where $(\varphi_k)$ is an orthonormal basis of $L^{2}[0,1]$,
and then $\varphi(t):=\sum_{k=1}^{\infty}\xi_k\psi_k(t)$ is easily seen to be a one-dimensional Brownian motion (or Brownian bridge, if the boundary values are set up that way). So, in this case, it is a random continuous function (not belonging to $H^{1}[0,1]$, however). For instance, if $(\varphi_k)$ is the Haar basis , then this is Lévy's construction of Brownian motion, see, e.g., Section 3 of Peres (2001) .
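A concrete way to see this (an illustrative sketch with one explicit choice of basis, namely φ_k(s) = √2 cos(kπs), so that ψ_k(t) = √2 sin(kπt)/(kπ) vanishes at both endpoints) is to truncate the formal sum numerically; the truncated sum approximates a Brownian bridge on [0, 1], whose variance at time t is t(1 − t).

```python
import numpy as np

rng = np.random.default_rng(2)
K = 2000                                        # number of modes kept in the truncation
t = np.linspace(0.0, 1.0, 501)
k = np.arange(1, K + 1)

xi = rng.standard_normal(K)                     # i.i.d. standard normal coefficients
psi = np.sqrt(2.0) * np.sin(np.outer(t, k) * np.pi) / (k * np.pi)
phi = psi @ xi                                  # approximate sample path of the bridge

# the series variance sum_k psi_k(t)^2 should approach the bridge covariance t*(1-t)
print(np.sum(psi[250] ** 2), 0.5 * (1.0 - 0.5))
```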
On the other hand, for $n\geq 2$ it can indeed be shown to exist only as a generalized function, see Sheffield (2007) .
In dimension n = 2, the conformal invariance of the continuum GFF is clear from the invariance of the Dirichlet inner product. The corresponding two-dimensional conformal field theory describes a massless free scalar boson . | https://en.wikipedia.org/wiki/Gaussian_free_field |
The Gaussian integral , also known as the Euler–Poisson integral , is the integral of the Gaussian function $f(x)=e^{-x^{2}}$ over the entire real line. Named after the German mathematician Carl Friedrich Gauss , the integral is $$\int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.$$
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, [ 1 ] attributing its discovery to Laplace . The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution . The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution . In physics this type of integral appears frequently, for example, in quantum mechanics , to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics , to find its partition function .
Although no elementary function exists for the error function, as can be proven by the Risch algorithm , [ 2 ] the Gaussian integral can be solved analytically through the methods of multivariable calculus . That is, there is no elementary indefinite integral for $\int e^{-x^{2}}\,dx,$ but the definite integral $\int _{-\infty }^{\infty }e^{-x^{2}}\,dx$ can be evaluated. The definite integral of an arbitrary Gaussian function is $$\int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.$$
A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, [ 3 ] is to make use of the property that:
$$\left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\int _{-\infty }^{\infty }e^{-y^{2}}\,dy=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dx\,dy.$$
Consider the function $e^{-\left(x^{2}+y^{2}\right)}=e^{-r^{2}}$ on the plane $\mathbb{R}^{2}$, and compute its integral two ways:
Comparing these two computations yields the integral, though one should take care about the improper integrals involved.
$$\begin{aligned}\iint _{\mathbb {R} ^{2}}e^{-\left(x^{2}+y^{2}\right)}dx\,dy&=\int _{0}^{2\pi }\int _{0}^{\infty }e^{-r^{2}}r\,dr\,d\theta \\&=2\pi \int _{0}^{\infty }re^{-r^{2}}\,dr\\&=2\pi \int _{-\infty }^{0}{\tfrac {1}{2}}e^{s}\,ds&&s=-r^{2}\\&=\pi \int _{-\infty }^{0}e^{s}\,ds\\&=\lim _{x\to -\infty }\pi \left(e^{0}-e^{x}\right)\\&=\pi ,\end{aligned}$$ where the factor of r is the Jacobian determinant , which appears because of the transform to polar coordinates ( r dr dθ is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = − r 2 , so ds = −2 r dr .
Combining these yields $\left(\int _{-\infty }^{\infty }e^{-x^{2}}\,dx\right)^{2}=\pi ,$ so $\int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.$
To justify the improper double integrals and equating the two expressions, we begin with an approximating function: $$I(a)=\int _{-a}^{a}e^{-x^{2}}dx.$$
If the integral $\int _{-\infty }^{\infty }e^{-x^{2}}\,dx$ is absolutely convergent, then its Cauchy principal value , that is, the limit $\lim _{a\to \infty }I(a),$ coincides with it. To see that this is the case, consider that
$$\int _{-\infty }^{\infty }\left|e^{-x^{2}}\right|dx<\int _{-\infty }^{-1}-xe^{-x^{2}}\,dx+\int _{-1}^{1}e^{-x^{2}}\,dx+\int _{1}^{\infty }xe^{-x^{2}}\,dx<\infty .$$
So we can compute $\int _{-\infty }^{\infty }e^{-x^{2}}\,dx$ by just taking the limit $\lim _{a\to \infty }I(a).$
Taking the square of $I(a)$ yields
$$\begin{aligned}I(a)^{2}&=\left(\int _{-a}^{a}e^{-x^{2}}\,dx\right)\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\\&=\int _{-a}^{a}\left(\int _{-a}^{a}e^{-y^{2}}\,dy\right)\,e^{-x^{2}}\,dx\\&=\int _{-a}^{a}\int _{-a}^{a}e^{-\left(x^{2}+y^{2}\right)}\,dy\,dx.\end{aligned}$$
Using Fubini's theorem , the above double integral can be seen as an area integral $$\iint _{[-a,a]\times [-a,a]}e^{-\left(x^{2}+y^{2}\right)}\,d(x,y),$$ taken over a square with vertices {(− a , a ), ( a , a ), ( a , − a ), (− a , − a )} on the xy - plane .
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than $I(a)^{2}$, and similarly the integral taken over the square's circumcircle must be greater than $I(a)^{2}$. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates :
$$x=r\cos \theta ,\qquad y=r\sin \theta$$ $$\mathbf {J} (r,\theta )={\begin{bmatrix}{\dfrac {\partial x}{\partial r}}&{\dfrac {\partial x}{\partial \theta }}\\[1em]{\dfrac {\partial y}{\partial r}}&{\dfrac {\partial y}{\partial \theta }}\end{bmatrix}}={\begin{bmatrix}\cos \theta &-r\sin \theta \\\sin \theta &r\cos \theta \end{bmatrix}}$$ $$d(x,y)=\left|J(r,\theta )\right|d(r,\theta )=r\,d(r,\theta ).$$ $$\int _{0}^{2\pi }\int _{0}^{a}re^{-r^{2}}\,dr\,d\theta <I^{2}(a)<\int _{0}^{2\pi }\int _{0}^{a{\sqrt {2}}}re^{-r^{2}}\,dr\,d\theta .$$
(See the article on polar coordinates for help with this transformation.)
Integrating, $$\pi \left(1-e^{-a^{2}}\right)<I^{2}(a)<\pi \left(1-e^{-2a^{2}}\right).$$
By the squeeze theorem , this gives the Gaussian integral $$\int _{-\infty }^{\infty }e^{-x^{2}}\,dx={\sqrt {\pi }}.$$
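A quick numerical cross-check of this result (illustrative only) using routine quadrature:

```python
import numpy as np
from scipy import integrate

value, _ = integrate.quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(value, np.sqrt(np.pi))   # both approximately 1.7724538509
```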
A different technique, which goes back to Laplace (1812), [ 3 ] is the following. Let $$y=xs,\qquad dy=x\,ds.$$
Since the limits on s as y → ±∞ depend on the sign of x , it simplifies the calculation to use the fact that $e^{-x^{2}}$ is an even function , and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,
$$\int _{-\infty }^{\infty }e^{-x^{2}}\,dx=2\int _{0}^{\infty }e^{-x^{2}}\,dx.$$
Thus, over the range of integration, x ≥ 0 , and the variables y and s have the same limits. This yields: $$\begin{aligned}I^{2}&=4\int _{0}^{\infty }\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}dy\,dx\\&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-\left(x^{2}+y^{2}\right)}\,dy\right)\,dx\\&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,ds\right)\,dx\end{aligned}$$ Then, using Fubini's theorem to switch the order of integration : $$\begin{aligned}I^{2}&=4\int _{0}^{\infty }\left(\int _{0}^{\infty }e^{-x^{2}\left(1+s^{2}\right)}x\,dx\right)\,ds\\&=4\int _{0}^{\infty }\left[{\frac {e^{-x^{2}\left(1+s^{2}\right)}}{-2\left(1+s^{2}\right)}}\right]_{x=0}^{x=\infty }\,ds\\&=4\left({\frac {1}{2}}\int _{0}^{\infty }{\frac {ds}{1+s^{2}}}\right)\\&=2\arctan(s){\Big |}_{0}^{\infty }\\&=\pi .\end{aligned}$$
Therefore, $I={\sqrt {\pi }}$, as expected.
In the Laplace approximation , we deal only with terms up to second order in the Taylor expansion, so we consider $e^{-x^{2}}\approx 1-x^{2}\approx (1+x^{2})^{-1}$.
In fact, since $(1+t)e^{-t}\leq 1$ for all $t$, we have the exact bounds $$1-x^{2}\leq e^{-x^{2}}\leq (1+x^{2})^{-1}.$$ We can then bound the integral at the Laplace-approximation scale: $$\int _{[-1,1]}(1-x^{2})^{n}dx\leq \int _{[-1,1]}e^{-nx^{2}}dx\leq \int _{[-1,1]}(1+x^{2})^{-n}dx.$$
That is, $$2{\sqrt {n}}\int _{[0,1]}(1-x^{2})^{n}dx\leq \int _{[-{\sqrt {n}},{\sqrt {n}}]}e^{-x^{2}}dx\leq 2{\sqrt {n}}\int _{[0,1]}(1+x^{2})^{-n}dx.$$
By trigonometric substitution, we exactly compute those two bounds: $2{\sqrt {n}}\,(2n)!!/(2n+1)!!$ and $2{\sqrt {n}}\,(\pi /2)\,(2n-3)!!/(2n-2)!!$.
By taking the square root of the Wallis formula , $${\frac {\pi }{2}}=\prod _{n=1}^{\infty }{\frac {(2n)^{2}}{(2n-1)(2n+1)}},$$ we have $${\sqrt {\pi }}=2\lim _{n\to \infty }{\sqrt {n}}{\frac {(2n)!!}{(2n+1)!!}},$$ the desired lower bound limit. Similarly, we can get the desired upper bound limit.
Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
The integrand is an even function ,
$$\int _{-\infty }^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }e^{-x^{2}}dx.$$
Thus, after the change of variable $x={\sqrt {t}}$, this turns into the Euler integral
$$2\int _{0}^{\infty }e^{-x^{2}}dx=2\int _{0}^{\infty }{\frac {1}{2}}\ e^{-t}\ t^{-{\frac {1}{2}}}dt=\Gamma \left({\frac {1}{2}}\right)={\sqrt {\pi }},$$
where $\Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}dt$ is the gamma function . This shows why the factorial of a half-integer is a rational multiple of ${\sqrt {\pi }}$. More generally, $$\int _{0}^{\infty }x^{n}e^{-ax^{b}}dx={\frac {\Gamma \left((n+1)/b\right)}{ba^{(n+1)/b}}},$$ which can be obtained by substituting $t=ax^{b}$ in the integrand of the gamma function to get $\Gamma (z)=a^{z}b\int _{0}^{\infty }x^{bz-1}e^{-ax^{b}}dx$.
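The generalization can be spot-checked symbolically; the following SymPy sketch (illustrative, for a few arbitrarily chosen values of n, a and b) compares direct integration with the closed form.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
for n, a, b in [(0, 1, 2), (2, 3, 2), (1, 2, 4)]:
    direct = sp.integrate(x**n * sp.exp(-a * x**b), (x, 0, sp.oo))
    closed = sp.gamma(sp.Rational(n + 1, b)) / (b * a**sp.Rational(n + 1, b))
    print(sp.simplify(direct - closed))   # 0 in each case
```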
The integral of an arbitrary Gaussian function is $$\int _{-\infty }^{\infty }e^{-a(x+b)^{2}}\,dx={\sqrt {\frac {\pi }{a}}}.$$
An alternative form is $$\int _{-\infty }^{\infty }e^{-(ax^{2}+bx+c)}\,dx={\sqrt {\frac {\pi }{a}}}\,e^{{\frac {b^{2}}{4a}}-c}.$$
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution , for example.
$$\int _{-\infty }^{\infty }e^{{\frac {1}{2}}it^{2}}dt=e^{i\pi /4}{\sqrt {2\pi }}$$ and more generally, $$\int _{\mathbb {R} ^{N}}e^{{\frac {1}{2}}i\mathbf {x} ^{T}A\mathbf {x} }dx=\det(A)^{-{\frac {1}{2}}}\left(e^{i\pi /4}{\sqrt {2\pi }}\right)^{N}$$ for any positive-definite symmetric matrix $A$.
Suppose A is a symmetric positive-definite (hence invertible) n × n precision matrix , which is the matrix inverse of the covariance matrix . Then,
$$\begin{aligned}\int _{\mathbb {R} ^{n}}\exp \left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} \right)\,d^{n}\mathbf {x} &=\int _{\mathbb {R} ^{n}}\exp \left(-{\tfrac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)\,d^{n}\mathbf {x} \\&={\sqrt {\frac {(2\pi )^{n}}{\det A}}}={\sqrt {\frac {1}{\det \left(A/2\pi \right)}}}={\sqrt {\det \left(2\pi A^{-1}\right)}}\end{aligned}$$ By completing the square, this generalizes to $$\int _{\mathbb {R} ^{n}}\exp \left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} +c\right)\,d^{n}\mathbf {x} ={\sqrt {\det \left(2\pi A^{-1}\right)}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} +c\right).$$
This fact is applied in the study of the multivariate normal distribution .
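As a numerical sanity check of the n-dimensional formula (an illustration with an arbitrary 2 × 2 positive-definite matrix, not a derivation), one can compare brute-force quadrature over a large box with the closed-form value:

```python
import numpy as np
from scipy import integrate

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])                 # symmetric positive-definite

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

numeric, _ = integrate.dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)
closed = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A))
print(numeric, closed)                     # both approximately 4.7496
```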
Also, $$\int x_{k_{1}}\cdots x_{k_{2N}}\,\exp \left(-{\tfrac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)\,d^{n}x={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\,{\frac {1}{2^{N}N!}}\,\sum _{\sigma \in S_{2N}}(A^{-1})_{k_{\sigma (1)}k_{\sigma (2)}}\cdots (A^{-1})_{k_{\sigma (2N-1)}k_{\sigma (2N)}}$$ where σ is a permutation of {1, …, 2 N } and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2 N } of N copies of A −1 .
Alternatively, [ 4 ]
$$\int f(\mathbf {x} )\exp \left(-{\tfrac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}\right)d^{n}\mathbf {x} ={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\,\left.\exp \left({\tfrac {1}{2}}\sum _{i,j=1}^{n}\left(A^{-1}\right)_{ij}{\frac {\partial }{\partial x_{i}}}{\frac {\partial }{\partial x_{j}}}\right)f(\mathbf {x} )\right|_{\mathbf {x} =0}$$
for some analytic function f , provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series .
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can define a Gaussian functional integral in analogy to the finite-dimensional case. [ citation needed ] There is still the problem, though, that $(2\pi )^{\infty }$ is infinite and that the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:
$$\frac {\displaystyle \int f(x_{1})\cdots f(x_{2N})\exp \left[-\iint {\tfrac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}\right]{\mathcal {D}}f}{\displaystyle \int \exp \left[-\iint {\tfrac {1}{2}}A(x_{2N+1},x_{2N+2})f(x_{2N+1})f(x_{2N+2})\,d^{d}x_{2N+1}\,d^{d}x_{2N+2}\right]{\mathcal {D}}f}={\frac {1}{2^{N}N!}}\sum _{\sigma \in S_{2N}}A^{-1}(x_{\sigma (1)},x_{\sigma (2)})\cdots A^{-1}(x_{\sigma (2N-1)},x_{\sigma (2N)}).$$
In the DeWitt notation , the equation looks identical to the finite-dimensional case.
If A is again a symmetric positive-definite matrix, then (assuming all are column vectors) $$\int \exp \left(-{\tfrac {1}{2}}\sum _{i,j=1}^{n}A_{ij}x_{i}x_{j}+\sum _{i=1}^{n}b_{i}x_{i}\right)d^{n}\mathbf {x} =\int \exp \left(-{\tfrac {1}{2}}\mathbf {x} ^{\mathsf {T}}A\mathbf {x} +\mathbf {b} ^{\mathsf {T}}\mathbf {x} \right)d^{n}\mathbf {x} ={\sqrt {\frac {(2\pi )^{n}}{\det A}}}\exp \left({\tfrac {1}{2}}\mathbf {b} ^{\mathsf {T}}A^{-1}\mathbf {b} \right).$$
$$\int _{0}^{\infty }x^{2n}e^{-{x^{2}}/{a^{2}}}\,dx={\sqrt {\pi }}{\frac {a^{2n+1}(2n-1)!!}{2^{n+1}}}$$ $$\int _{0}^{\infty }x^{2n+1}e^{-{x^{2}}/{a^{2}}}\,dx={\frac {n!}{2}}a^{2n+2}$$ $$\int _{0}^{\infty }x^{2n}e^{-bx^{2}}\,dx={\frac {(2n-1)!!}{b^{n}2^{n+1}}}{\sqrt {\frac {\pi }{b}}}$$ $$\int _{0}^{\infty }x^{2n+1}e^{-bx^{2}}\,dx={\frac {n!}{2b^{n+1}}}$$ $$\int _{0}^{\infty }x^{n}e^{-bx^{2}}\,dx={\frac {\Gamma ({\frac {n+1}{2}})}{2b^{\frac {n+1}{2}}}}$$ where $n$ is a positive integer.
An easy way to derive these is by differentiating under the integral sign .
$$\begin{aligned}\int _{-\infty }^{\infty }x^{2n}e^{-\alpha x^{2}}\,dx&=\left(-1\right)^{n}\int _{-\infty }^{\infty }{\frac {\partial ^{n}}{\partial \alpha ^{n}}}e^{-\alpha x^{2}}\,dx=\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\int _{-\infty }^{\infty }e^{-\alpha x^{2}}\,dx\\&={\sqrt {\pi }}\left(-1\right)^{n}{\frac {\partial ^{n}}{\partial \alpha ^{n}}}\alpha ^{-{\frac {1}{2}}}={\sqrt {\frac {\pi }{\alpha }}}{\frac {(2n-1)!!}{\left(2\alpha \right)^{n}}}\end{aligned}$$
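The differentiation-under-the-integral-sign trick can be replayed symbolically; this SymPy sketch (illustrative, for a single arbitrarily chosen n) compares the n-th α-derivative of √(π/α) with the direct moment integral.

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
n = 3                                                     # any non-negative integer

from_derivative = (-1)**n * sp.diff(sp.sqrt(sp.pi / alpha), alpha, n)
direct = sp.integrate(x**(2 * n) * sp.exp(-alpha * x**2), (x, -sp.oo, sp.oo))
print(sp.simplify(from_derivative - direct))              # 0
```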
One could also integrate by parts and find a recurrence relation to solve this.
Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in n variables may depend only on SL( n ) -invariants of the polynomial. One such invariant is the discriminant ,
zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants. [ 5 ]
Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is [ citation needed ]
$$\int _{-\infty }^{\infty }e^{ax^{4}+bx^{3}+cx^{2}+dx+f}\,dx={\frac {1}{2}}e^{f}\sum _{\begin{smallmatrix}n,m,p=0\\n+p=0{\bmod {2}}\end{smallmatrix}}^{\infty }{\frac {b^{n}}{n!}}{\frac {c^{m}}{m!}}{\frac {d^{p}}{p!}}{\frac {\Gamma \left({\frac {3n+2m+p+1}{4}}\right)}{\left(-a\right)^{\frac {3n+2m+p+1}{4}}}}.$$
The n + p = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1) n + p /2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory . | https://en.wikipedia.org/wiki/Gaussian_integral |
The Gaussian network model (GNM) is a representation of a biological macromolecule as an elastic mass-and- spring network to study, understand, and characterize the mechanical aspects of its long-time large-scale dynamics . The model has a wide range of applications from small proteins such as enzymes composed of a single domain , to large macromolecular assemblies such as a ribosome or a viral capsid . Protein domain dynamics plays key roles in a multitude of molecular recognition and cell signalling processes.
Protein domains, connected by intrinsically disordered flexible linker domains, induce long-range allostery via protein domain dynamics .
The resultant dynamic modes cannot be generally predicted from static structures of either the entire protein or individual domains.
The Gaussian network model is a minimalist, coarse-grained approach to study biological molecules. In the model, proteins are represented by nodes corresponding to α-carbons of the amino acid residues. Similarly, DNA and RNA structures are represented with one to three nodes for each nucleotide . The model uses the harmonic approximation to model interactions. This coarse-grained representation makes the calculations computationally inexpensive.
At the molecular level, many biological phenomena, such as catalytic activity of an enzyme , occur within the range of nano- to millisecond timescales. All atom simulation techniques, such as molecular dynamics simulations, rarely reach microsecond trajectory length, depending on the size of the system and accessible computational resources. Normal mode analysis in the context of GNM, or elastic network (EN) models in general, provides insights on the longer-scale functional dynamic behaviors of macromolecules. Here, the model captures native state functional motions of a biomolecule at the cost of atomic detail. The inference obtained from this model is complementary to atomic detail simulation techniques.
Another model for protein dynamics based on elastic mass-and-spring networks is the Anisotropic Network Model .
The Gaussian network model was proposed by Bahar, Atilgan, Haliloglu and Erman in 1997. [ 1 ] [ 2 ] The GNM is often analyzed using normal mode analysis, which offers an analytical formulation and unique solution for each structure. The GNM normal mode analysis differs from other normal mode analyses in that it is exclusively based on inter-residue contact topology, influenced by the theory of elasticity of Flory [ 3 ] and the Rouse model [ 4 ] and does not take the three-dimensional directionality of motions into account.
Figure 2 shows a schematic view of elastic network studied in GNM. Metal beads represent the nodes in this Gaussian network (residues of a protein) and springs represent the connections between the nodes (covalent and non-covalent interactions between residues). For nodes i and j , equilibrium position vectors, R 0 i and R 0 j , equilibrium distance vector, R 0 ij , instantaneous fluctuation vectors, ΔR i and ΔR j , and instantaneous distance vector, R ij , are shown in Figure 2. Instantaneous position vectors of these nodes are defined by R i and R j . The difference between equilibrium position vector and instantaneous position vector of residue i gives the instantaneous fluctuation vector, ΔR i = R i - R 0 i . Hence, the instantaneous fluctuation vector between nodes i and j is expressed as ΔR ij = ΔR j - ΔR i = R ij - R 0 ij .
The potential energy of the network in terms of ΔR i is $$V={\frac {\gamma }{2}}\sum _{i,j}\Gamma _{ij}\,\Delta \mathbf {R} _{i}\cdot \Delta \mathbf {R} _{j},$$
where γ is a force constant uniform for all springs and Γ ij is the ij th element of the Kirchhoff (or connectivity) matrix of inter-residue contacts, Γ , defined by $$\Gamma _{ij}={\begin{cases}-1,&i\neq j{\text{ and }}R_{ij}\leq r_{c}\\0,&i\neq j{\text{ and }}R_{ij}>r_{c}\\-\sum _{j,\,j\neq i}\Gamma _{ij},&i=j\end{cases}}$$
r c is a cutoff distance for spatial interactions and taken to be 7 Å for amino acid pairs (represented by their α-carbons).
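A minimal sketch of the construction just described (illustrative only and not taken from the original papers; `coords` is assumed to be an (N, 3) array of Cα positions in ångströms):

```python
import numpy as np

def kirchhoff_matrix(coords, cutoff=7.0):
    """Build the GNM Kirchhoff (connectivity) matrix from Cα coordinates."""
    n = len(coords)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = np.zeros((n, n))
    contacts = (dists <= cutoff) & ~np.eye(n, dtype=bool)
    gamma[contacts] = -1.0                        # -1 for residue pairs in contact
    np.fill_diagonal(gamma, -gamma.sum(axis=1))   # diagonal = coordination number
    return gamma
```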
Expressing the X, Y and Z components of the fluctuation vectors ΔR i as ΔX T = [ΔX 1 ΔX 2 ..... ΔX N ], ΔY T = [ΔY 1 ΔY 2 ..... ΔY N ], and ΔZ T = [ΔZ 1 ΔZ 2 ..... ΔZ N ], the above equation simplifies to $$V={\frac {\gamma }{2}}\left[\Delta X^{T}\Gamma \Delta X+\Delta Y^{T}\Gamma \Delta Y+\Delta Z^{T}\Gamma \Delta Z\right].$$
In the GNM, the probability distribution of all fluctuations, P ( ΔR ), is isotropic $$P(\Delta \mathbf {R} )=P(\Delta X)\,P(\Delta Y)\,P(\Delta Z)$$
and Gaussian $$P(\Delta X)\propto \exp \left(-{\frac {\gamma }{2k_{B}T}}\,\Delta X^{T}\Gamma \,\Delta X\right),$$
where k B is the Boltzmann constant and T is the absolute temperature. p ( ΔY ) and p ( ΔZ ) are expressed similarly.
The N-dimensional Gaussian probability density function with random variable vector x , mean vector μ and covariance matrix Σ is $$p(\mathbf {x} )={\frac {1}{\sqrt {(2\pi )^{N}|\Sigma |}}}\exp \left(-{\frac {1}{2}}(\mathbf {x} -{\boldsymbol {\mu }})^{T}\Sigma ^{-1}(\mathbf {x} -{\boldsymbol {\mu }})\right),$$
where $\sqrt{(2\pi )^{N}|\Sigma |}$ normalizes the distribution and |Σ| is the determinant of the covariance matrix.
Similar to the Gaussian distribution, the normalized distribution for ΔX T = [ΔX 1 ΔX 2 ..... ΔX N ] around the equilibrium positions can be expressed as $$p(\Delta X)={\frac {1}{Z_{X}}}\exp \left(-{\frac {\gamma }{2k_{B}T}}\,\Delta X^{T}\Gamma \,\Delta X\right).$$
The normalization constant, also the partition function Z X , is given by $$Z_{X}=\int \exp \left(-{\frac {\gamma }{2k_{B}T}}\,\Delta X^{T}\Gamma \,\Delta X\right)d\Delta X,$$
where ${\frac {k_{B}T}{\gamma }}\Gamma ^{-1}$ is the covariance matrix in this case. Z Y and Z Z are expressed similarly. This formulation requires inversion of the Kirchhoff matrix. In the GNM, the determinant of the Kirchhoff matrix is zero, hence calculation of its inverse requires eigenvalue decomposition . Γ −1 is constructed using the N-1 non-zero eigenvalues and associated eigenvectors. Expressions for p ( ΔY ) and p ( ΔZ ) are similar to that of p ( ΔX ). The probability distribution of all fluctuations in GNM becomes
For this mass and spring system, the normalization constant in the preceding expression is the overall GNM partition function, Z GNM ,
The expectation values of residue fluctuations, < ΔR i 2 > (also called mean-square fluctuations, MSFs), and their cross-correlations, < ΔR i · ΔR j > can be organized as the diagonal and off-diagonal terms, respectively, of a covariance matrix. Based on statistical mechanics, the covariance matrix for ΔX is given by
The last equality is obtained by inserting the above p( ΔX ) and taking the (generalized Gaussian) integral. Since,
< ΔR i 2 > and < ΔR i · ΔR j > follows
The GNM normal modes are found by diagonalization of the Kirchhoff matrix, $\Gamma =U\Lambda U^{T}$. Here, U is a unitary matrix, $U^{T}=U^{-1}$, of the eigenvectors $u_i$ of Γ and Λ is the diagonal matrix of eigenvalues $\lambda_i$. The frequency and shape of a mode are represented by its eigenvalue and eigenvector, respectively. Since the Kirchhoff matrix is positive semi-definite, the first eigenvalue, $\lambda_1$, is zero and the corresponding eigenvector has all its elements equal to $1/{\sqrt {N}}$. This shows that the network model is translationally invariant.
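Continuing the sketch above (illustrative only), the pseudo-inverse built from the N − 1 nonzero modes gives fluctuations proportional to the diagonal of Γ⁻¹; the prefactor 3k_BT/γ is omitted here.

```python
import numpy as np

def mean_square_fluctuations(gamma_matrix):
    """Mean-square fluctuation profile from the nonzero GNM modes (up to a constant)."""
    evals, evecs = np.linalg.eigh(gamma_matrix)   # eigenvalues in ascending order
    evals, evecs = evals[1:], evecs[:, 1:]        # discard the single zero mode
    gamma_pinv = evecs @ np.diag(1.0 / evals) @ evecs.T
    return np.diag(gamma_pinv)                    # proportional to <ΔR_i^2>
```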
Cross-correlations between residue fluctuations can be written as a sum over the N-1 nonzero modes as
It follows that $[\Delta \mathbf {R} _{i}\cdot \Delta \mathbf {R} _{j}]_{k}$, the contribution of an individual mode k , is expressed as
where [ u k ] i is the i th element of u k .
By definition, a diagonal element of the Kirchhoff matrix, Γ ii , is equal to the degree of a node in GNM that represents the corresponding residue's coordination number. This number is a measure of the local packing density around a given residue. The influence of local packing density can be assessed by series expansion of Γ −1 matrix. Γ can be written as a sum of two matrices, Γ = D + O , containing diagonal elements and off-diagonal elements of Γ .
This expression shows that local packing density makes a significant contribution to the expected fluctuations of residues. [ 5 ] The terms that follow the inverse of the diagonal matrix are contributions of positional correlations to the expected fluctuations.
Equilibrium fluctuations of biological molecules can be experimentally measured. In X-ray crystallography the B-factor (also called Debye-Waller or temperature factor) of each atom is a measure of its mean-square fluctuation near its equilibrium position in the native structure. In NMR experiments, this measure can be obtained by calculating root-mean-square differences between different models.
In many applications and publications, including the original articles, it has been shown that expected residue fluctuations obtained by the GNM are in good agreement with the experimentally measured native state fluctuations. [ 6 ] [ 7 ] The relation between B-factors, for example, and expected residue fluctuations obtained from GNM is as follows
Figure 3 shows an example of GNM calculation for the catalytic domain of the protein Cdc25B, a cell division cycle dual-specificity phosphatase.
Diagonalization of the Kirchhoff matrix decomposes the conformational motions into a spectrum of collective modes. The expected values of fluctuations and cross-correlations are obtained from linear combinations of fluctuations along these normal modes. The contribution of each mode is scaled with the inverse of that mode's frequency. Hence, slow (low frequency) modes contribute most to the expected fluctuations. Along the few slowest modes, motions are shown to be collective and global and potentially relevant to functionality of the biomolecules. Fast (high frequency) modes, on the other hand, describe uncorrelated motions not inducing notable changes in the structure. GNM-based methods do not provide real dynamics but only an approximation based on the combination and interpolation of normal modes. [ 8 ] Their applicability strongly depends on how collective the motion is. [ 8 ] [ 9 ]
There are several major areas in which the Gaussian network model and other elastic network models have proved to be useful. [ 10 ] These include:
In practice, two kinds of calculations can be performed.
The first kind (the GNM per se) makes use of the Kirchhoff matrix . [ 1 ] [ 2 ] The second kind (more specifically called either the Elastic Network Model or the Anisotropic Network Model) makes use of the Hessian matrix associated to the corresponding set of harmonic springs. [ 38 ] Both kinds of models can be used online, using the following servers. | https://en.wikipedia.org/wiki/Gaussian_network_model |
In computational chemistry and molecular physics , Gaussian orbitals (also known as Gaussian type orbitals , GTOs or Gaussians ) are functions used as atomic orbitals in the LCAO method for the representation of electron orbitals in molecules and numerous properties that depend on these. [ 1 ]
The use of Gaussian orbitals in electronic structure theory (instead of the more physical Slater-type orbitals ) was first proposed by Boys [ 2 ] in 1950. The principal reason for the use of Gaussian basis functions in molecular quantum chemical calculations is the 'Gaussian Product Theorem', which guarantees that the product of two GTOs centered on two different atoms is a finite sum of Gaussians centered on a point along the axis connecting them. In this manner, four-center integrals can be reduced to finite sums of two-center integrals, and in a next step to finite sums of one-center integrals. The speedup by 4-5 orders of magnitude compared to Slater orbitals outweighs the extra cost entailed by the larger number of basis functions generally required in a Gaussian calculation.
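The benefit rests on the simplest (one-dimensional, s-type) instance of the Gaussian Product Theorem: the product of two Gaussians centered at A and B is a single Gaussian centered at a point P on the segment between them. The following Python sketch verifies this numerically for arbitrary, made-up exponents and centers.

```python
import numpy as np

alpha, A = 0.8, -1.0                                 # exponent and center of the first Gaussian
beta,  B = 1.3,  2.0                                 # exponent and center of the second

x = np.linspace(-6, 6, 2001)
product = np.exp(-alpha * (x - A)**2) * np.exp(-beta * (x - B)**2)

p = alpha + beta
P = (alpha * A + beta * B) / p                       # new center, on the A-B axis
K = np.exp(-alpha * beta / p * (A - B)**2)           # constant prefactor
combined = K * np.exp(-p * (x - P)**2)

print(np.max(np.abs(product - combined)))           # ~1e-16: the two curves coincide
```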
For reasons of convenience, many quantum chemistry programs work in a basis of Cartesian Gaussians even when spherical Gaussians are requested, as integral evaluation is much easier in the Cartesian basis, and the spherical functions can be simply expressed using the Cartesian functions. [ 3 ] [ 4 ]
The Gaussian basis functions obey the usual radial-angular decomposition $$\Phi (\mathbf {r} )=R_{l}(r)\,Y_{lm}(\theta ,\phi ),$$
where $Y_{lm}(\theta ,\phi )$ is a spherical harmonic , $l$ and $m$ are the angular momentum and its $z$ component, and $r,\theta ,\phi$ are spherical coordinates.
While for Slater orbitals the radial part is $$R_{l}(r)=A(l,\alpha )\,r^{l}e^{-\alpha r},$$
$A(l,\alpha )$ being a normalization constant, for Gaussian primitives the radial part is $$R_{l}(r)=B(l,\alpha )\,r^{l}e^{-\alpha r^{2}},$$
where $B(l,\alpha )$ is the normalization constant corresponding to the Gaussian.
The normalization condition which determines $A(l,\alpha )$ or $B(l,\alpha )$ is $$\int _{0}^{\infty }\mathrm {d} r\,r^{2}\,|R_{l}(r)|^{2}=1,$$
which in general does not impose orthogonality in $l$.
Because an individual primitive Gaussian function gives a rather poor description for the electronic wave function near the nucleus, Gaussian basis sets are almost always contracted: $$R_{l}(r)=\sum _{p}c_{p}\,R_{l}(r;\alpha _{p}),$$
where $c_{p}$ is the contraction coefficient for the primitive with exponent $\alpha _{p}$. The coefficients are given with respect to normalized primitives, because coefficients for unnormalized primitives would differ by many orders of magnitude. The exponents are reported in atomic units . There is a large library of published Gaussian basis sets optimized for a variety of criteria available at the Basis Set Exchange portal .
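A small illustrative evaluation of such a contraction for an s-type function follows; the exponents and coefficients below are hypothetical stand-ins (real, optimized values should be taken from a published basis set such as those on the Basis Set Exchange), and the primitive normalization (2α/π)^{3/4} is the standard s-type factor.

```python
import numpy as np

exponents    = np.array([3.4, 0.62, 0.17])     # hypothetical exponents, atomic units
coefficients = np.array([0.15, 0.54, 0.44])    # contraction coefficients w.r.t. normalized primitives

def contracted_s(r):
    norms = (2.0 * exponents / np.pi) ** 0.75  # normalization of an s-type primitive
    return np.sum(coefficients * norms * np.exp(-exponents * r**2))

print(contracted_s(0.0), contracted_s(1.0))
```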
In Cartesian coordinates, Gaussian-type orbitals can be written in terms of exponential factors in the $x$, $y$, and $z$ directions as well as an exponential factor $\alpha$ controlling the width of the orbital. The expression for a Cartesian Gaussian-type orbital, with the appropriate normalization coefficient, is $$\Phi (x,y,z;\alpha ,i,j,k)=\left({\frac {2\alpha }{\pi }}\right)^{3/4}\left[{\frac {(8\alpha )^{i+j+k}\,i!\,j!\,k!}{(2i)!\,(2j)!\,(2k)!}}\right]^{1/2}x^{i}y^{j}z^{k}e^{-\alpha (x^{2}+y^{2}+z^{2})}.$$
In the above expression, i {\displaystyle i} , j {\displaystyle j} , and k {\displaystyle k} must be integers. If i + j + k = 0 {\displaystyle i+j+k=0} , then the orbital has spherical symmetry and is considered an s-type GTO. If i + j + k = 1 {\displaystyle i+j+k=1} , the GTO possesses axial symmetry along one axis and is considered a p-type GTO. When i + j + k = 2 {\displaystyle i+j+k=2} , there are six possible GTOs that may be constructed; this is one more than the five canonical d orbital functions for a given angular quantum number. To address this, a linear combination of two d-type GTOs can be used to reproduce a canonical d function. Similarly, there exist 10 f-type GTOs, but only 7 canonical f orbital functions; this pattern continues for higher angular quantum numbers. [ 5 ]
Taketa et al. (1966) presented the necessary mathematical equations for obtaining matrix elements in the Gaussian basis. [ 6 ] Since then, much work has been done to speed up the evaluation of these integrals, which are the slowest part of many quantum chemical calculations. Živković and Maksić (1968) suggested using Hermite Gaussian functions, [ 7 ] as this simplifies the equations. McMurchie and Davidson (1978) introduced recursion relations, [ 8 ] which greatly reduce the amount of calculation. Pople and Hehre (1978) developed a local coordinate method. [ 9 ] Obara and Saika introduced efficient recursion relations in 1985, [ 10 ] which was followed by the development of other important recurrence relations. Gill and Pople (1990) introduced a 'PRISM' algorithm which allowed efficient use of 20 different calculation paths. [ 11 ]
The POLYATOM System [ 12 ] was the first package for ab initio calculations using Gaussian orbitals that was applied to a wide variety of molecules. [ 13 ] It was developed in Slater's Solid State and Molecular Theory Group (SSMTG) at MIT using the resources of the Cooperative Computing Laboratory. The mathematical infrastructure and operational software were developed by Imre Csizmadia, [ 14 ] Malcolm Harrison, [ 15 ] Jules Moskowitz [ 16 ] and Brian Sutcliffe. [ 17 ] | https://en.wikipedia.org/wiki/Gaussian_orbital |
Gaussian Quantum Monte Carlo is a quantum Monte Carlo method that shows a potential solution to the fermion sign problem without the deficiencies of alternative approaches. Instead of the Hilbert space , this method works in the space of density matrices that can be spanned by an over-complete basis of gaussian operators using only positive coefficients. Containing only quadratic forms of the fermionic operators, no anti-commuting variables occur and any quantum state can be expressed as a real probability distribution. [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Gaussian_quantum_Monte_Carlo |
The Gaussian vault is a reinforced masonry construction technique invented by Uruguayan engineer Eladio Dieste to efficiently and economically build thin-shell barrel vaults and wide curved roof spans that are resistant to buckling . [ 1 ] [ 2 ] [ 3 ]
Gaussian vaults consist of a series of interlocking, curved, single-layer brick arches that can span long distances without the need for supporting columns. This allows the construction of lightweight, efficient and visually striking structures. These arches are characterized by the use of a double curvature form, along an inverted catenary , which allows for greater structural efficiency and a reduction in the amount of materials required for building wide-span roof structures.
The term "Gaussian", coined by Dieste himself, typically refers to the bell-shaped curve often used in statistics and probability theory . Dieste's new combination of bricks, steel reinforcement and mortar makes it an innovative construction system using reinforced ceramics, also called " cerámica armada " or structural ceramics.
David P. Billington coined the term " structural art " for works of structural engineering that achieve excellence in the three areas of efficiency, economy, and elegance. [ 4 ] [ 5 ] Engineers such as Gustave Eiffel and Robert Maillart worked with new materials and techniques to design elegant, economical and structurally efficient works. Many of them concentrated their designs on a single building material, for example wrought iron or prestressed concrete . Eugène Freyssinet , Félix Candela and Eduardo Torroja pioneered the construction of large thin-shell structures made out of reinforced concrete .
The concept of metal-reinforced masonry was not invented by Dieste. In 1889, French engineer Paul Cottancin patented a system of reinforced concrete, which he called "ciment armé" . [ 6 ] The Cottancin system used wire-reinforced hollow bricks acting as a permanent formwork for a cement armature and thin cement shells, as shown in the 1904 Church of Saint-Jean de Montmartre . [ 7 ] Vertical wires ran through the brick voids, while horizontal reinforcement was placed in the joints. The brick voids and joints were filled with cement mortar to prevent the metal from coming into contact with air. [ 8 ] Cottancin's labor-intensive system was quickly replaced by Hennebique 's reinforced concrete , which requires the erection of wooden formwork but fewer skilled operators. In 1910, Rafael Guastavino was granted a patent for reinforced brick shells, [ 9 ] [ 10 ] and in the 1920s Spanish engineer Torroja also developed his own system of reinforced ceramics. By the 1950s, the construction of thin concrete shells became more and more expensive due to the increased costs of formwork and labor and was progressively replaced by steel construction for long-span vaults. [ 11 ]
Unaware of the developments in the rest of the world, Dieste developed his own system of reinforced masonry, little known and little used in South America in his day, into a prime example of structural art. [ 11 ] [ 12 ] He innovated in the use of bricks, which were affordable and widely available in South America. [ 13 ] He developed many new cost-efficient techniques and elegant forms for the design of thin brick vaults. His construction techniques were derived from structural principles associated with the geometry of the inverted catenary. He gave the cross-section of his masonry vaults a double curvature to generate the stiffness and strength needed to resist buckling failure. He designed characteristic undulating roofs with a typical span-to-rise ratio of 10. [ 14 ] [ 15 ]
In 1946, Dieste realized his first reinforced brick vault, working with architect Antoni Bonet i Castellana on the Berlingieri house in Punta Ballena , Uruguay. [ 16 ] [ 17 ] After his invention, Dieste did not use his new construction technique again until 1955. [ 18 ]
In 1956, Dieste founded with Eugenio Montañez (1916–2001) the construction and design firm Dieste y Montañez S.A., which is still in operation today. [ 19 ] With his company, he constructed more than 1.5 million square meters of buildings such as warehouses, factories, gymnasiums and workshops. [ 14 ]
The discovery of this construction system, as well as its development, introduction and implementation, earned the engineer Dieste worldwide recognition from the international community and eventually from UNESCO. [ 20 ] [ 21 ] [ 22 ]
Colombian engineer Guillermo González Zuleta and the Spanish engineer Ildefonso Sánchez del Río Pisón also developed different approaches to structural architecture to build large-span buildings using undulating reinforced ceramics. [ 18 ]
The construction technique of this type of reinforced masonry consists of placing steel reinforcing bars at the junctions of the brick courses. [ 23 ] The behavior of the reinforced brick layer is similar to that of a reinforced concrete beam. [ 24 ] [ 25 ] [ 26 ] The thin-shell, single-thickness brick structure derives its rigidity and strength from a double-curved catenary arch form that resists buckling failure. [ 27 ] [ 28 ] [ 29 ] [ 30 ] The structural masonry fulfills a structural function by supporting itself and the roof without beams or columns. [ 31 ] [ 32 ]
This construction system allows the design of thin-shell, single-layer brick structures by combining bricks, iron and mortar, built on movable "encofrados" used as scaffolding for workers and as formwork for materials. [ 33 ] These Gaussian vaults are structures that are able to withstand the loads placed on them thanks to their shape rather than their mass, resulting in a lower material requirement and in reduced construction times. [ 34 ] The number of layers of bricks in which the reinforcing bar is placed depends on the span to be covered. The reinforcement must be made of a corrosion-resistant alloy. Dieste used traditional locally-sourced hollow bricks, which are typically 25x25x10 cm. The total thickness of a Gaussian vault is usually between 18 and 25 cm, for spans of up to 45 meters. [ 35 ]
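The inverted-catenary geometry mentioned above can be illustrated with a short numerical sketch. The dimensions used below (a 10 m span with a 1 m rise, matching the span-to-rise ratio of 10 cited earlier) are hypothetical and chosen only for illustration; a SciPy root-finder solves for the catenary parameter.

```python
import math
from scipy.optimize import brentq

span, rise = 10.0, 1.0   # hypothetical vault dimensions (span-to-rise ratio of 10)

# For a catenary y = a*cosh(x/a), the rise over a span L is a*(cosh(L/(2a)) - 1).
# Solve for the parameter a that produces the requested rise; inverting the curve
# gives the arch profile through the crown and the springings.
f = lambda a: a * (math.cosh(span / (2.0 * a)) - 1.0) - rise
a = brentq(f, 0.5, 1000.0)
print(round(a, 2))   # ~12.66
```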
Reinforced ceramics have been widely adopted because they allow for greater lightness, prefabrication and systematization in the repetition of their components, at competitive costs. [ 36 ] [ 37 ] [ 38 ] They are particularly suited to the construction of churches, community centers and industrial buildings, as well as other structures that require large open spaces. [ 39 ]
Dieste applied this construction technique to his first architectural work: the church of Christ the Worker and Our Lady of Lourdes (1958–1960), in the small village of Atlántida . [ 40 ] [ 41 ] It became a renowned architectural landmark, described as "a simple rectangle, with side walls rising up in undulating curves to the maximum amplitude of their arcs, these walls supporting a similarly undulating roof, composed of a sequence of reinforced brick Gaussian vaults". [ 42 ] In 2021 the church was declared a UNESCO World Heritage Site under the name "The work of engineer Eladio Dieste: Church of Atlántida". [ 43 ]
In 1998, Dieste used the same construction techniques in the Church of Saint John of Ávila in a modern neighbourhood of Alcalá de Henares , Spain . [ 44 ] [ 45 ] | https://en.wikipedia.org/wiki/Gaussian_vault |
A Gaussian year is defined as 365.2568983 days . [ 1 ] It was adopted by Carl Friedrich Gauss as the length of the sidereal year in his studies of the dynamics of the Solar System .
A slightly different value is now accepted as the length of the sidereal year, [ 2 ] and the value accepted by Gauss is given a special name.
A particle of negligible mass that orbits a body of 1 solar mass in this period has a semi-major axis of 1 astronomical unit by definition. The value is derived from Kepler's third law as
where
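A minimal numerical check of this relation (using the standard value of the Gaussian gravitational constant, k = 0.01720209895 in AU^3/2 per day, an assumption not stated explicitly above): the orbital period of a massless particle with a 1 AU semi-major axis around one solar mass is 2π/k days, which reproduces the figure quoted above.

```python
import math

k = 0.01720209895        # Gaussian gravitational constant, AU^(3/2) / day
T = 2 * math.pi / k      # period of a massless particle with a 1 AU semi-major axis
print(T)                 # 365.2568983...
```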
This astronomy -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gaussian_year |
The Gausson is a soliton which is the solution of the logarithmic Schrödinger equation , which describes a quantum particle in a possible nonlinear quantum mechanics . The logarithmic Schrödinger equation preserves the dimensional homogeneity of the equation, i.e. the product of independent solutions in one dimension remains a solution in multiple dimensions.
While the nonlinearity alone cannot cause the quantum entanglement between dimensions, the logarithmic Schrödinger equation can be solved by the separation of variables . [ 1 ] [ 2 ]
Let the nonlinear logarithmic Schrödinger equation in one dimension be given by ( ℏ = 1 {\displaystyle \hbar =1} , unit mass m = 1 {\displaystyle m=1} ):
Let us assume Galilean invariance, i.e.
Substituting
The first equation can be written as
Substituting additionally
and assuming
we get the normal Schrödinger equation for the quantum harmonic oscillator :
The solution is therefore the normal ground state of the harmonic oscillator if only ( a > 0 ) {\displaystyle (a>0)}
or
The full solitonic solution is therefore given by
where
This solution describes a soliton moving with constant velocity and not changing the shape (modulus) of the Gaussian function . When a potential is added, not only can a single Gausson provide an exact solution to a number of cases of the logarithmic Schrödinger equation, it has been found that a linear combination of Gaussons can very accurately approximate excited states as well. [ 3 ] This superposition property of Gaussons has been demonstrated for quadratic
potentials. [ 4 ] | https://en.wikipedia.org/wiki/Gausson_(physics) |
In mathematics , the Gauss–Kuzmin–Wirsing operator is the transfer operator of the Gauss map that takes a positive number to the fractional part of its reciprocal. (This is not the same as the Gauss map in differential geometry .) It is named after Carl Gauss , Rodion Kuzmin , and Eduard Wirsing . It occurs in the study of continued fractions ; it is also related to the Riemann zeta function .
The Gauss function (map) h is :
where ⌊ 1 / x ⌋ {\displaystyle \lfloor 1/x\rfloor } denotes the floor function .
It has an infinite number of jump discontinuities at x = 1/ n , for positive integers n . It is hard to approximate it by a single smooth polynomial. [ 1 ]
The Gauss–Kuzmin–Wirsing operator G {\displaystyle G} acts on functions f {\displaystyle f} as
it has the fixed point ρ ( x ) = 1 ln 2 ( 1 + x ) {\displaystyle \rho (x)={\frac {1}{\ln 2(1+x)}}} , unique up to scaling, which is the density of the measure invariant under the Gauss map.
The first eigenfunction of this operator is
which corresponds to an eigenvalue of λ 1 = 1. This eigenfunction gives the probability of the occurrence of a given integer in a continued fraction expansion, and is known as the Gauss–Kuzmin distribution . This follows in part because the Gauss map acts as a truncating shift operator for the continued fractions : if
is the continued fraction representation of a number 0 < x < 1, then
Because h {\displaystyle h} is conjugate to a Bernoulli shift , the eigenvalue λ 1 = 1 {\displaystyle \lambda _{1}=1} is simple, and since the operator leaves invariant the Gauss–Kuzmin measure, the operator is ergodic with respect to the measure. This fact allows a short proof of the existence of Khinchin's constant .
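As a rough numerical illustration (not part of the original derivation), the invariant density ρ(x) = 1/(ln 2 (1 + x)) can be checked by iterating the Gauss map on uniformly sampled points and histogramming the result; the sketch below assumes NumPy and arbitrary sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(200_000)             # points sampled uniformly in (0, 1)

# Iterate the Gauss map h(x) = 1/x - floor(1/x); the empirical distribution
# relaxes quickly towards the invariant (Gauss-Kuzmin) density.
for _ in range(20):
    x = 1.0 / x
    x -= np.floor(x)
    x = x[(x > 1e-12) & (x < 1.0)]  # guard against division by values near zero

hist, edges = np.histogram(x, bins=10, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = 1.0 / (np.log(2.0) * (1.0 + centers))
print(np.round(hist, 3))            # empirical density per bin
print(np.round(rho, 3))             # 1 / (ln 2 (1 + x)) at the bin centers
```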
Additional eigenvalues can be computed numerically; the next eigenvalue is λ 2 = −0.3036630029... (sequence A038517 in the OEIS )
and its absolute value is known as the Gauss–Kuzmin–Wirsing constant . Analytic forms for additional eigenfunctions are not known. It is not known if the eigenvalues are irrational .
Let us arrange the eigenvalues of the Gauss–Kuzmin–Wirsing operator according to their absolute values:
It was conjectured in 1995 by Philippe Flajolet and Brigitte Vallée that
In 2018, Giedrius Alkauskas gave a convincing argument that this conjecture can be refined to a much stronger statement: [ 2 ]
here the function d ( n ) {\displaystyle d(n)} is bounded, and ζ ( ⋆ ) {\displaystyle \zeta (\star )} is the Riemann zeta function .
The eigenvalues form a discrete spectrum when the operator is limited to act on functions on the unit interval of the real number line. More broadly, since the Gauss map is the shift operator on Baire space N ω {\displaystyle \mathbb {N} ^{\omega }} , the GKW operator can also be viewed as an operator on the function space N ω → C {\displaystyle \mathbb {N} ^{\omega }\to \mathbb {C} } (considered as a Banach space , with basis functions taken to be the indicator functions on the cylinders of the product topology ). In the latter case, it has a continuous spectrum, with eigenvalues in the unit disk | λ | < 1 {\displaystyle |\lambda |<1} of the complex plane. That is, given the cylinder C n [ b ] = { ( a 1 , a 2 , ⋯ ) ∈ N ω : a n = b } {\displaystyle C_{n}[b]=\{(a_{1},a_{2},\cdots )\in \mathbb {N} ^{\omega }:a_{n}=b\}} , the operator G shifts it to the left: G C n [ b ] = C n − 1 [ b ] {\displaystyle GC_{n}[b]=C_{n-1}[b]} . Taking r n , b ( x ) {\displaystyle r_{n,b}(x)} to be the indicator function which is 1 on the cylinder (when x ∈ C n [ b ] {\displaystyle x\in C_{n}[b]} ), and zero otherwise, one has that G r n , b = r n − 1 , b {\displaystyle Gr_{n,b}=r_{n-1,b}} . The series
then is an eigenfunction with eigenvalue λ {\displaystyle \lambda } . That is, one has [ G f ] ( x ) = λ f ( x ) {\displaystyle [Gf](x)=\lambda f(x)} whenever the summation converges: that is, when | λ | < 1 {\displaystyle |\lambda |<1} .
A special case arises when one wishes to consider the Haar measure of the shift operator, that is, a function that is invariant under shifts. This is given by the Minkowski measure ? ′ {\displaystyle ?^{\prime }} . That is, one has that G ? ′ = ? ′ {\displaystyle G?^{\prime }=?^{\prime }} . [ 3 ]
The Gauss map is in fact much more than ergodic: it is exponentially mixing, [ 4 ] [ 5 ] but the proof is not elementary.
The Gauss map, over the Gauss measure, has entropy π 2 6 ln 2 {\displaystyle {\frac {\pi ^{2}}{6\ln 2}}} . This can be proved by the Rokhlin formula for entropy. Then using the Shannon–McMillan–Breiman theorem , with its equipartition property, we obtain Lochs' theorem . [ 6 ]
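The entropy value can be checked numerically from Rokhlin's formula, which expresses it as the integral of log |h′(x)| = log(1/x²) against the Gauss measure; a minimal sketch using SciPy quadrature:

```python
import numpy as np
from scipy.integrate import quad

# Rokhlin's formula: entropy = integral of log|h'(x)| dmu(x), with |h'(x)| = 1/x^2
# and dmu(x) = dx / (ln 2 (1 + x)) the Gauss measure on (0, 1).
integrand = lambda x: -2.0 * np.log(x) / (np.log(2.0) * (1.0 + x))
h, _ = quad(integrand, 0.0, 1.0)
print(h, np.pi**2 / (6.0 * np.log(2.0)))   # both ~ 2.3731
```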
A covering family C {\displaystyle {\mathcal {C}}} is a set of measurable sets, such that any open set is a disjoint union of sets in it. Compare this with base in topology , which is less restrictive as it allows non-disjoint unions.
Knopp's lemma. Let B ⊂ [ 0 , 1 ) {\displaystyle B\subset [0,1)} be measurable, let C {\displaystyle {\mathcal {C}}} be a covering family and suppose that ∃ γ > 0 , ∀ A ∈ C , μ ( A ∩ B ) ≥ γ μ ( A ) {\displaystyle \exists \gamma >0,\forall A\in {\mathcal {C}},\mu (A\cap B)\geq \gamma \mu (A)} . Then μ ( B ) = 1 {\displaystyle \mu (B)=1} .
Proof. Since any open set is a disjoint union of sets in C {\displaystyle {\mathcal {C}}} , we have μ ( A ∩ B ) ≥ γ μ ( A ) {\displaystyle \mu (A\cap B)\geq \gamma \mu (A)} for any open set A {\displaystyle A} , not just any set in C {\displaystyle {\mathcal {C}}} .
Take the complement B c {\displaystyle B^{c}} . Since the Lebesgue measure is outer regular , we can take an open set B ′ {\displaystyle B'} that is close to B c {\displaystyle B^{c}} , meaning the symmetric difference has arbitrarily small measure μ ( B ′ Δ B c ) < ϵ {\displaystyle \mu (B'\Delta B^{c})<\epsilon } .
At the limit, μ ( B ′ ∩ B ) ≥ γ μ ( B ′ ) {\displaystyle \mu (B'\cap B)\geq \gamma \mu (B')} becomes 0 ≥ γ μ ( B c ) {\displaystyle 0\geq \gamma \mu (B^{c})} , so that μ ( B c ) = 0 {\displaystyle \mu (B^{c})=0} and hence μ ( B ) = 1 {\displaystyle \mu (B)=1} .
Fix a sequence a 1 , … , a n {\displaystyle a_{1},\dots ,a_{n}} of positive integers. Let q n p n = [ 0 ; a 1 , … , a n ] {\displaystyle {\frac {q_{n}}{p_{n}}}=[0;a_{1},\dots ,a_{n}]} . Let the interval Δ n {\displaystyle \Delta _{n}} be the open interval with end-points [ 0 ; a 1 , … , a n ] , [ 0 ; a 1 , … , a n + 1 ] {\displaystyle [0;a_{1},\dots ,a_{n}],[0;a_{1},\dots ,a_{n}+1]} .
Lemma. For any open interval ( a , b ) ⊂ ( 0 , 1 ) {\displaystyle (a,b)\subset (0,1)} , we have μ ( T − n ( a , b ) ∩ Δ n ) = μ ( ( a , b ) ) μ ( Δ n ) ( q n ( q n + q n − 1 ) ( q n + q n − 1 b ) ( q n + q n − 1 a ) ) ⏟ ≥ 1 / 2 {\displaystyle \mu (T^{-n}(a,b)\cap \Delta _{n})=\mu ((a,b))\mu (\Delta _{n})\underbrace {\left({\frac {q_{n}(q_{n}+q_{n-1})}{(q_{n}+q_{n-1}b)(q_{n}+q_{n-1}a)}}\right)} _{\geq 1/2}} Proof. For any t ∈ ( 0 , 1 ) {\displaystyle t\in (0,1)} we have [ 0 ; a 1 , … , a n + t ] = q n + q n − 1 t p n + p n − 1 t {\displaystyle [0;a_{1},\dots ,a_{n}+t]={\frac {q_{n}+q_{n-1}t}{p_{n}+p_{n-1}t}}} by standard continued fraction theory . By expanding the definition, T − n ( a , b ) ∩ Δ n {\displaystyle T^{-n}(a,b)\cap \Delta _{n}} is an interval with end points [ 0 ; a 1 , … , a n + a ] , [ 0 ; a 1 , … , a n + b ] {\displaystyle [0;a_{1},\dots ,a_{n}+a],[0;a_{1},\dots ,a_{n}+b]} . Now compute directly. To show the fraction is ≥ 1 / 2 {\displaystyle \geq 1/2} , use the fact that q n ≥ q n − 1 {\displaystyle q_{n}\geq q_{n-1}} .
Theorem. The Gauss map is ergodic.
Proof. Consider the set of all open intervals in the form ( [ 0 ; a 1 , … , a n ] , [ 0 ; a 1 , … , a n + 1 ] ) {\displaystyle ([0;a_{1},\dots ,a_{n}],[0;a_{1},\dots ,a_{n}+1])} . Collect them into a single family C {\displaystyle {\mathcal {C}}} . This C {\displaystyle {\mathcal {C}}} is a covering family, because any open interval ( a , b ) ∖ Q {\displaystyle (a,b)\setminus \mathbb {Q} } where a , b {\displaystyle a,b} are rational, is a disjoint union of finitely many sets in C {\displaystyle {\mathcal {C}}} .
Suppose a set B {\displaystyle B} is T {\displaystyle T} -invariant and has positive measure. Pick any Δ n ∈ C {\displaystyle \Delta _{n}\in {\mathcal {C}}} . Since Lebesgue measure is outer regular, there exists an open set B 0 {\displaystyle B_{0}} which differs from B {\displaystyle B} by only μ ( B 0 Δ B ) < ϵ {\displaystyle \mu (B_{0}\Delta B)<\epsilon } . Since B {\displaystyle B} is T {\displaystyle T} -invariant, we also have μ ( T − n B 0 Δ B ) = μ ( B 0 Δ B ) < ϵ {\displaystyle \mu (T^{-n}B_{0}\Delta B)=\mu (B_{0}\Delta B)<\epsilon } . Therefore, μ ( T − n B 0 ∩ Δ n ) ∈ μ ( B ∩ Δ n ) ± ϵ {\displaystyle \mu (T^{-n}B_{0}\cap \Delta _{n})\in \mu (B\cap \Delta _{n})\pm \epsilon } By the previous lemma, we have μ ( T − n B 0 ∩ Δ n ) ≥ 1 2 μ ( B 0 ) μ ( Δ n ) ∈ 1 2 ( μ ( B ) ± ϵ ) μ ( Δ n ) {\displaystyle \mu (T^{-n}B_{0}\cap \Delta _{n})\geq {\frac {1}{2}}\mu (B_{0})\mu (\Delta _{n})\in {\frac {1}{2}}(\mu (B)\pm \epsilon )\mu (\Delta _{n})} Take the ϵ → 0 {\displaystyle \epsilon \to 0} limit, we have μ ( B ∩ Δ n ) ≥ 1 2 μ ( B ) μ ( Δ n ) {\displaystyle \mu (B\cap \Delta _{n})\geq {\frac {1}{2}}\mu (B)\mu (\Delta _{n})} . By Knopp's lemma, it has full measure.
The GKW operator is related to the Riemann zeta function . Note that the zeta function can be written as
which implies that
by change-of-variable.
Consider the Taylor series expansions at x = 1 for a function f ( x ) and g ( x ) = [ G f ] ( x ) {\displaystyle g(x)=[Gf](x)} . That is, let
and write likewise for g ( x ). The expansion is made about x = 1 because the GKW operator is poorly behaved at x = 0. The expansion is made in terms of 1 − x so that we can keep x a positive number, 0 ≤ x ≤ 1. Then the GKW operator acts on the Taylor coefficients as
where the matrix elements of the GKW operator are given by
This operator is extremely well formed, and thus very numerically tractable. The Gauss–Kuzmin constant is easily computed to high precision by numerically diagonalizing the upper-left n by n portion. There is no known closed-form expression that diagonalizes this operator; that is, there are no closed-form expressions known for the eigenvectors.
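As a rough illustration of this numerical tractability (using a simple polynomial collocation scheme rather than the Taylor-coefficient matrix described above), the leading eigenvalues can be approximated with NumPy and SciPy; the second-largest eigenvalue in absolute value approximates the Gauss–Kuzmin–Wirsing constant. The truncation order and collocation points below are arbitrary choices.

```python
import numpy as np
from scipy.special import zeta           # Hurwitz zeta: zeta(s, q) = sum_{n>=0} (n+q)^(-s)

# The operator acts as [Gf](x) = sum_{n>=1} (x+n)^(-2) f(1/(x+n)), so on a monomial
#   [G x^k](x) = sum_{n>=1} (x+n)^(-(k+2)) = zeta(k+2, x+1).
N = 16                                   # truncation order: monomials 1, x, ..., x^(N-1)
x = np.linspace(0.03, 0.97, N)           # collocation points in (0, 1)

Gcols = np.column_stack([zeta(k + 2, x + 1.0) for k in range(N)])  # values of G x^k at the nodes
V = np.vander(x, N, increasing=True)     # maps monomial coefficients to values at the nodes
M = np.linalg.solve(V, Gcols)            # matrix of G in the (ill-conditioned) monomial basis

lam = np.linalg.eigvals(M)
lam = lam[np.argsort(-np.abs(lam))]      # sort by decreasing absolute value
print(np.round(lam[:3].real, 4))         # ~ [1.0, -0.3037, 0.1009]; only the leading values are reliable
```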
The Riemann zeta can be written as
where the t n {\displaystyle t_{n}} are given by the matrix elements above:
Performing the summations, one gets:
where γ {\displaystyle \gamma } is the Euler–Mascheroni constant . These t n {\displaystyle t_{n}} play the analog of the Stieltjes constants , but for the falling factorial expansion. By writing
one gets: a 0 = −0.0772156... and a 1 = −0.00474863... and so on. The values get small quickly but are oscillatory. Some explicit sums on these values can be performed. They can be explicitly related to the Stieltjes constants by re-expressing the falling factorial as a polynomial with Stirling number coefficients, and then solving. More generally, the Riemann zeta can be re-expressed as an expansion in terms of Sheffer sequences of polynomials.
This expansion of the Riemann zeta is investigated in the following references. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] The coefficients are decreasing as | https://en.wikipedia.org/wiki/Gauss–Kuzmin–Wirsing_operator |
In complex analysis , a branch of mathematics, the Gauss–Lucas theorem gives a geometric relation between the roots of a polynomial P and the roots of its derivative P' . The set of roots of a real or complex polynomial is a set of points in the complex plane . The theorem states that the roots of P' all lie within the convex hull of the roots of P , that is the smallest convex polygon containing the roots of P . When P has a single root then this convex hull is a single point and when the roots lie on a line then the convex hull is a segment of this line. The Gauss–Lucas theorem, named after Carl Friedrich Gauss and Félix Lucas, is similar in spirit to Rolle's theorem .
If P is a (nonconstant) polynomial with complex coefficients, all zeros of P' belong to the convex hull of the set of zeros of P . [ 1 ]
It is easy to see that if P ( x ) = a x 2 + b x + c {\displaystyle P(x)=ax^{2}+bx+c} is a second degree polynomial , the zero of P ′ ( x ) = 2 a x + b {\displaystyle P'(x)=2ax+b} is the average of the roots of P . In that case, the convex hull is the line segment with the two roots as endpoints and it is clear that the average of the roots is the middle point of the segment.
For a third degree complex polynomial P ( cubic function ) with three distinct zeros, Marden's theorem states that the zeros of P' are the foci of the Steiner inellipse which is the unique ellipse tangent to the sides of the triangle formed by the zeros of P at their midpoints.
For a fourth degree complex polynomial P ( quartic function ) with four distinct zeros forming a concave quadrilateral , one of the zeros of P lies within the convex hull of the other three; all three zeros of P' lie in two of the three triangles formed by the interior zero of P and two other zeros of P . [ 2 ]
In addition, if a polynomial of degree n of real coefficients has n distinct real zeros x 1 < x 2 < ⋯ < x n , {\displaystyle x_{1}<x_{2}<\cdots <x_{n},} we see, using Rolle's theorem , that the zeros of the derivative polynomial are in the interval [ x 1 , x n ] {\displaystyle [x_{1},x_{n}]} which is the convex hull of the set of roots.
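The theorem is also easy to check numerically. The sketch below (an illustration only, using NumPy and SciPy, with an arbitrarily chosen random polynomial) computes the roots of P and P′ and tests that every critical point lies in the convex hull of the roots of P via a Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
coeffs = rng.normal(size=8) + 1j * rng.normal(size=8)   # random degree-7 polynomial

roots_p = np.roots(coeffs)                # zeros of P
roots_dp = np.roots(np.polyder(coeffs))   # zeros of P'

# Treat complex numbers as points in the plane and test convex-hull membership.
hull = Delaunay(np.column_stack([roots_p.real, roots_p.imag]))
inside = hull.find_simplex(np.column_stack([roots_dp.real, roots_dp.imag])) >= 0
print(inside)                             # expected: all True (up to floating-point boundary cases)
```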
The convex hull of the roots of the polynomial
particularly includes the point
By the fundamental theorem of algebra , P {\displaystyle P} is a product of linear factors as
where the complex numbers a 1 , a 2 , … , a n {\displaystyle a_{1},a_{2},\ldots ,a_{n}} are the – not necessarily distinct – zeros of the polynomial P , the complex number α is the leading coefficient of P and n is the degree of P .
For any root z {\displaystyle z} of P ′ {\displaystyle P'} , if it is also a root of P {\displaystyle P} , then the theorem is trivially true. Otherwise, we have for the logarithmic derivative
Hence
Taking their conjugates, and dividing, we obtain z {\displaystyle z} as a convex sum of the roots of P {\displaystyle P} : | https://en.wikipedia.org/wiki/Gauss–Lucas_theorem |
In statistics , the Gauss–Markov theorem (or simply Gauss theorem for some authors) [ 1 ] states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators , if the errors in the linear regression model are uncorrelated , have equal variances and expectation value of zero. [ 2 ] The errors do not need to be normal , nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance). The requirement that the estimator be unbiased cannot be dropped, since biased estimators exist with lower variance. See, for example, the James–Stein estimator (which also drops linearity), ridge regression , or simply any degenerate estimator.
The theorem was named after Carl Friedrich Gauss and Andrey Markov , although Gauss' work significantly predates Markov's. [ 3 ] But while Gauss derived the result under the assumption of independence and normality, Markov reduced the assumptions to the form stated above. [ 4 ] A further generalization to non-spherical errors was given by Alexander Aitken . [ 5 ]
Suppose we are given two random variable vectors, X , Y ∈ R k {\displaystyle X{\text{, }}Y\in \mathbb {R} ^{k}} and that we want to find the best linear estimator of Y {\displaystyle Y} given X {\displaystyle X} , using the best linear estimator Y ^ = α X + μ {\displaystyle {\hat {Y}}=\alpha X+\mu } where the parameters α {\displaystyle \alpha } and μ {\displaystyle \mu } are both real numbers.
Such an estimator Y ^ {\displaystyle {\hat {Y}}} would have the same mean and standard deviation as Y {\displaystyle Y} , that is, μ Y ^ = μ Y , σ Y ^ = σ Y {\displaystyle \mu _{\hat {Y}}=\mu _{Y},\sigma _{\hat {Y}}=\sigma _{Y}} .
Therefore, if the vector X {\displaystyle X} has respective mean and standard deviation μ x , σ x {\displaystyle \mu _{x},\sigma _{x}} , the best linear estimator would be
Y ^ = σ y ( X − μ x ) σ x + μ y {\displaystyle {\hat {Y}}=\sigma _{y}{\frac {(X-\mu _{x})}{\sigma _{x}}}+\mu _{y}}
since Y ^ {\displaystyle {\hat {Y}}} has the same mean and standard deviation as Y {\displaystyle Y} .
Suppose we have, in matrix notation, the linear relationship
expanding to,
where β j {\displaystyle \beta _{j}} are non-random but un observable parameters, X i j {\displaystyle X_{ij}} are non-random and observable (called the "explanatory variables"), ε i {\displaystyle \varepsilon _{i}} are random, and so y i {\displaystyle y_{i}} are random. The random variables ε i {\displaystyle \varepsilon _{i}} are called the "disturbance", "noise" or simply "error" (will be contrasted with "residual" later in the article; see errors and residuals in statistics ). Note that to include a constant in the model above, one can choose to introduce the constant as a variable β K + 1 {\displaystyle \beta _{K+1}} with a newly introduced last column of X being unity i.e., X i ( K + 1 ) = 1 {\displaystyle X_{i(K+1)}=1} for all i {\displaystyle i} . Note that though y i , {\displaystyle y_{i},} as sample responses, are observable, the following statements and arguments including assumptions, proofs and the others assume under the only condition of knowing X i j , {\displaystyle X_{ij},} but not y i . {\displaystyle y_{i}.}
The Gauss–Markov assumptions concern the set of error random variables, ε i {\displaystyle \varepsilon _{i}} :
A linear estimator of β j {\displaystyle \beta _{j}} is a linear combination
in which the coefficients c i j {\displaystyle c_{ij}} are not allowed to depend on the underlying coefficients β j {\displaystyle \beta _{j}} , since those are not observable, but are allowed to depend on the values X i j {\displaystyle X_{ij}} , since these data are observable. (The dependence of the coefficients on each X i j {\displaystyle X_{ij}} is typically nonlinear; the estimator is linear in each y i {\displaystyle y_{i}} and hence in each random ε , {\displaystyle \varepsilon ,} which is why this is "linear" regression .) The estimator is said to be unbiased if and only if
regardless of the values of X i j {\displaystyle X_{ij}} . Now, let ∑ j = 1 K λ j β j {\textstyle \sum _{j=1}^{K}\lambda _{j}\beta _{j}} be some linear combination of the coefficients. Then the mean squared error of the corresponding estimation is
in other words, it is the expectation of the square of the weighted sum (across parameters) of the differences between the estimators and the corresponding parameters to be estimated. (Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.) The best linear unbiased estimator (BLUE) of the vector β {\displaystyle \beta } of parameters β j {\displaystyle \beta _{j}} is one with the smallest mean squared error for every vector λ {\displaystyle \lambda } of linear combination parameters. This is equivalent to the condition that
is a positive semi-definite matrix for every other linear unbiased estimator β ~ {\displaystyle {\widetilde {\beta }}} .
The ordinary least squares estimator (OLS) is the function
of y {\displaystyle y} and X {\displaystyle X} (where X T {\displaystyle X^{\operatorname {T} }} denotes the transpose of X {\displaystyle X} ) that minimizes the sum of squares of residuals (misprediction amounts):
The theorem now states that the OLS estimator is a best linear unbiased estimator (BLUE).
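A small simulation can illustrate the theorem. The sketch below (a hedged illustration, not part of the formal statement) compares the sampling variance of the OLS estimator with that of another linear unbiased estimator C y, where C = (XᵀX)⁻¹Xᵀ + D and D X = 0; the design matrix, true coefficients and error scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # fixed design with an intercept
beta = np.array([1.0, 2.0, -0.5])                                # hypothetical true parameters

pinv = np.linalg.solve(X.T @ X, X.T)              # OLS weights (X'X)^(-1) X'
A = 0.05 * rng.normal(size=(K, n))
D = A @ (np.eye(n) - X @ pinv)                    # D X = 0, so C y stays unbiased
C = pinv + D

ols, alt = [], []
for _ in range(5000):
    y = X @ beta + rng.normal(size=n)             # uncorrelated, homoscedastic errors
    ols.append(pinv @ y)
    alt.append(C @ y)

print(np.cov(np.array(ols).T).trace())            # total sampling variance of OLS
print(np.cov(np.array(alt).T).trace())            # larger, as the Gauss-Markov theorem predicts
```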
The main idea of the proof is that the least-squares estimator is uncorrelated with every linear unbiased estimator of zero, i.e., with every linear combination a 1 y 1 + ⋯ + a n y n {\displaystyle a_{1}y_{1}+\cdots +a_{n}y_{n}} whose coefficients do not depend upon the unobservable β {\displaystyle \beta } but whose expected value is always zero.
Proof that the OLS indeed minimizes the sum of squares of residuals may proceed as follows, by calculating the Hessian matrix and showing that it is positive definite.
The MSE function we want to minimize is f ( β 0 , β 1 , … , β p ) = ∑ i = 1 n ( y i − β 0 − β 1 x i 1 − ⋯ − β p x i p ) 2 {\displaystyle f(\beta _{0},\beta _{1},\dots ,\beta _{p})=\sum _{i=1}^{n}(y_{i}-\beta _{0}-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})^{2}} for a multiple regression model with p variables. The first derivative is d d β f = − 2 X T ( y − X β ) = − 2 [ ∑ i = 1 n ( y i − ⋯ − β p x i p ) ∑ i = 1 n x i 1 ( y i − ⋯ − β p x i p ) ⋮ ∑ i = 1 n x i p ( y i − ⋯ − β p x i p ) ] = 0 p + 1 , {\displaystyle {\begin{aligned}{\frac {d}{d{\boldsymbol {\beta }}}}f&=-2X^{\operatorname {T} }\left(\mathbf {y} -X{\boldsymbol {\beta }}\right)\\&=-2{\begin{bmatrix}\sum _{i=1}^{n}(y_{i}-\dots -\beta _{p}x_{ip})\\\sum _{i=1}^{n}x_{i1}(y_{i}-\dots -\beta _{p}x_{ip})\\\vdots \\\sum _{i=1}^{n}x_{ip}(y_{i}-\dots -\beta _{p}x_{ip})\end{bmatrix}}\\&=\mathbf {0} _{p+1},\end{aligned}}} where X T {\displaystyle X^{\operatorname {T} }} is the design matrix X = [ 1 x 11 ⋯ x 1 p 1 x 21 ⋯ x 2 p ⋮ 1 x n 1 ⋯ x n p ] ∈ R n × ( p + 1 ) ; n ≥ p + 1 {\displaystyle X={\begin{bmatrix}1&x_{11}&\cdots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\&&\vdots \\1&x_{n1}&\cdots &x_{np}\end{bmatrix}}\in \mathbb {R} ^{n\times (p+1)};\qquad n\geq p+1}
The Hessian matrix of second derivatives is H = 2 [ n ∑ i = 1 n x i 1 ⋯ ∑ i = 1 n x i p ∑ i = 1 n x i 1 ∑ i = 1 n x i 1 2 ⋯ ∑ i = 1 n x i 1 x i p ⋮ ⋮ ⋱ ⋮ ∑ i = 1 n x i p ∑ i = 1 n x i p x i 1 ⋯ ∑ i = 1 n x i p 2 ] = 2 X T X {\displaystyle {\mathcal {H}}=2{\begin{bmatrix}n&\sum _{i=1}^{n}x_{i1}&\cdots &\sum _{i=1}^{n}x_{ip}\\\sum _{i=1}^{n}x_{i1}&\sum _{i=1}^{n}x_{i1}^{2}&\cdots &\sum _{i=1}^{n}x_{i1}x_{ip}\\\vdots &\vdots &\ddots &\vdots \\\sum _{i=1}^{n}x_{ip}&\sum _{i=1}^{n}x_{ip}x_{i1}&\cdots &\sum _{i=1}^{n}x_{ip}^{2}\end{bmatrix}}=2X^{\operatorname {T} }X}
Assuming the columns of X {\displaystyle X} are linearly independent so that X T X {\displaystyle X^{\operatorname {T} }X} is invertible, let X = [ v 1 v 2 ⋯ v p + 1 ] {\displaystyle X={\begin{bmatrix}\mathbf {v_{1}} &\mathbf {v_{2}} &\cdots &\mathbf {v} _{p+1}\end{bmatrix}}} , then k 1 v 1 + ⋯ + k p + 1 v p + 1 = 0 ⟺ k 1 = ⋯ = k p + 1 = 0 {\displaystyle k_{1}\mathbf {v_{1}} +\dots +k_{p+1}\mathbf {v} _{p+1}=\mathbf {0} \iff k_{1}=\dots =k_{p+1}=0}
Now let k = ( k 1 , … , k p + 1 ) T ∈ R ( p + 1 ) × 1 {\displaystyle \mathbf {k} =(k_{1},\dots ,k_{p+1})^{T}\in \mathbb {R} ^{(p+1)\times 1}} be an eigenvector of H {\displaystyle {\mathcal {H}}} .
k ≠ 0 ⟹ ( k 1 v 1 + ⋯ + k p + 1 v p + 1 ) 2 > 0 {\displaystyle \mathbf {k} \neq \mathbf {0} \implies \left(k_{1}\mathbf {v_{1}} +\dots +k_{p+1}\mathbf {v} _{p+1}\right)^{2}>0}
In terms of vector multiplication, this means [ k 1 ⋯ k p + 1 ] [ v 1 ⋮ v p + 1 ] [ v 1 ⋯ v p + 1 ] [ k 1 ⋮ k p + 1 ] = k T H k = λ k T k > 0 {\displaystyle {\begin{bmatrix}k_{1}&\cdots &k_{p+1}\end{bmatrix}}{\begin{bmatrix}\mathbf {v_{1}} \\\vdots \\\mathbf {v} _{p+1}\end{bmatrix}}{\begin{bmatrix}\mathbf {v_{1}} &\cdots &\mathbf {v} _{p+1}\end{bmatrix}}{\begin{bmatrix}k_{1}\\\vdots \\k_{p+1}\end{bmatrix}}=\mathbf {k} ^{\operatorname {T} }{\mathcal {H}}\mathbf {k} =\lambda \mathbf {k} ^{\operatorname {T} }\mathbf {k} >0} where λ {\displaystyle \lambda } is the eigenvalue corresponding to k {\displaystyle \mathbf {k} } . Moreover, k T k = ∑ i = 1 p + 1 k i 2 > 0 ⟹ λ > 0 {\displaystyle \mathbf {k} ^{\operatorname {T} }\mathbf {k} =\sum _{i=1}^{p+1}k_{i}^{2}>0\implies \lambda >0}
Finally, as eigenvector k {\displaystyle \mathbf {k} } was arbitrary, it means all eigenvalues of H {\displaystyle {\mathcal {H}}} are positive, therefore H {\displaystyle {\mathcal {H}}} is positive definite. Thus, β = ( X T X ) − 1 X T Y {\displaystyle {\boldsymbol {\beta }}=\left(X^{\operatorname {T} }X\right)^{-1}X^{\operatorname {T} }Y} is indeed a global minimum.
Or, just see that for all vectors v , v T X T X v = ‖ X v ‖ 2 ≥ 0 {\displaystyle \mathbf {v} ,\mathbf {v} ^{\operatorname {T} }X^{\operatorname {T} }X\mathbf {v} =\|\mathbf {X} \mathbf {v} \|^{2}\geq 0} . So the Hessian is positive definite if X {\displaystyle X} has full column rank.
Let β ~ = C y {\displaystyle {\tilde {\beta }}=Cy} be another linear estimator of β {\displaystyle \beta } with C = ( X T X ) − 1 X T + D {\displaystyle C=(X^{\operatorname {T} }X)^{-1}X^{\operatorname {T} }+D} where D {\displaystyle D} is a K × n {\displaystyle K\times n} non-zero matrix. As we're restricting to unbiased estimators, minimum mean squared error implies minimum variance. The goal is therefore to show that such an estimator has a variance no smaller than that of β ^ , {\displaystyle {\widehat {\beta }},} the OLS estimator. We calculate:
Therefore, since β {\displaystyle \beta } is unobservable, β ~ {\displaystyle {\tilde {\beta }}} is unbiased if and only if D X = 0 {\displaystyle DX=0} . Then:
Since D D T {\displaystyle DD^{\operatorname {T} }} is a positive semidefinite matrix, Var ( β ~ ) {\displaystyle \operatorname {Var} \left({\tilde {\beta }}\right)} exceeds Var ( β ^ ) {\displaystyle \operatorname {Var} \left({\widehat {\beta }}\right)} by a positive semidefinite matrix.
As has been stated before, the condition that Var ( β ~ ) − Var ( β ^ ) {\displaystyle \operatorname {Var} \left({\tilde {\beta }}\right)-\operatorname {Var} \left({\widehat {\beta }}\right)} is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator of ℓ T β {\displaystyle \ell ^{\operatorname {T} }\beta } is ℓ T β ^ {\displaystyle \ell ^{\operatorname {T} }{\widehat {\beta }}} (best in the sense that it has minimum variance). To see this, let ℓ T β ~ {\displaystyle \ell ^{\operatorname {T} }{\tilde {\beta }}} be another linear unbiased estimator of ℓ T β {\displaystyle \ell ^{\operatorname {T} }\beta } .
Moreover, equality holds if and only if D T ℓ = 0 {\displaystyle D^{\operatorname {T} }\ell =0} . We calculate
This proves that the equality holds if and only if ℓ T β ~ = ℓ T β ^ {\displaystyle \ell ^{\operatorname {T} }{\tilde {\beta }}=\ell ^{\operatorname {T} }{\widehat {\beta }}} which gives the uniqueness of the OLS estimator as a BLUE.
The generalized least squares (GLS), developed by Aitken , [ 5 ] extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix. [ 6 ] The Aitken estimator is also a BLUE.
In most treatments of OLS, the regressors (parameters of interest) in the design matrix X {\displaystyle \mathbf {X} } are assumed to be fixed in repeated samples. This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics . [ 7 ] Instead, the assumptions of the Gauss–Markov theorem are stated conditional on X {\displaystyle \mathbf {X} } .
The dependent variable is assumed to be a linear function of the variables specified in the model. The specification must be linear in its parameters. This does not mean that there must be a linear relationship between the independent and dependent variables. The independent variables can take non-linear forms as long as the parameters are linear. The equation y = β 0 + β 1 x 2 , {\displaystyle y=\beta _{0}+\beta _{1}x^{2},} qualifies as linear while y = β 0 + β 1 2 x {\displaystyle y=\beta _{0}+\beta _{1}^{2}x} can be transformed to be linear by replacing β 1 2 {\displaystyle \beta _{1}^{2}} by another parameter, say γ {\displaystyle \gamma } . An equation with a parameter dependent on an independent variable does not qualify as linear, for example y = β 0 + β 1 ( x ) ⋅ x {\displaystyle y=\beta _{0}+\beta _{1}(x)\cdot x} , where β 1 ( x ) {\displaystyle \beta _{1}(x)} is a function of x {\displaystyle x} .
Data transformations are often used to convert an equation into a linear form. For example, the Cobb–Douglas function —often used in economics—is nonlinear:
But it can be expressed in linear form by taking the natural logarithm of both sides: [ 8 ]
This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables .
One should be aware, however, that the parameters that minimize the residuals of the transformed equation do not necessarily minimize the residuals of the original equation.
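As an illustration of the Cobb–Douglas example above, the sketch below (assuming a hypothetical form Y = A·L^α·K^β with a multiplicative error term, and arbitrary parameter values) fits the log-transformed model by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
L = rng.uniform(1.0, 10.0, n)                    # hypothetical labour input
K = rng.uniform(1.0, 10.0, n)                    # hypothetical capital input
A, alpha, beta = 2.0, 0.3, 0.6                   # hypothetical parameters
Y = A * L**alpha * K**beta * np.exp(0.05 * rng.normal(size=n))   # multiplicative error

# Taking logs gives a model that is linear in the parameters:
#   ln Y = ln A + alpha * ln L + beta * ln K + error
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(np.exp(coef[0]), coef[1], coef[2])         # ~ (2.0, 0.3, 0.6)
```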
For all n {\displaystyle n} observations, the expectation—conditional on the regressors—of the error term is zero: [ 9 ]
where x i = [ x i 1 x i 2 ⋯ x i k ] T {\displaystyle \mathbf {x} _{i}={\begin{bmatrix}x_{i1}&x_{i2}&\cdots &x_{ik}\end{bmatrix}}^{\operatorname {T} }} is the data vector of regressors for the i th observation, and consequently X = [ x 1 T x 2 T ⋯ x n T ] T {\displaystyle \mathbf {X} ={\begin{bmatrix}\mathbf {x} _{1}^{\operatorname {T} }&\mathbf {x} _{2}^{\operatorname {T} }&\cdots &\mathbf {x} _{n}^{\operatorname {T} }\end{bmatrix}}^{\operatorname {T} }} is the data matrix or design matrix.
Geometrically, this assumption implies that x i {\displaystyle \mathbf {x} _{i}} and ε i {\displaystyle \varepsilon _{i}} are orthogonal to each other, so that their inner product (i.e., their cross moment) is zero.
This assumption is violated if the explanatory variables are measured with error , or are endogenous . [ 10 ] Endogeneity can be the result of simultaneity , where causality flows back and forth between both the dependent and independent variable. Instrumental variable techniques are commonly used to address this problem.
The sample data matrix X {\displaystyle \mathbf {X} } must have full column rank .
Otherwise X T X {\displaystyle \mathbf {X} ^{\operatorname {T} }\mathbf {X} } is not invertible and the OLS estimator cannot be computed.
A violation of this assumption is perfect multicollinearity , i.e. some explanatory variables are linearly dependent. One scenario in which this will occur is called "dummy variable trap," when a base dummy variable is not omitted resulting in perfect correlation between the dummy variables and the constant term. [ 11 ]
Multicollinearity (as long as it is not "perfect") can be present resulting in a less efficient, but still unbiased estimate. The estimates will be less precise and highly sensitive to particular sets of data. [ 12 ] Multicollinearity can be detected from condition number or the variance inflation factor , among other tests.
The outer product of the error vector must be spherical.
This implies the error term has uniform variance ( homoscedasticity ) and no serial correlation . [ 13 ] If this assumption is violated, OLS is still unbiased, but inefficient . The term "spherical errors" will describe the multivariate normal distribution : if Var [ ε ∣ X ] = σ 2 I {\displaystyle \operatorname {Var} [\,{\boldsymbol {\varepsilon }}\mid \mathbf {X} ]=\sigma ^{2}\mathbf {I} } in the multivariate normal density, then the equation f ( ε ) = c {\displaystyle f(\varepsilon )=c} is the formula for a ball centered at μ with radius σ in n-dimensional space. [ 14 ]
Heteroskedasticity occurs when the amount of error is correlated with an independent variable. For example, in a regression on food expenditure and income, the error is correlated with income. Low income people generally spend a similar amount on food, while high income people may spend a very large amount or as little as low income people spend. Heteroskedasticity can also be caused by changes in measurement practices. For example, as statistical offices improve their data, measurement error decreases, so the error term declines over time.
This assumption is violated when there is autocorrelation . Autocorrelation can be visualized on a data plot when a given observation is more likely to lie above a fitted line if adjacent observations also lie above the fitted regression line. Autocorrelation is common in time series data, where a data series may experience "inertia", for example when a dependent variable takes a while to fully absorb a shock. Spatial autocorrelation can also occur when geographic areas are likely to have similar errors. Autocorrelation may be the result of misspecification such as choosing the wrong functional form. In these cases, correcting the specification is one possible way to deal with autocorrelation.
When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE. [ 6 ] | https://en.wikipedia.org/wiki/Gauss–Markov_theorem |
Gay-Lussac's law usually refers to Joseph-Louis Gay-Lussac 's law of combining volumes of gases , discovered in 1808 and published in 1809. [ 1 ] However, it sometimes refers to the proportionality of the volume of a gas to its absolute temperature at constant pressure . The latter law was published by Gay-Lussac in 1802, [ 2 ] but in the article in which he described his work, he cited earlier unpublished work from the 1780s by Jacques Charles . Consequently, the volume-temperature proportionality is usually known as Charles's law .
The law of combining volumes states that when gases chemically react together, they do so in amounts by volume which bear small whole-number ratios (the volumes calculated at the same temperature and pressure).
The ratio between the volumes of the reactant gases and the gaseous products can be expressed in simple whole numbers .
For example, Gay-Lussac found that two volumes of hydrogen react with one volume of oxygen to form two volumes of gaseous water. Expressed concretely, 100 mL of hydrogen combine with 50 mL of oxygen to give 100 mL of water vapor: Hydrogen (100 mL) + Oxygen (50 mL) = Water (100 mL). Thus, the volumes of hydrogen and oxygen which combine (i.e., 100 mL and 50 mL) bear a simple ratio of 2:1, as also is the case for the ratio of product water vapor to reactant oxygen.
Based on Gay-Lussac's results, Amedeo Avogadro hypothesized in 1811 that, at the same temperature and pressure, equal volumes of gases (of whatever kind) contain equal numbers of molecules ( Avogadro's law ). He pointed out that if this hypothesis is true, then the previously stated result
could also be expressed as
The law of combining volumes of gases was announced publicly by Joseph Louis Gay-Lussac on the last day of 1808, and published in 1809. [ 3 ] [ 4 ] Since there was no direct evidence for Avogadro's molecular theory, very few chemists adopted Avogadro's hypothesis as generally valid until the Italian chemist Stanislao Cannizzaro argued convincingly for it during the First International Chemical Congress in 1860. [ 5 ]
In the 17th century Guillaume Amontons discovered a regular relationship between the pressure and temperature of a gas at constant volume. Some introductory physics textbooks still define the pressure-temperature relationship as Gay-Lussac's law. [ 6 ] [ 7 ] [ 8 ] Gay-Lussac primarily investigated the relationship between volume and temperature and published it in 1802, but his work did cover some comparison between pressure and temperature. [ 9 ] Given the relative technology available to both men, Amontons could only work with air as a gas, whereas Gay-Lussac was able to experiment with multiple types of common gases, such as oxygen, nitrogen, and hydrogen. [ 10 ]
Regarding the volume-temperature relationship, Gay-Lussac attributed his findings to Jacques Charles because he used much of Charles's unpublished data from 1787 – hence, the law became known as Charles's law or the law of Charles and Gay-Lussac . [ 11 ]
Amontons's, Charles' , and Boyle's law form the combined gas law . These three gas laws in combination with Avogadro's law can be generalized by the ideal gas law .
Gay-Lussac used the formula acquired from ΔV/V = αΔT to define the rate of expansion α for gases. For air, he found a relative expansion ΔV/V = 37.50% and obtained a value of α = 37.50%/100 °C = 1/266.66 °C which indicated that the value of absolute zero was approximately 266.66 °C below 0 °C. [ 12 ] The value of the rate of expansion α is approximately the same for all gases and this is also sometimes referred to as Gay-Lussac's law . See the introduction to this article, and Charles's law . | https://en.wikipedia.org/wiki/Gay-Lussac's_law |
The Gay-Lussac–Humboldt Prize is a German–French science prize. It was created in 1981 by French President Valéry Giscard d'Estaing and German Chancellor Helmut Schmidt based on the recommendation of the German and French research ministries. [ 1 ] The prize money is €60,000. [ 1 ]
The prize is awarded to researchers who have made outstanding contributions in science, especially in cooperation between the two countries. Four to five German and French scientists from all research disciplines are honored with this award every year. The prize was originally named after Alexander von Humboldt and has carried the double name Gay-Lussac–Humboldt since 1997.
The Gay-Lussac-Humboldt Award is granted by the French Ministry of Higher Education and Research to German researchers nominated by French researchers. On the other hand, it is awarded by the Alexander von Humboldt Foundation to French researchers nominated by German scientists. | https://en.wikipedia.org/wiki/Gay-Lussac–Humboldt_Prize |
The gaze heuristic falls under the category of tracking heuristics , and it is used in directing correct motion to achieve a goal using one main variable. [ 1 ] McLeod & Dienes' (1996) example of the gaze heuristic is catching a ball. [ 2 ] [ 3 ]
Gerd Gigerenzer categorizes the gaze heuristic under tracking heuristics , [ 4 ] whereby humans and non-human animals are able to process large amounts of information quickly and react, regardless of whether the information is consciously processed. [ 5 ]
The gaze heuristic is a critical element in animal behavior, being used heavily in predation. [ 6 ] At the most basic level, the gaze heuristic ignores all causally relevant variables to make quick gut reactions .
A catcher using the gaze heuristic observes the initial angle of the ball and runs towards it in such a way as to keep this angle constant. [ 7 ] The gaze heuristic does not require knowledge of any of the variables required by the optimizing approach, nor does it require the catcher to integrate information, yet it allows the catcher to catch the ball. [ 8 ] The gaze heuristic may therefore be described as ecologically rational, at least in the simple case of catching a ball in the air. | https://en.wikipedia.org/wiki/Gaze_heuristic |
Gbanga is a geolocation-based social gaming platform for mobile phones developed by Zurich -based startup, Millform AG. The platform runs on real-time locative media , developed in-house, which means that the gaming environment changes relative to the player's real-world location. Players can interact with each other using built-in social and chat functions, which indicate their current real-world locations as well as online and offline status. Additional features enable social gaming in forms such as exploring , collecting and trading .
Gbanga Zooh [ 1 ] was the first game to be published on the platform in August 2009, in cooperation with Zurich Zoo . The game encouraged players to maintain virtual habitats across the Canton of Zurich in order to attract and collect endangered species of animals.
The advent calendar game, Gbanga Santa , was launched in December 2009 in Zurich . Players solved puzzles to find the real-world locations of virtual gifts scattered around the city. Once found and collected, virtual gifts were then tradable for prizes provided by sponsors. [ 2 ]
April 2010 saw the launch of Gbanga Famiglia , [ 3 ] a game in which players can join or start their own Mafia Famiglia to take over virtual establishments they discover whilst walking around the city. Establishments are linked to real-world establishments, so players must physically move between locations to play. A successful takeover depends on the Famiglia's power, determined by the number of Famiglia members and the cash total for special items collected.
In 2014, the sequel game Famiglia Rise and Fall was announced on the crowd-funding platform Indiegogo. [ 4 ] The yet-to-be-developed game technology was described as an evolution of the previous 2-dimensional games, which used flat material only. The new game uses the 3D engine Unity and renders a 3-dimensional world based on open data from OpenStreetMap . Apparently, Millform AG , the company behind Gbanga, decided to use crowd-funding to finance the project in order to be more independent from traditional publishers. [ 5 ] [ 6 ]
The commissioned mixed-reality game Gross. Stadt. Jagd. [ 7 ] (or Urban. Hunt. in English) was staged in May 2015 in Zurich. In a mixed-reality manhunt, several thousand participants equipped with a GPS app ran through the streets of Zurich to avoid the hunter, a sponsored car. The real-time app synchronized the GPS positions of all participants and determined the last man standing.
In 2018, NBCUniversal announced the video game Voltron: Cubes of Olkarion , the winner of their developer competition Universal GameDev Challenge that had offered game developers the opportunity to use some of Universal's IP. [ 8 ] In 2019, Voltron: Cubes of Olkarion , which is based on the Voltron: Legendary Defender series, was made available on the Steam store in Early Access, the platform's experimental game program. [ 9 ] In the game, players compete in real-time player-versus-player (PvP) battles by placing their own blocks and destroying their opponent's blocks, each with different features, on a grid-based game board. [ 10 ]
The J2ME -based version uses a real-time locating system referencing a network of cell sites to determine the player's location, whilst the iPhone version of the platform uses GPS . The platform is available worldwide as an application download for the iPhone and J2ME-compatible phones.
Whilst the Gbanga studio continues to develop games in-house, the playing community itself can also generate interactive content using Gbanga's Puppetmaster API , coded in Lua .
In later productions, such as Voltron: Cubes of Olkarion , game engine Unity was used. [ 10 ]
Gbanga was nominated for Business Idea 2010 by Internet World Business, [ 11 ] shortlisted for Best of Swiss Web Award 2010 [ 12 ] and shortlisted in the category of Best Real World Game for the International Mobile Gaming Awards. [ 2 ]
In 2011, Gbanga won the AppCircus Spotlight on Blackberry, hosted in Barcelona. [ 13 ]
In 2019, Gbanga's real-time strategy video game Voltron: Cubes of Olkarion, based on the Voltron: Legendary Defender TV show, was entered into and won the 2018 Universal GameDev Challenge where over the course of six weeks, over 500 entries from contestants in over 60 countries were submitted. [ 14 ] | https://en.wikipedia.org/wiki/Gbanga |
Gboard is a virtual keyboard app developed by Google for Android and iOS devices. It was first released on iOS in May 2016, followed by a release on Android in December 2016, debuting as a major update to the already-established Google Keyboard app on Android.
Gboard features Google Search , including web results (removed since April 2020) [ 8 ] and predictive answers, easy searching and sharing of GIF and emoji content, a predictive typing engine suggesting the next word depending on context, and multilingual language support. Updates to the keyboard have enabled additional functionality, including GIF suggestions, options for a dark color theme or adding a personal image as the keyboard background, support for voice dictation , next-phrase prediction, and hand-drawn emoji recognition. At the time of its launch on iOS, the keyboard only offered support for the English language, with more languages being gradually added in the following months, whereas on Android, the keyboard supported more than 100 languages at the time of release.
In August 2018, Gboard passed 1 billion installs on the Google Play Store , making it one of the most popular Android apps. [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] This is measured by the Google Play Store and includes downloads by users as well as pre-installed instances of the app. [ 9 ] As of April 2025, the app has been downloaded more than 10 billion times [ 14 ] from the Google Play Store .
Gboard is a virtual keyboard app. It features Google Search , including web results (removed for Android version of the app) and predictive answers, easy searching and sharing of GIF and emoji content, and a predictive typing engine suggesting the next word depending on context. [ 15 ] At its May 2016 launch on iOS, Gboard only supported the English language, [ 15 ] while it supported "more than 100 languages" at the time of its launch on the Android platform. Google states that Gboard will add more languages "over the coming months". [ 10 ] As of October 2019, 916 languages are supported on Android. [ 16 ]
Gboard features Floating Keyboard [ 17 ] and Google Translate in Gboard itself. [ 18 ] Gboard supports one-handed mode on Android after its May 2016 update. This functionality was added to the app when it was branded as Google Keyboard. [ 19 ] Gboard supports a variety of different keyboard layouts including QWERTY , QWERTZ , AZERTY , Dvorak and Colemak . [ 20 ]
An update for the iOS app released in August 2016 added French, German, Italian, Portuguese, and Spanish languages, as well as offering "smart GIF suggestions", where the keyboard will suggest GIFs relevant to text written. The keyboard also offers new options for a dark theme or adding a personal image from the camera roll as the keyboard's background. [ 21 ] Another new update in March 2018 added Croatian, Czech, Danish, Dutch, Finnish, Greek, Polish, Romanian, Balochi, Swedish, Catalan, Hungarian, Malay, Russian, Latin American Spanish, and Turkish languages, along with support for voice dictation, enabling users to "long press the mic button on the space bar and talk". [ 22 ] [ 23 ] In April 2017, Google significantly increased the amount of Indian languages supported on Gboard, adding 11 new languages, bringing the total number of supported Indian languages to 22. [ 24 ] [ 25 ]
In June 2017, the Android app was updated to support recognition of hand-drawn emoji and the ability to predict whole phrases rather than single words. The functionality is expected to come to the iOS app at a later time. [ 26 ] [ 27 ] Offline voice recognition was added in March 2019. [ 28 ] [ 29 ]
On February 12, 2020, a new feature "Emoji Kitchen [ 30 ] " was introduced that allowed users to mash up different emoji and use them as stickers when messaging. [ 31 ] Grammar correction was introduced in October 2021, first on the Pixel 6 series. [ 32 ]
In 2016, The Wall Street Journal praised the keyboard, particularly the integrated Google search feature. However, it was noted that the app does not currently support integration with other apps on the device, meaning that queries such as "Buy Captain America movie tickets" sends users to the web browser rather than an app for movie tickets installed on their phone. The Wall Street Journal also praised the predictive typing engine, stating that it "blows past most competitors" and "it gets smarter with use". They also discovered that Gboard "cleverly suggests emojis as you type words". It was noted that there was the lack of a one-handed mode (a feature added in May 2016 for Android), as well as a lack of options for changing color or the size of keys, writing that "If you're looking to customize a keyboard, Gboard isn't for you." [ 33 ]
Gboard has received criticism regarding privacy, data collection, and user interface changes. While Google states that Gboard does not transmit the actual content of users' keystrokes, independent analyses have shown that the app collects and sends metadata such as device information, app usage statistics, and unique device identifiers to Google servers. [ 34 ] [ 35 ] Google also states that voice input is processed directly on the device; however, using the "Fix" feature (to correct dictated text) sends the voice input to Google servers for processing. [ 36 ] Additionally, using the integrated search feature requires transmitting user input to Google. [ 37 ]
Design updates have also attracted negative feedback, particularly changes to the keyboard's layout and key shapes. For instance, a redesign introducing more rounded keys was widely criticized by users for reducing typing accuracy and limiting customization options. [ 38 ] [ 39 ] | https://en.wikipedia.org/wiki/Gboard |
In engineering and physics , g c is a unit conversion factor used to convert mass to force or vice versa. [ 1 ] It is defined as
In unit systems where force is a derived unit , like in SI units , g c is equal to 1. In unit systems where force is a primary unit, like in imperial and US customary measurement systems , g c may or may not equal 1 depending on the units used, and a value other than 1 may be required to obtain correct results. [ 2 ] For example, in the kinetic energy (KE) formula, if g c = 1 is used, then KE is expressed in foot-poundals ; but if g c = 32.174 is used, then KE is expressed in foot-pounds .
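A minimal arithmetic sketch of the kinetic-energy example above, using the formula KE = m v² / (2 g c ) with a hypothetical 1 lb mass moving at 10 ft/s:

```python
m, v = 1.0, 10.0              # mass in lb, speed in ft/s

gc = 1.0                      # force as a derived unit: result in foot-poundals
print(m * v**2 / (2 * gc))    # 50.0 ft*poundal

gc = 32.174                   # lbm-lbf system: gc in lbm*ft/(lbf*s^2)
print(m * v**2 / (2 * gc))    # ~1.554 ft*lbf
```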
According to Newton's second law , the force F is proportional to the product of mass m and acceleration a :
or
If F = 1 lbf , m = 1 lb , and a = 32.174 ft/s 2 , then
Leading to
g c is defined as the reciprocal of the constant K
or equivalently, as | https://en.wikipedia.org/wiki/Gc_(engineering) |
Gadolinium(III) nitrate is an inorganic compound of gadolinium . This salt is used as a water-soluble neutron poison in nuclear reactors . [ 1 ] Gadolinium nitrate, like all nitrate salts, is an oxidizing agent .
The most common form of this substance is the hexahydrate Gd(NO 3 ) 3 •6H 2 O, with molecular weight 451.36 g/mol and CAS Number 19598-90-4. [1]
Gadolinium nitrate was used at the Savannah River Site heavy water nuclear reactors and had to be separated from the heavy water for storage or reuse. [ 2 ] [ 3 ] The Canadian CANDU reactor , a pressurized heavy water reactor, also uses gadolinium nitrate as a water-soluble neutron poison in heavy water .
Gadolinium nitrate is also used as a raw material in the production of other gadolinium compounds, for production of specialty glasses and ceramics and as a phosphor . | https://en.wikipedia.org/wiki/Gd(NO3)3 |
Gadolinium oxysulfide ( Gd 2 O 2 S ), also called gadolinium sulfoxylate , GOS or Gadox , is an inorganic compound, a mixed oxide - sulfide of gadolinium .
The main use of gadolinium oxysulfide is in ceramic scintillators . Scintillators are used in radiation detectors for medical diagnostics . The scintillator is the primary radiation sensor that emits light when struck by high energy photons. Gd 2 O 2 S based ceramics exhibit final densities of 99.7% to 99.99% of the theoretical density (7.32 g/cm 3 ) and an average grain size ranging from 5 micrometers to 50 micrometers, depending on the fabrication procedure. [ 1 ] Two powder preparation routes have been successful for synthesizing Gd 2 O 2 S: Pr, Ce, F powder complexes for the ceramic scintillators. These preparation routes are called the halide flux method and the sulfite precipitation method. The scintillation properties of Gd 2 O 2 S: Pr, Ce, F complexes demonstrate that this scintillator is promising for imaging applications. There are two main disadvantages to this scintillator: one is the hexagonal crystal structure, which results in only optical translucency and low external light collection at the photodiode; the other is the high X-ray damage to the sample. [ 2 ]
Terbium - activated gadolinium oxysulfide is frequently used as a scintillator for x-ray imaging. It emits wavelengths between 382 and 622 nm, though the primary emission peak is at 545 nm. It is also used as a green phosphor in projection CRTs , though its drawback is a marked lowering of efficiency at higher temperatures. [1] Variants include, for example, using praseodymium instead of terbium ( CAS registry number 68609-42-7, [ 3 ] EINECS number 271-826-9), or using a mixture of dysprosium and terbium for doping (CAS number 68609-40-5, [ 4 ] EINECS number 271-824-8).
Gadolinium oxysulfide is a promising luminescent host material, because of its high density (7.32 g/cm 3 ) and the high effective atomic number of Gd. These characteristics lead to a high interaction probability for X-ray radiation. Several synthesis routes have been developed for processing Gd 2 O 2 S phosphors, including the solid state reaction method, reduction method, combustion synthesis method, emulsion liquid membrane method, and gas sulfuration method. The solid state reaction and reduction methods are most commonly used because of their high reliability, low cost, and high luminescent properties. (Gd 0.99 , Pr 0.01 ) 2 O 2 S sub-microphosphors synthesized by the homogeneous precipitation method are very promising as a new green-emitting material for the high-resolution digital X-ray imaging field. [ 5 ] Gadolinium oxysulfide powder phosphors are intensively used for conversion of X-rays to visible light in medical X-ray imaging. Gd 2 O 2 S: Pr based solid state X-ray detectors have been successfully introduced for X-ray sampling in medical computed tomography (imaging by sections or sectioning, through the use of any kind of penetrating wave).
The crystal structure of gadolinium oxysulfide has trigonal symmetry (space group number 164). Each gadolinium ion is coordinated by four oxygen atoms and three sulfur atoms in a non-inversion-symmetric arrangement. The Gd 2 O 2 S structure is a sulfur layer with double layers of gadolinium and oxygen in between. [ 6 ]
An approved respirator should be worn if exposure to dust could occur when working with gadolinium oxysulfide. Inhalation may result in lung injuries. Exposure to gadolinium compounds may cause lung and/or liver damage. Gloves are highly recommended when skin contact is likely. Contact with the skin may cause rash, redness or dermatitis. Gadolinium oxysulfide should be stored away from mineral acids, strong oxidizers and flammable materials. When gadolinium oxysulfide comes in contact with mineral acids, hydrogen sulfide can be produced. [ 7 ]
Gadolinium(III) oxide (archaically gadolinia ) is an inorganic compound with the formula Gd 2 O 3 . It is one of the most commonly available forms of the rare-earth element gadolinium , derivatives of which are potential contrast agents for magnetic resonance imaging . [ 2 ]
Gadolinium oxide adopts two structures. The cubic ( cI80 , Ia 3 , No. 206 ) structure is similar to that of manganese(III) oxide and heavy trivalent lanthanide sesquioxides. The cubic structure features two types of gadolinium sites, each with a coordination number of 6 but with different coordination geometries. The second polymorph is monoclinic ( Pearson symbol mS30, space group C2/m, No. 12). [ 3 ] At room temperature, the cubic structure is more stable. The phase change to the monoclinic structure takes place at 1200 °C. Above 2100 °C to the melting point at 2420 °C, a hexagonal phase dominates. [ 4 ]
Gadolinium oxide can be formed by thermal decomposition of the hydroxide, nitrate, carbonate, or oxalates. [ 5 ] Gadolinium oxide forms on the surface of gadolinium metal.
Gadolinium oxide is a rather basic oxide, indicated by its ready reaction with carbon dioxide to give carbonates. It dissolves readily in the common mineral acids with the complication that the oxalate , fluoride, sulfate and phosphate are very insoluble in water and may coat the grains of oxide, thereby preventing the complete dissolution. [ 6 ]
Several methods are known for the synthesis of gadolinium oxide nanoparticles , mostly based on precipitation of the hydroxide by the reaction of gadolinium ions with hydroxide, followed by thermal dehydration to the oxide. The nanoparticles are always coated with a protective material to avoid the formation of larger polycrystalline aggregates. [ 7 ] [ 8 ] [ 9 ]
Nanoparticles of gadolinium oxide are a potential contrast agent for magnetic resonance imaging (MRI). A dextran -coated preparation of 20–40 nm sized gadolinium oxide particles had a relaxivity of 4.8 s −1 mM −1 per gadolinium ion at 7.05 T (an unusually high field compared to the clinically used MRI scanners which mostly range from 0.5 to 3 T). [ 7 ] Smaller particles, between 2 and 7 nm, were tested as an MRI agent. [ 8 ] [ 9 ]
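As an illustration of how a relaxivity value like the 4.8 s −1 mM −1 quoted above is used, the observed longitudinal relaxation rate grows linearly with gadolinium concentration. The baseline T1 and the concentration in this sketch are assumptions for illustration, not values from the cited study.

```python
# R1_observed = R1_baseline + r1 * [Gd]  (standard linear relaxivity relation)
r1 = 4.8           # s^-1 mM^-1, relaxivity quoted above
T1_baseline = 3.0  # s, assumed T1 of the medium without contrast agent
conc_mM = 0.1      # assumed Gd concentration, mM

R1 = 1.0 / T1_baseline + r1 * conc_mM
print(round(1.0 / R1, 2))  # shortened T1 ≈ 1.23 s
```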
Gadolinium(III) chloride , also known as gadolinium trichloride , is GdCl 3 . It is a colorless, hygroscopic, water-soluble solid. The hexahydrate GdCl 3 ∙6H 2 O is commonly encountered and is sometimes also called gadolinium trichloride. Gd 3+ species are of special interest because the ion has the maximum number of unpaired spins possible, at least for known elements. With seven valence electrons and seven available f-orbitals, all seven electrons are unpaired and symmetrically arranged around the metal. The high magnetism and high symmetry combine to make Gd 3+ a useful component in NMR spectroscopy and MRI.
GdCl 3 is usually prepared by the " ammonium chloride " route, which involves the initial synthesis of (NH 4 ) 2 [GdCl 5 ]. This material can be prepared from the common starting materials at reaction temperatures of 230 °C from gadolinium oxide : [ 2 ] Gd 2 O 3 + 10 NH 4 Cl → 2 (NH 4 ) 2 [GdCl 5 ] + 6 NH 3 + 3 H 2 O
from hydrated gadolinium chloride: GdCl 3 ∙6H 2 O + 2 NH 4 Cl → (NH 4 ) 2 [GdCl 5 ] + 6 H 2 O
from gadolinium metal: 2 Gd + 10 NH 4 Cl → 2 (NH 4 ) 2 [GdCl 5 ] + 6 NH 3 + 3 H 2
In the second step the pentachloride is decomposed at 300 °C: (NH 4 ) 2 [GdCl 5 ] → GdCl 3 + 2 NH 4 Cl
This pyrolysis reaction proceeds via the intermediacy of NH 4 [Gd 2 Cl 7 ].
The ammonium chloride route is more popular and less expensive than other methods. GdCl 3 can, however, also be synthesized by the reaction of solid Gd at 600 °C in a flowing stream of HCl . [ 3 ]
Gadolinium(III) chloride also forms a hexahydrate , GdCl 3 ∙6H 2 O. The hexahydrate is prepared by dissolving gadolinium(III) oxide (or the anhydrous chloride) in concentrated HCl followed by evaporation. [ 4 ]
GdCl 3 crystallizes with a hexagonal UCl 3 structure, as seen for other 4f trichlorides including those of La , Ce , Pr , Nd , Pm , Sm , Eu . [ 5 ] The following crystallize in the YCl 3 motif: DyCl 3 , HoCl 3 , ErCl 3 , TmCl 3 , YbCl 3 , LuCl 3 , YCl 3 . The UCl 3 motif features a 9-coordinate metal with a tricapped trigonal prismatic coordination sphere . In the hexahydrate of gadolinium(III) chloride and other smaller 4f trichlorides and tribromides, six H 2 O molecules and 2 Cl − ions coordinate to the cations, resulting in a coordination group of 8.
Gadolinium salts are of primary interest for relaxation agents in magnetic resonance imaging ( MRI ). This technique exploits the fact that Gd 3+ has an electronic configuration of f 7 . Seven is the largest number of unpaired electron spins possible for an atom, so Gd 3+ is a key component in the design of highly paramagnetic complexes. [ 6 ] To generate the relaxation agents, Gd 3+ sources such as GdCl 3 ∙6H 2 O are converted to coordination complexes . GdCl 3 ∙6H 2 O cannot be used as an MRI contrast agent due to its low solubility in water at the body's near-neutral pH. [ 7 ] "Free" gadolinium(III), e.g. [GdCl 2 (H 2 O) 6 ] + , is toxic , so chelating agents are essential for biomedical applications. Simple monodentate or even bidentate ligands will not suffice because they do not remain bound to Gd 3+ in solution. Ligands with higher coordination numbers are therefore required. The obvious candidate is EDTA 4− , ethylenediaminetetraacetate, a hexadentate ligand commonly employed to complex transition metals. Lanthanides, however, exhibit coordination numbers greater than six, so still larger aminocarboxylates are employed.
One representative chelating agent is H 5 DTPA, diethylenetriaminepentaacetic acid. [ 8 ] Chelation to the conjugate base of this ligand increases the solubility of the Gd 3+ at the body's neutral pH and still allows for the paramagnetic effect required for an MRI contrast agent. The DTPA 5− ligand binds to Gd through five oxygen atoms of the carboxylates and three nitrogen atoms of the amines. A 9th binding site remains, which is occupied by a water molecule. The rapid exchange of this water ligand with bulk water is a major reason for the signal enhancing properties of the chelate. The structure of [Gd(DTPA)(H 2 O)] 2− is a distorted tricapped trigonal prism.
The following is the reaction for the formation of Gd-DTPA: Gd 3+ + DTPA 5− + H 2 O → [Gd(DTPA)(H 2 O)] 2− | https://en.wikipedia.org/wiki/GdCl3
Gadolinium(III) fluoride is an inorganic compound with a chemical formula GdF 3 .
Gadolinium(III) fluoride can be prepared by heating gadolinium oxide and ammonium bifluoride . The reaction involves two steps: [ 1 ] [ 2 ]
Alternatively, reacting gadolinium chloride with hydrofluoric acid and adding hot water produces GdF 3 ·xH 2 O (x=0.53). Anhydrous gadolinium(III) fluoride can then be produced by heating the hydrate with ammonium bifluoride; without the bifluoride, GdOF is formed instead. [ 3 ]
Gadolinium(III) fluoride is a white solid that is insoluble in water. It has an orthorhombic crystal structure with the space group Pnma (space group no. 62). [ 4 ]
Gadolinium(III) fluoride is used to produce fluoride glasses . [ 5 ] | https://en.wikipedia.org/wiki/GdF3 |
Gadolinium phosphide is an inorganic compound of gadolinium and phosphorus with the chemical formula GdP. [ 1 ] [ 2 ]
Gadolinium phosphide can be obtained by reacting gadolinium and phosphorus at high temperature, and single crystals can be obtained by mineralization. [ 3 ]
GdP has a NaCl -structure and transforms to a CsCl -structure at 40 GPa. [ 4 ]
GdP forms crystals of a cubic system , space group Fm 3 m . [ 5 ] [ 6 ]
Gadolinium phosphide is antiferromagnetic . [ citation needed ]
The compound is a semiconductor used in high power, high frequency applications and in laser diodes . [ 1 ] [ 7 ] | https://en.wikipedia.org/wiki/GdP |
Guanidinium chloride or guanidine hydrochloride , usually abbreviated GdmCl and sometimes GdnHCl or GuHCl, is the hydrochloride salt of guanidine .
Guanidinium chloride crystallizes in orthorhombic space group Pbca . The crystal structure consists of a network of guanidinium cations and chloride anions linked by N–H···Cl hydrogen bonds . [ 2 ]
Guanidinium chloride is a weak acid with a pK a of 13.6. The reason that it is such a weak acid is the complete delocalization of the positive charge over three nitrogen atoms (plus a little bit of positive charge on carbon). However, some stronger bases can deprotonate it, such as sodium hydroxide : [C(NH 2 ) 3 ] + + OH − ⇌ (NH 2 ) 2 C=NH + H 2 O
The equilibrium is not complete because the acidity difference between guanidinium and water is not large; the approximate pK a values are 13.6 vs 15.7.
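A rough way to see why the deprotonation by hydroxide is incomplete is to estimate the equilibrium constant from the pK a difference quoted above. This is only the simple ΔpK a estimate under the convention used in the text, not a measured value.

```python
# Equilibrium constant estimate for guanidinium + OH-  <=>  guanidine + H2O
pKa_guanidinium = 13.6
pKa_water = 15.7   # conventional value quoted in the text

K = 10 ** (pKa_water - pKa_guanidinium)
print(round(K))  # ~126: products are favoured, but the reaction is not quantitative
```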
Complete deprotonation should be done with extremely strong bases, such as lithium diisopropylamide .
Guanidinium chloride is a strong chaotrope and one of the strongest denaturants used in physiochemical studies of protein folding . It also has the ability to decrease enzyme activity and increase the solubility of hydrophobic molecules. [ 2 ] At high concentrations of guanidinium chloride (e.g., 6 M ), proteins lose their ordered structure , and they tend to become randomly coiled , i.e. they do not contain any residual structure. However, at concentrations in the millimolar range in vivo, guanidinium chloride has been shown to "cure" prion positive yeast cells (i.e. cells exhibiting a prion positive phenotype revert to a prion negative phenotype). This is the result of inhibition of the Hsp104 chaperone protein known to play an important role in prion fiber fragmentation and propagation. [ 3 ] [ 4 ] [ 5 ]
Petrunkin and Petrunkin (1927, 1928) appear to have been the first to study the binding of GnHCl to gelatin and a mixture of thermally denatured protein from brain extract. Greenstein (1938, 1939), however, appears to have been the first to discover the high denaturing action of guanidinium halides and thiocyanates, in following the liberation of sulfhydryl groups in ovalbumin and other proteins as a function of salt concentration. [ 6 ]
Guanidine hydrochloride is indicated for the reduction of the symptoms of muscle weakness and easy fatigability associated with Lambert-Eaton myasthenic syndrome . It is not indicated for treating myasthenia gravis. It apparently acts by enhancing the release of acetylcholine following a nerve impulse. It also appears to slow the rates of depolarization and repolarization of muscle cell membranes. Initial dosage is usually between 10 and 15 mg/kg (5 to 7 mg/pound) of body weight per day in 3 or 4 divided doses. This dosage may be gradually increased to a total daily dosage of 35 mg/kg (16 mg/pound) of body weight per day or up to the development of side effects. Side effects may include increased peristalsis, diarrhea, paresthesia (tingling and numbness), and nausea. Fatal bone-marrow suppression, apparently dose related, can occur with guanidine. [ 7 ] | https://en.wikipedia.org/wiki/GdmCl |
GeSbTe ( germanium-antimony-tellurium or GST ) is a phase-change material from the group of chalcogenide glasses used in rewritable optical discs and phase-change memory applications. Its recrystallization time is 20 nanoseconds, allowing bitrates of up to 35 Mbit /s to be written and direct overwrite capability up to 10 6 cycles. It is suitable for land-groove recording formats. It is often used in rewritable DVDs . New phase-change memories are possible using n-doped GeSbTe semiconductor . The melting point of the alloy is about 600 °C (900 K) and the crystallization temperature is between 100 and 150 °C.
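A back-of-the-envelope sketch relating the 20 ns recrystallization time quoted above to the achievable write rate; overheads and mark-length effects are ignored, so this is only an order-of-magnitude check, not a figure from the article.

```python
# Upper bound on write rate if every bit requires one ~20 ns recrystallization event
t_recrystallize = 20e-9        # s
max_rate = 1.0 / t_recrystallize
print(max_rate / 1e6)          # 50.0 Mbit/s upper bound, consistent with the quoted ~35 Mbit/s
```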
During writing, the material is erased, initialized into its crystalline state, with low-intensity laser irradiation. The material heats up to its crystallization temperature, but not its melting point, and crystallizes. The information is written at the crystalline phase, by heating spots of it with short (<10 ns), high-intensity laser pulses; the material melts locally and is quickly cooled, remaining in the amorphous phase. As the amorphous phase has lower reflectivity than the crystalline phase, data can be recorded as dark spots on the crystalline background. Recently, novel liquid organogermanium precursors, such as isobutylgermane [ 1 ] [ 2 ] [ 3 ] (IBGe) and tetrakis(dimethylamino)germane [ 4 ] [ 5 ] (TDMAGe) were developed and used in conjunction with the metalorganics of antimony and tellurium , such as tris-dimethylamino antimony (TDMASb) and di-isopropyl telluride (DIPTe) respectively, to grow GeSbTe and other chalcogenide films of very high purity by metalorganic chemical vapor deposition (MOCVD). Dimethylamino germanium trichloride [ 6 ] (DMAGeC) is also reported as the chloride containing and superior dimethylaminogermanium precursor for Ge deposition by MOCVD.
GeSbTe is a ternary compound of germanium , antimony , and tellurium , with composition GeTe-Sb 2 Te 3 . In the GeSbTe system, there is a pseudo-line upon which most of the alloys lie. Moving down this pseudo-line, from Sb 2 Te 3 to GeTe, the melting point and glass transition temperature of the materials increase, the crystallization speed decreases and data retention increases. Hence, in order to achieve a high data transfer rate, a material with a fast crystallization speed, such as Sb 2 Te 3 , is needed. Such a material is not stable because of its low activation energy. On the other hand, materials with good amorphous stability, like GeTe, have slow crystallization speeds because of their high activation energy. In its stable state, crystalline GeSbTe has two possible configurations: hexagonal and a metastable face-centered cubic (FCC) lattice. When rapidly crystallized, however, it has been found to have a distorted rocksalt structure. GeSbTe has a glass transition temperature of around 100 °C. [ 7 ] GeSbTe also has many vacancy defects in the lattice, of 20 to 25% depending on the specific GeSbTe compound. Te also has an extra lone pair of electrons, which is important for many of the characteristics of GeSbTe. Crystal defects are common in GeSbTe, and due to these defects an Urbach tail forms in the band structure of these compounds. GeSbTe is generally p-type and there are many electronic states in the band gap accounting for acceptor- and donor-like traps. GeSbTe has two stable states, crystalline and amorphous. The phase change mechanism from the high-resistance amorphous phase to the low-resistance crystalline phase on a nanosecond timescale, and threshold switching, are two of the most important characteristics of GeSbTe.
The unique characteristic that makes phase-change memory useful as a memory is the ability to effect a reversible phase change when heated or cooled, switching between stable amorphous and crystalline states. These alloys have high resistance in the amorphous state ‘0’ and are semimetals in the crystalline state ‘1’. In the amorphous state, the atoms have short-range atomic order and low free electron density. The alloy also has high resistivity and activation energy. This distinguishes it from the crystalline state, which has low resistivity and activation energy, long-range atomic order and high free electron density. When used in phase-change memory, a short, high-amplitude electric pulse, which heats the material past its melting point and then quenches it rapidly so that it switches from the crystalline to the amorphous phase, is widely termed the RESET current; a relatively longer, lower-amplitude pulse, which heats the material only to its crystallization point and allows it time to crystallize, switching it from the amorphous to the crystalline phase, is known as the SET current.
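The SET/RESET programming scheme described above can be summarized in code; the pulse amplitudes and durations below are placeholder assumptions for illustration, not device parameters from the text.

```python
# Illustrative sketch of phase-change programming pulses (placeholder numbers)
def program_pulse(target_phase):
    if target_phase == "amorphous":
        # RESET: short, high-amplitude pulse -> melt, then quench rapidly
        return {"amplitude": "high", "duration_ns": 10, "trailing_edge": "abrupt"}
    if target_phase == "crystalline":
        # SET: longer, lower-amplitude pulse -> hold near crystallization T, cool slowly
        return {"amplitude": "low", "duration_ns": 100, "trailing_edge": "gradual"}
    raise ValueError("unknown phase")

print(program_pulse("amorphous"))
print(program_pulse("crystalline"))
```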
The early devices were slow, power-consuming and broke down easily due to the large currents required. Therefore, phase-change memory initially did not succeed, as SRAM and flash memory took over. In the 1980s, though, the discovery of germanium-antimony-tellurium (GeSbTe) meant that phase-change memory needed less time and power to function. This resulted in the success of the rewritable optical disk and created renewed interest in phase-change memory. The advances in lithography also meant that the previously excessive programming current has become much smaller, as the volume of GeSbTe that changes phase is reduced.
Phase-change memory has many near-ideal memory qualities such as non-volatility , fast switching speed, high endurance of more than 10 13 read –write cycles, non-destructive read, direct overwriting and long data retention time of more than 10 years. The one advantage that distinguishes it from other next-generation non-volatile memories like magnetic random access memory (MRAM) is the unique scaling advantage of having better performance with smaller sizes. Scaling of phase-change memory is hence limited only by lithography, at least down to 45 nm. Thus, it offers the biggest potential of achieving ultra-high-density memory cells that can be commercialized.
Though phase-change memory offers much promise, there are still certain technical problems that need to be solved before it can reach ultra-high density and be commercialized. The most important challenge for phase-change memory is to reduce the programming current to a level that is compatible with the minimum MOS transistor drive current for high-density integration. Currently, the programming current in phase-change memory is substantially high. This high current limits the memory density of phase-change memory cells, as the current supplied by the transistor is not sufficient for their high current requirement. Hence, the unique scaling advantage of phase-change memory cannot be fully utilized.
The typical phase-change memory device design is shown. Its layers include the top electrode, GST (the GeSbTe layer), BEC (the bottom electrode) and the dielectric layers. The programmable volume is the GeSbTe volume that is in contact with the bottom electrode. This is the part that can be scaled down with lithography. The thermal time constant of the device is also important. The thermal time constant must be fast enough for GeSbTe to cool rapidly into the amorphous state during RESET, but slow enough to allow crystallization to occur during the SET state. The thermal time constant depends on the design and the materials from which the cell is built. To read, a low current pulse is applied to the device. A small current ensures the material does not heat up. The information stored is read out by measuring the resistance of the device.
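The read operation described above amounts to thresholding the measured cell resistance; the threshold and resistance values in this sketch are arbitrary illustrative numbers, not device specifications from the article.

```python
# Read a phase-change cell: high resistance -> amorphous ('0'), low resistance -> crystalline ('1')
def read_cell(resistance_ohm, threshold_ohm=1e5):
    return 0 if resistance_ohm > threshold_ohm else 1

print(read_cell(5e6))   # 0, amorphous state
print(read_cell(2e3))   # 1, crystalline state
```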
Threshold switching occurs when GeSbTe goes from a highly resistive state to a conductive state at a threshold field of about 56 V/μm. [ 8 ] This can be seen from the current - voltage (IV) plot, where the current is very low in the amorphous state at low voltage until the threshold voltage is reached. The current increases rapidly after the voltage snapback . The material is now in the amorphous "ON" state, where the material is still amorphous but in a pseudo-crystalline electric state. In the crystalline state, the IV characteristic is ohmic . There has been debate on whether threshold switching is an electrical or thermal process. There were suggestions that the exponential increase in current at the threshold voltage must be due to the generation of carriers that varies exponentially with voltage, such as impact ionization or tunneling . [ 9 ]
Recently, much research has focused on the material analysis of the phase-change material in an attempt to explain the high-speed phase change of GeSbTe. Using EXAFS , it was found that the best-matching model for crystalline GeSbTe is a distorted rocksalt lattice, and for amorphous GeSbTe a tetrahedral structure. The small change in configuration from distorted rocksalt to tetrahedral suggests that a nanosecond-timescale phase change is possible, [ 10 ] as the major covalent bonds stay intact and only the weaker bonds are broken.
Using the most probable crystalline and amorphous local structures for GeSbTe, the fact that the density of crystalline GeSbTe is less than 10% larger than that of amorphous GeSbTe, and the fact that the free energies of both amorphous and crystalline GeSbTe have to be of around the same magnitude, it was hypothesized from density functional theory simulations [ 11 ] that the most stable amorphous state is the spinel structure, where Ge occupies tetrahedral positions and Sb and Te occupy octahedral positions, as its ground state energy was the lowest of all the possible configurations. This conjecture has since been confirmed theoretically by means of Car-Parrinello molecular dynamics simulations. [ 12 ]
Another similar material is AgInSbTe . It offers higher linear density, but has lower overwrite cycles by 1-2 orders of magnitude. It is used in groove-only recording formats, often in rewritable CDs . AgInSbTe is known as a growth-dominated material while GeSbTe is known as a nucleation-dominated material. In GeSbTe, the nucleation process of crystallization is long with many small crystalline nuclei being formed before a short growth process where the numerous small crystals are joined. In AgInSbTe, there are only a few nuclei formed in the nucleation stage and these nuclei grow bigger in the longer growth stage such that they eventually form one crystal. [ 13 ] | https://en.wikipedia.org/wiki/Ge2Sb2Te5 |
Germanium(IV) nitride is an inorganic compound with the chemical formula Ge 3 N 4 . It can be produced through the reaction of germanium and ammonia : [ 1 ] 3 Ge + 4 NH 3 → Ge 3 N 4 + 6 H 2
In its pure state, germanium(IV) nitride is a colorless, inert solid that crystallizes in many polymorphs, of which the most stable is the trigonal β-form (space group P 31 c ). In this structure, the germanium atoms are tetrahedrally coordinated while the nitrogen atoms are trigonal planar. [ 2 ] The γ-form, which forms under high pressure, has a spinel structure . [ 3 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ge3N4 |
Germanium tetrabromide is the inorganic compound with the formula GeBr 4 . It is a colorless solid that melts near room temperature. It can be formed by treating solid germanium with bromine , or by treating a germanium-copper mixture with bromine: [ 2 ] Ge + 2 Br 2 → GeBr 4
From this reaction, GeBr 4 has a heat of formation of 83.3 kcal/mol. [ 3 ]
The compound is liquid at 25 °C, and forms an interlocking liquid structure. [ 4 ] From room temperature down to −60 °C the structure takes on a cubic α form, whereas at lower temperatures it takes on a monoclinic β form.
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GeBr4 |
Germanium dichloride is a chemical compound of germanium and chlorine with the formula GeCl 2 . It is a yellow solid. Germanium dichloride is an example of a compound featuring germanium in the +2 oxidation state .
Solid germanium dichloride can be produced by comproportionation by passing germanium tetrachloride , GeCl 4 , over germanium metal at 300 °C and reduced pressure (0.1 mmHg). [ 1 ]
Germanium dichloride is also formed from the decomposition of trichlorogermane, GeHCl 3 , at 70 °C. Trichlorogermane is generated when germanium reacts with hydrogen chloride. [ 1 ] This reaction involves dehydrohalogenation: GeHCl 3 → GeCl 2 + HCl
Another route to germanium dichloride is the reduction of germanium tetrachloride with hydrogen at 800 °C. [ 1 ]
GeCl 2 is hydrolysed to give yellow germanium(II) hydroxide, which on warming gives brown germanium monoxide : [ 1 ] GeCl 2 + 2 H 2 O → Ge(OH) 2 + 2 HCl ; Ge(OH) 2 → GeO + H 2 O
Alkalizing a solution containing germanium(II) ions also gives the hydroxide: Ge 2+ + 2 OH − → Ge(OH) 2
Germanium oxides and hydroxides are amphoteric.
Solutions of GeCl 2 in HCl are strongly reducing. [ 2 ] With chloride ion, ionic compounds containing the pyramidal GeCl 3 − ion have been characterised. [ 3 ] With rubidium and caesium chlorides, compounds such as RbGeCl 3 are produced; these have distorted perovskite structures. [ 1 ]
Germanium dichloride reacts with tetraethylammonium chloride to give the trichlorogermanate : [ 4 ] GeCl 2 + [Et 4 N]Cl → [Et 4 N][GeCl 3 ]
Molecular GeCl 2 is often called dichlorogermylene , highlighting its resemblance to a carbene . The structure of gas-phase molecular GeCl 2 shows that it is a bent molecule , as predicted by VSEPR theory. [ 5 ] The dioxane complex, GeCl 2 ·dioxane , has been used as a source of molecular GeCl 2 for reaction syntheses, as has the in situ reaction of GeCl 4 and Ge metal. GeCl 2 is quite reactive and inserts into many types of chemical bonds. [ 6 ] Usually, germanium dichloride is generated from germanium dichloride dioxane . | https://en.wikipedia.org/wiki/GeCl2 |
Germanium tetrachloride is a colourless, fuming liquid [ 4 ] with a peculiar, acidic odour. It is used as an intermediate in the production of purified germanium metal. In recent years, GeCl 4 usage has increased substantially due to its use as a reagent for fiber optic production.
Most commercial production of germanium is from treating flue-dusts of zinc- and copper-ore smelters, although a significant source is also found in the ash from the combustion of certain types of coal called vitrain . Germanium tetrachloride is an intermediate for the purification of germanium metal or its oxide, GeO 2 . [ 5 ]
Germanium tetrachloride can be generated directly from GeO 2 ( germanium dioxide ) by dissolution of the oxide in concentrated hydrochloric acid. The resulting mixture is fractionally distilled to purify and separate the germanium tetrachloride from other products and impurities. [ 6 ] The GeCl 4 can be rehydrolysed with deionized water to produce pure GeO 2 , which is then reduced under hydrogen to produce germanium metal. [ 5 ] [ 6 ]
Production of GeO 2 , however, is dependent on the oxidized form of germanium extracted from the ore. Copper-lead-sulfide and zinc-sulfide ores will produce GeS 2 , which is subsequently oxidized to GeO 2 with an oxidizer such as sodium chlorate . Zinc-ores are roasted and sintered and can produce the GeO 2 directly. The oxide is then processed as discussed above. [ 5 ]
The classic synthesis from chlorine and germanium metal at elevated temperatures is also possible. [ 7 ] [ 1 ] Additionally, a chlorine free activation of germanium has been developed, giving a less energy intensive and more environmentally friendly alternative synthesis for germanium precursors.
Germanium tetrachloride is used almost exclusively as an intermediate for several optical processes. GeCl 4 can be directly hydrolysed to GeO 2 , an oxide glass with several unique properties and applications, described below and in linked articles.
A notable derivative of GeCl 4 is germanium dioxide . In the manufacture of optical fibers , silicon tetrachloride , SiCl 4 , and germanium tetrachloride, GeCl 4 , are introduced with oxygen into a hollow glass preform, which is carefully heated to allow for oxidation of the reagents to their respective oxides and formation of a glass mixture. The GeO 2 has a high index of refraction, so by varying the flow rate of germanium tetrachloride the overall index of refraction of the optical fiber can be specifically controlled. The GeO 2 is about 4% by weight of the glass. [ 5 ] | https://en.wikipedia.org/wiki/GeCl4 |
Germanium difluoride ( GeF 2 ) is a chemical compound of germanium and fluorine . It is a white solid with a melting point of 110 °C, and can be produced by reacting germanium tetrafluoride with germanium powder at 150–300 °C. [ 2 ]
Germanium difluoride forms orthorhombic crystals with a space group P2 1 2 1 2 1 (No. 19), Pearson symbol oP12, and lattice constants a = 0.4682 nm, b = 0.5178 nm, c = 0.8312 nm, Z = 4 (four structure units per unit cell). Its crystal structure is characterized by strong polymeric chains composed by GeF 3 pyramids. One of the fluorine atom in the pyramid is shared by two neighboring chains, providing a weak link between them. [ 3 ] Another, less common crystal form of GeF 2 has tetragonal symmetry with a space group P4 1 2 1 2 (No. 92), Pearson symbol tP12, and lattice constants a = 0.487 nm, b = 0.6963 nm, c = 0.858 nm. [ 4 ] | https://en.wikipedia.org/wiki/GeF2 |
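From the orthorhombic cell parameters and Z = 4 given above, the implied X-ray (theoretical) density can be estimated. The atomic weights and Avogadro constant below are standard values, and the result is a calculation for illustration, not a quoted literature density.

```python
# Theoretical density of GeF2 from the unit-cell data above
a, b, c = 0.4682e-7, 0.5178e-7, 0.8312e-7   # lattice constants, cm
Z = 4                                       # formula units per cell
M = 72.63 + 2 * 18.998                      # molar mass of GeF2, g/mol
N_A = 6.02214e23

density = Z * M / (N_A * a * b * c)
print(round(density, 2))   # ~3.65 g/cm^3
```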
Germanium tetrafluoride ( GeF 4 ) is a chemical compound of germanium and fluorine . It is a colorless gas.
Germanium tetrafluoride is formed by treating germanium with fluorine: Ge + 2 F 2 → GeF 4
Alternatively, germanium dioxide combines with hydrofluoric acid (HF): [ 3 ] GeO 2 + 4 HF → GeF 4 + 2 H 2 O
It is also formed during the thermal decomposition of a complex salt, Ba[GeF 6 ]: [ 4 ] Ba[GeF 6 ] → BaF 2 + GeF 4
Germanium tetrafluoride is a noncombustible, strongly fuming gas with a garlic-like odor. It reacts with water to form hydrofluoric acid and germanium dioxide. Decomposition occurs above 1000 °C. [ 5 ]
Reaction of GeF 4 with fluoride sources produces GeF 5 − anions with octahedral coordination around Ge atom due to polymerization. [ 6 ] The structural characterization of a discrete trigonal bipyramidal GeF 5 − anion was achieved by a "naked" fluoride reagent 1,3-bis(2,6-diisopropylphenyl)imidazolium fluoride. [ 7 ]
In combination with disilane, germanium tetrafluoride is used in the synthesis of SiGe . [ 1 ]
Germanane is a single-layer crystal composed of germanium with one hydrogen bonded in the z-direction for each atom, [ 1 ] in contrast to germanene which contains no hydrogen. In material science , great interest is shown in related single layered materials, such as graphene , composed of carbon, and silicene, composed of silicon . Such materials represent a new generation of semiconductors with potential applications in computer chips and solar cells . Germanane's structure is similar to graphane , and therefore graphene. Bulk germanium does not adopt this structure. Germanane has been produced in a two-step route starting with calcium germanide . From this material, the calcium is removed by de-intercalation with HCl to give a layered solid with the empirical formula GeH . [ 2 ] The Ca sites in Zintl phase CaGe 2 interchange with the H atoms in the HCl solution, which leaves GeH and CaCl 2 .
Germanane's electron mobility is predicted to be more than ten times that of silicon and five times more than conventional germanium. Hydrogen-doped germanane is chemically and physically stable when exposed to air and water. [ 2 ]
Germanane has a " direct band gap ", easily absorbing and emitting light, and potentially useful for optoelectronics . [ 3 ] (Conventional silicon and germanium have indirect band gaps, reducing light absorption or emission.) In addition, the Ge atoms have higher spin-orbit coupling (as compared to C in graphene/graphane) which can allow us to explore the quantum spin Hall effect.
Researchers at the University of Groningen in the Netherlands and the University of Ioannina in Greece have reported on the first field-effect transistor fabricated with germanane, highlighting its promising electronic and optoelectronic properties. [ 4 ] [ 5 ] Germanane FETs show transport in both electron- and hole-doped regimes, with on/off current ratios of up to 10 5 (10 4 ) and carrier mobilities of 150 cm 2 (V·s) −1 (70 cm 2 (V·s) −1 ) at 77 K (room temperature). A significant enhancement of the device conductivity under illumination with a 650 nm red laser is observed.
Germane is the chemical compound with the formula Ge H 4 , and the germanium analogue of methane . It is the simplest germanium hydride and one of the most useful compounds of germanium. Like the related compounds silane and methane, germane is tetrahedral . It burns in air to produce GeO 2 and water . Germane is a group 14 hydride .
Germane has been detected in the atmosphere of Jupiter . [ 3 ]
Germane is typically prepared by reduction of germanium oxides, notably germanates , with hydride reagents such as sodium borohydride , potassium borohydride , lithium borohydride , lithium aluminium hydride , sodium aluminium hydride . The reaction with borohydrides is catalyzed by various acids and can be carried out in either aqueous or organic solvent . On laboratory scale, germane can be prepared by the reaction of Ge(IV) compounds with these hydride reagents. [ 4 ] [ 5 ] A typical synthesis involved the reaction of sodium germanate with potassium borohydride . [ 6 ]
Other methods for the synthesis of germane include electrochemical reduction and a plasma -based method. [ 7 ] The electrochemical reduction method involves applying voltage to a germanium metal cathode immersed in an aqueous electrolyte solution and an anode counter-electrode composed of a metal such as molybdenum or cadmium . In this method, germane and hydrogen gases evolve from the cathode while the anode reacts to form solid molybdenum oxide or cadmium oxides . The plasma synthesis method involves bombarding germanium metal with hydrogen atoms (H) that are generated using a high frequency plasma source to produce germane and digermane .
Germane is weakly acidic . In liquid ammonia GeH 4 is ionised forming NH 4 + and GeH 3 − . [ 8 ] With alkali metals in liquid ammonia GeH 4 reacts to give white crystalline MGeH 3 compounds. The potassium (potassium germyl or potassium trihydrogen germanide KGeH 3 ) and rubidium compounds (rubidium germyl or rubidium trihydrogen germanide RbGeH 3 ) have the sodium chloride structure implying a free rotation of the trihydrogen germanide anion GeH 3 − , the caesium compound, caesium germyl or caesium trihydrogen germanide CsGeH 3 in contrast has the distorted sodium chloride structure of TlI . [ 8 ]
The gas decomposes near 600 K (327 °C; 620 °F) to germanium and hydrogen. Because of its thermal lability , germane is used in the semiconductor industry for the epitaxial growth of germanium by MOVPE or chemical beam epitaxy . [ 9 ] Organogermanium precursors (e.g. isobutylgermane , alkylgermanium trichlorides, and dimethylaminogermanium trichloride) have been examined as less hazardous liquid alternatives to germane for deposition of Ge-containing films by MOVPE. [ 10 ]
Germane is a highly flammable , potentially pyrophoric , [ 11 ] and highly toxic gas. In 1970, the American Conference of Governmental Industrial Hygienists (ACGIH) published the latest changes and set the occupational exposure threshold limit value at 0.2 ppm for an 8-hour time-weighted average. [ 12 ] The LC50 for rats at 1 hour of exposure is 622 ppm. [ 13 ] Inhalation or exposure may result in malaise, headache, dizziness, fainting, dyspnea, nausea, vomiting, kidney injury, and hemolytic effects. [ 14 ] [ 15 ] [ 16 ]
The US Department of Transportation hazard class is 2.3 Poisonous Gas. [ 12 ] | https://en.wikipedia.org/wiki/GeH4 |
GeNMR method (GEnerate NMR structures) is the first fully automated template-based method of protein structure determination that utilizes both NMR chemical shifts and NOE -based distance restraints. [ 1 ]
In addition to the template-based approach, the GeNMR webserver also offers an ab initio protein folding mode that starts folding from an extended structure. The GeNMR web server produces an ensemble of PDB coordinates within a period ranging from 20 minutes to 4 hours, depending on protein size, server load, quality and type of experimental information, and selected protocol options. GeNMR webserver is composed of two parts, a front-end web-interface (written in Perl and HTML) and a back-end consisting of eight different alignment, structure generation and structure optimization programs along with three local databases.
GeNMR accepts and processes backbone and side chain 1H, 13C or 15N chemical shift data of almost any combination (HA only, HN only, HA+HN only, HA+HN+sidechain H, CA only, CA+CB only, CA+CO only, HA+CA+CB, HN+CA+CB, HN+15N only, HN,+15N+CA, HN+15N+CA+CB, etc.). This allows GeNMR to handle small peptides (where only H shifts are typically measured) to large proteins (where only N or C shifts might be available).
As of 2009, the input files had to include chemical shift data in NMR-STAR 2.1 format and distance restraints in XPLOR/CNS file format. [ 1 ] The minimum sequence length is 30 residues.
The output for a typical GeNMR structure calculation consists of a user-defined set of lowest-energy PDB coordinates in a simple, downloadable text format. In addition, details about the overall energy score (prior to and following energy minimization) and the chemical shift correlations (between the observed and calculated shifts) are provided at the top of the output page. If the score fails to decrease below a certain threshold, a warning is printed at the top of the page.
A flow chart describing the processing logic used in GeNMR is shown on the right. GeNMR makes use of a number of well-known programs and databases. These include Proteus2 to perform structural modeling, [ 2 ] PREDITOR to calculate torsion angles from chemical shifts, [ 3 ] PPT-DB for comparative modeling and alignment, [ 4 ] and CS23D to calculate protein structures from chemical shifts only. GeNMR also uses several well-known external programs, including Rosetta for ab initio folding without NOEs [ 5 ] and XPLOR-NIH for NOE-based simulated annealing and refinement. [ 6 ] A more complete list of GeNMR sub-programs is listed on the CS23D page.
GeNMR uses homology modeling and sequence/structure threading to rapidly generate a first-pass model of the query protein. The use of homology modeling/threading in GeNMR allows a considerable speed-up in its structure calculations since homology models can often be generated and refined in a minute or two.
GeNMR also makes use of genetic algorithms to allow configurational sampling and structural refinement using non-differentiable scores, such as ShiftX chemical shift scores. GeNMR's genetic algorithm creates a population of initial structures and then uses combinations of mutations, cross-overs, segment swaps and writhe movements to comprehensively sample conformation space. The 25 lowest energy structures are then selected, duplicated and carried to the next round of conformational sampling.
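A schematic of the sampling loop described above, not the actual GeNMR implementation: the `score` and `random_move` parameters stand in for GeNMR's energy/chemical-shift scoring function and for its mutation, cross-over, segment-swap and writhe operators, which are assumed here only as abstract callables.

```python
# Schematic genetic-algorithm refinement loop (illustrative; helper functions are assumptions)
def refine(initial_structures, score, random_move, n_rounds=10, n_keep=25, n_children=4):
    population = list(initial_structures)
    for _ in range(n_rounds):
        # generate new conformations from the current population
        children = [random_move(s) for s in population for _ in range(n_children)]
        population.extend(children)
        # keep the lowest-energy structures and carry them into the next round
        population = sorted(population, key=score)[:n_keep]
    return population
```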
The potential functions used in GeNMR are derived from those used in CS23D [ 7 ] and Proteus2. [ 2 ] The knowledge-based potentials include information on predicted/known secondary structure, radius of gyration, hydrogen bond energies, number of hydrogen bonds, allowed backbone and side chain torsion angles, atom contact radii (bump checks), disulfide bonding information and a modified threading energy based on the Bryant and Lawrence potential. [ 8 ] The chemical shift component of the GeNMR potential uses weighted correlation coefficients calculated between the observed and SHIFTX calculated shifts of the structure being refined. [ 9 ]
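The chemical-shift term can be illustrated by a simple correlation between observed and back-calculated shifts. This is a generic Pearson correlation, not GeNMR's exact weighted scoring code, and the shift values are invented for the example.

```python
import numpy as np

def shift_correlation(observed, calculated):
    # Pearson correlation between observed and back-calculated chemical shifts
    return float(np.corrcoef(np.asarray(observed), np.asarray(calculated))[0, 1])

obs  = [4.3, 4.1, 3.9, 4.6, 4.4]   # e.g. HA shifts in ppm (illustrative)
calc = [4.2, 4.0, 4.0, 4.5, 4.3]
print(round(shift_correlation(obs, calc), 3))
```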
There are six different kinds of calculation scenarios that GeNMR can currently accommodate. These scenarios include: | https://en.wikipedia.org/wiki/GeNMR |
Germanium monoxide ( chemical formula GeO) is a chemical compound of germanium and oxygen . It can be prepared as a yellow sublimate at 1000 °C by reacting GeO 2 with Ge metal. The yellow sublimate turns brown on heating to 650 °C. [ 1 ] GeO is not well characterised. [ 1 ] It is amphoteric , dissolving in acids to form germanium(II) salts and in alkali to form "trihydroxogermanates" or "germanites" containing the Ge(OH) 3 − ion. [ 2 ]
Germanium monoxide disproportionates to Ge and GeO 2 : 2 GeO → Ge + GeO 2 . [ 3 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GeO |
Germanium dioxide , also called germanium(IV) oxide , germania , and salt of germanium , [ 1 ] is an inorganic compound with the chemical formula Ge O 2 . It is the main commercial source of germanium. It also forms as a passivation layer on pure germanium in contact with atmospheric oxygen.
The two predominant polymorphs of GeO 2 are hexagonal and tetragonal. Hexagonal GeO 2 has the same structure as α-quartz, with germanium having coordination number 4. Tetragonal GeO 2 (the mineral argutite ) has the rutile -like structure seen in stishovite . In this motif, germanium has the coordination number 6. An amorphous (glassy) form of GeO 2 is similar to fused silica . [ 2 ]
Germanium dioxide can be prepared in both crystalline and amorphous forms. At ambient pressure the amorphous structure is formed by a network of GeO 4 tetrahedra. At elevated pressure up to approximately 9 GPa the germanium average coordination number steadily increases from 4 to around 5 with a corresponding increase in the Ge–O bond distance. [ 3 ] At higher pressures, up to approximately 15 GPa , the germanium coordination number increases to 6, and the dense network structure is composed of GeO 6 octahedra. [ 4 ] When the pressure is subsequently reduced, the structure reverts to the tetrahedral form. [ 3 ] [ 4 ] At high pressure, the rutile form converts to an orthorhombic CaCl 2 form. [ 5 ]
Heating germanium dioxide with powdered germanium at 1000 °C forms germanium monoxide (GeO). [ 2 ]
The hexagonal ( d = 4.29 g/cm 3 ) form of germanium dioxide is more soluble than the rutile ( d = 6.27 g/cm 3 ) form and dissolves to form germanic acid, H 4 GeO 4 , or Ge(OH) 4 . [ 6 ] GeO 2 is only slightly soluble in acid but dissolves more readily in alkali to give germanates . [ 6 ] The germanic acid forms stable complexes with di- and polyfunctional carboxylic acids , poly-alcohols , and o-diphenols . [ 7 ]
In contact with hydrochloric acid , it releases the volatile and corrosive germanium tetrachloride .
The refractive index (1.7) and optical dispersion properties of germanium dioxide make it useful as an optical material for wide-angle lenses , in optical microscope objective lenses , and for the core of fiber-optic lines. See Optical fiber for specifics on the manufacturing process. Both germanium and its glass oxide, GeO 2 , are transparent to the infrared (IR) spectrum. The glass can be manufactured into IR windows and lenses, used for night-vision technology in the military, luxury vehicles, [ 8 ] and thermographic cameras . GeO 2 is preferred over other IR transparent glasses because it is mechanically strong and therefore preferred for rugged military usage. [ 9 ]
A mixture of silicon dioxide and germanium dioxide ("silica-germania") is used as an optical material for optical fibers and optical waveguides . [ 10 ] Controlling the ratio of the elements allows precise control of refractive index. Silica-germania glasses have lower viscosity and higher refractive index than pure silica. Germania replaced titania as the silica dopant for silica fiber, eliminating the need for subsequent heat treatment, which made the fibers brittle. [ 11 ]
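One way the germania-controlled refractive index shows up in practice is the fiber's numerical aperture, set by the core/cladding index contrast. The index values below are illustrative assumptions, not figures from the article.

```python
import math

n_clad = 1.444   # assumed pure-silica cladding index
n_core = 1.450   # assumed GeO2-doped core index

numerical_aperture = math.sqrt(n_core**2 - n_clad**2)
print(round(numerical_aperture, 3))   # ~0.132
```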
Germanium dioxide is used as a colorant in borosilicate glass, used in lampworking. When combined with copper oxide, it provides a more stable red. When combined with silver oxide, it gives the glass a very reactive, changeable color, "a wonderful rainbow effect", that can shift from light amber to a somewhat reddish and even deep purple appearance. The color can vary based on the chemistry of the flame used to melt the glass (whether it has more oxygen or more fuel), and it can also change depending on the temperature of the kiln used to anneal the glass. [ 12 ]
Germanium dioxide is also used as a catalyst in production of polyethylene terephthalate resin, [ 13 ] and for production of other germanium compounds. It is used as a feedstock for production of some phosphors and semiconductor materials .
Germanium dioxide is used in algaculture as an inhibitor of unwanted diatom growth in algal cultures, since contamination with the comparatively fast-growing diatoms often inhibits the growth of or outcompetes the original algae strains. GeO 2 is readily taken up by diatoms and leads to silicon being substituted by germanium in biochemical processes within the diatoms, causing a significant reduction of the diatoms' growth rate or even their complete elimination, with little effect on non-diatom algal species. For this application, the concentration of germanium dioxide typically used in the culture medium is between 1 and 10 mg/L, depending on the stage of the contamination and the species. [ 14 ]
Germanium dioxide has low toxicity, but it is nephrotoxic in higher doses. [ citation needed ]
Germanium dioxide is used as a germanium supplement in some questionable dietary supplements and "miracle cures". [ 15 ] High doses of these resulted in several cases of germanium poisonings. | https://en.wikipedia.org/wiki/GeO2 |
Germanium disulfide or Germanium(IV) sulfide is the inorganic compound with the formula Ge S 2 . It is a white high-melting crystalline solid. [ 1 ] [ 2 ] The compound is a 3-dimensional polymer, [ 3 ] [ 4 ] in contrast to silicon disulfide , which is a one-dimensional polymer. The Ge-S distance is 2.19 Å. [ 3 ]
Germanium disulfide was first found in samples of argyrodite . The fact that germanium sulfide does not dissolve in aqueous acid facilitated its isolation. [ 5 ]
Germanium disulfide is produced by treating a solution of germanium tetrachloride in a concentrated hydrochloric acid solution with hydrogen sulfide . It precipitates as a white solid. [ 6 ]
Although it is insoluble in water, it dissolves in aqueous solutions of sodium sulfide owing to the formation of thiogermanates.
Natural GeS 2 is restricted to fumaroles of some burning coal-mining waste heaps. [ 7 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GeS2 |
GeSbTe ( germanium-antimony-tellurium or GST ) is a phase-change material from the group of chalcogenide glasses used in rewritable optical discs and phase-change memory applications. Its recrystallization time is 20 nanoseconds, allowing bitrates of up to 35 Mbit /s to be written and direct overwrite capability up to 10 6 cycles. It is suitable for land-groove recording formats. It is often used in rewritable DVDs . New phase-change memories are possible using n-doped GeSbTe semiconductor . The melting point of the alloy is about 600 °C (900 K) and the crystallization temperature is between 100 and 150 °C.
During writing, the material is erased, initialized into its crystalline state, with low-intensity laser irradiation. The material heats up to its crystallization temperature, but not its melting point, and crystallizes. The information is written at the crystalline phase, by heating spots of it with short (<10 ns), high-intensity laser pulses; the material melts locally and is quickly cooled, remaining in the amorphous phase. As the amorphous phase has lower reflectivity than the crystalline phase, data can be recorded as dark spots on the crystalline background. Recently, novel liquid organogermanium precursors, such as isobutylgermane [ 1 ] [ 2 ] [ 3 ] (IBGe) and tetrakis(dimethylamino)germane [ 4 ] [ 5 ] (TDMAGe) were developed and used in conjunction with the metalorganics of antimony and tellurium , such as tris-dimethylamino antimony (TDMASb) and di-isopropyl telluride (DIPTe) respectively, to grow GeSbTe and other chalcogenide films of very high purity by metalorganic chemical vapor deposition (MOCVD). Dimethylamino germanium trichloride [ 6 ] (DMAGeC) is also reported as the chloride containing and superior dimethylaminogermanium precursor for Ge deposition by MOCVD.
GeSbTe is a ternary compound of germanium , antimony , and tellurium , with composition GeTe-Sb 2 Te 3 . In the GeSbTe system, there is a pseudo-line as shown upon which most of the alloys lie. Moving down this pseudo-line, it can be seen that as we go from Sb 2 Te 3 to GeTe, the melting point and glass transition temperature of the materials increase, crystallization speed decreases and data retention increases. Hence, in order to get high data transfer rate, we need to use material with fast crystallization speed such as Sb 2 Te 3 . This material is not stable because of its low activation energy. On the other hand, materials with good amorphous stability like GeTe has slow crystallization speed because of its high activation energy. In its stable state, crystalline GeSbTe has two possible configurations: hexagonal and a metastable face-centered cubic (FCC) lattice. When it is rapidly crystallized however, it was found to have a distorted rocksalt structure. GeSbTe has a glass transition temperature of around 100 °C. [ 7 ] GeSbTe also has many vacancy defects in the lattice, of 20 to 25% depending on the specific GeSbTe compound. Hence, Te has an extra lone pair of electrons, which are important for many of the characteristics of GeSbTe. Crystal defects are also common in GeSbTe and due to these defects, an Urbach tail in the band structure is formed in these compounds. GeSbTe is generally p type and there are many electronic states in the band gap accounting for acceptor and donor like traps. GeSbTe has two stable states, crystalline and amorphous. The phase change mechanism from high resistance amorphous phase to low resistance crystalline phase in nano-timescale and threshold switching are two of the most important characteristic of GeSbTe.
The unique characteristic that makes phase-change memory useful as a memory is the ability to effect a reversible phase change when heated or cooled, switching between stable amorphous and crystalline states. These alloys have high resistance in the amorphous state ‘0’ and are semimetals in the crystalline state ‘1’. In amorphous state, the atoms have short-range atomic order and low free electron density. The alloy also has high resistivity and activation energy. This distinguishes it from the crystalline state having low resistivity and activation energy, long-range atomic order and high free electron density. When used in phase-change memory, use of a short, high amplitude electric pulse such that the material reaches melting point and rapidly quenched changes the material from crystalline phase to amorphous phase is widely termed as RESET current and use of a relatively longer, low amplitude electric pulse such that the material reaches only the crystallization point and given time to crystallize allowing phase change from amorphous to crystalline is known as SET current.
The early devices were slow, power consuming and broke down easily due to the large currents. Therefore, it did not succeed as SRAM and flash memory took over. In the 1980s though, the discovery of germanium-antimony-tellurium (GeSbTe) meant that phase-change memory now needed less time and power to function. This resulted in the success of the rewriteable optical disk and created renewed interest in the phase-change memory. The advances in lithography also meant that previously excessive programming current has now become much smaller as the volume of GeSbTe that changes phase is reduced.
Phase-change memory has many near ideal memory qualities such as non-volatility , fast switching speed, high endurance of more than 10 13 read –write cycles, non-destructive read, direct overwriting and long data retention time of more than 10 years. The one advantage that distinguishes it from other next generation non-volatile memory like magnetic random access memory (MRAM) is the unique scaling advantage of having better performance with smaller sizes. The limit to which phase-change memory can be scaled is hence limited by lithography at least until 45 nm. Thus, it offers the biggest potential of achieving ultra-high memory density cells that can be commercialized.
Though phase-change memory offers much promise, there are still certain technical problems that need to be solved before it can reach ultra-high density and commercialized. The most important challenge for phase-change memory is to reduce the programming current to the level that is compatible with the minimum MOS transistor drive current for high-density integration. Currently, the programming current in phase-change memory is substantially high. This high current limits the memory density of the phase-change memory cells as the current supplied by the transistor is not sufficient due to their high current requirement. Hence, the unique scaling advantage of phase-change memory cannot be fully utilized.
The typical phase-change memory device design is shown. It has layers including the top electrode, GST, the GeSbTe layer, BEC, the bottom electrode and the dielectric layers. The programmable volume is the GeSbTe volume that is in contact with the bottom electrode. This is the part that can be scaled down with lithography. The thermal time constant of the device is also important. The thermal time constant must be fast enough for GeSbTe to cool rapidly into the amorphous state during RESET but slow enough to allow crystallization to occur during SET state. The thermal time constant depends on the design and material the cell is built. To read, a low current pulse is applied to the device. A small current ensures the material does not heat up. Information stored is read out by measuring the resistance of the device.
Threshold switching occurs when GeSbTe goes from a high resistive state to a conductive state at the threshold field of about 56 V/um. [ 8 ] This can be seen from the current - voltage (IV) plot, where current is very low in the amorphous state at low voltage until threshold voltage is reached. Current increases rapidly after the voltage snapback . The material is now in the amorphous "ON" state, where the material is still amorphous, but in a pseudo-crystalline electric state. In crystalline state, the IV characteristics is ohmic . There had been debate on whether threshold switching was an electrical or thermal process. There were suggestions that the exponential increase in current at threshold voltage must have been due to generation of carriers that vary exponentially with voltage such as impact ionization or tunneling . [ 9 ]
Recently, much research has focused on the material analysis of the phase-change material in an attempt to explain the high speed phase change of GeSbTe. Using EXAFS , it was found that the most matching model for crystalline GeSbTe is a distorted rocksalt lattice and for amorphous a tetrahedral structure. The small change in configuration from distorted rocksalt to tetrahedral suggests that nano-timescale phase change is possible [ 10 ] as the major covalent bonds are intact and only the weaker bonds are broken.
Using the most possible crystalline and amorphous local structures for GeSbTe, the fact that density of crystalline GeSbTe is less than 10% larger than amorphous GeSbTe, and the fact that free energies of both amorphous and crystalline GeSbTe have to be around the same magnitude, it was hypothesized from density functional theory simulations [ 11 ] that the most stable amorphous state was the spinel structure, where Ge occupies tetrahedral positions and Sb and Te occupy octahedral positions, as the ground state energy was the lowest of all the possible configurations. By means of Car-Parrinello molecular dynamics simulations this conjecture have been theoretically confirmed. [ 12 ]
Another similar material is AgInSbTe . It offers higher linear recording density, but its overwrite cyclability is lower by 1–2 orders of magnitude. It is used in groove-only recording formats, often in rewritable CDs . AgInSbTe is known as a growth-dominated material, while GeSbTe is known as a nucleation-dominated material. In GeSbTe, the nucleation stage of crystallization is long, with many small crystalline nuclei forming before a short growth stage in which the numerous small crystals join together. In AgInSbTe, only a few nuclei form in the nucleation stage, and these nuclei grow larger during the longer growth stage until they eventually form one crystal. [ 13 ] | https://en.wikipedia.org/wiki/GeSbTe
Germanium monoselenide is a chemical compound with the formula GeSe. It exists as a black crystalline powder with orthorhombic (distorted NaCl -type) crystal symmetry; at temperatures around 650 °C, it transforms into the cubic NaCl structure. [ 3 ] GeSe has been shown to have stereochemically active Ge 4s lone pairs that are responsible for the distorted structure and for the relatively high position of the valence band maximum with respect to the vacuum level. [ 4 ]
To grow GeSe crystals, GeSe powder is vaporized at the hot end of a sealed ampule and allowed to condense at the cold end. Crystals grown this way are usually small and show signs of irregular growth, caused mainly by convective motion in the gaseous medium. However, GeSe crystals grown under conditions of zero gravity and reduced convection aboard Skylab were roughly 10 times larger than Earth-grown crystals and free of visible defects. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/GeSe
Germanium telluride (GeTe) is a chemical compound of germanium and tellurium and is a component of chalcogenide glass . It shows semimetallic conduction and ferroelectric behaviour. [ 3 ]
Germanium telluride exists in three major crystalline forms: the room-temperature α ( rhombohedral ) and γ ( orthorhombic ) structures and the high-temperature β ( cubic , rocksalt-type) phase, the α phase being the most stable phase for pure GeTe below the ferroelectric Curie temperature of approximately 670 K (746 °F; 397 °C). [ 4 ] [ 5 ]
Doped germanium telluride is a low temperature superconductor. [ 6 ]
Solid GeTe can transform between amorphous and crystalline states. The crystalline state has a low resistivity (semiconducting at room temperature) and the amorphous state has a high resistivity. [ 7 ] The difference in resistivity can be up to six orders of magnitude depending on the film quality, GeTe composition, and nucleation site formation. [ 7 ] [ 8 ] The drastic changes in the properties of the material have been exploited in data storage applications. The phase transitions of GeTe can be fast, reversible and repeatable, with drastic property changes, making GeTe a promising candidate in applications like radio frequency (RF) switching and direct current (DC) switching. [ 8 ] Research on the mechanisms that relate the phase transition to radio frequency (RF) switching is under way, with promising prospects for optimization in telecommunication applications. [ 8 ] Although both solid states can exist at room temperature, the transition requires a specific heating and cooling process known as the thermal actuation method. [ 8 ] To achieve the amorphous state, the solid is heated beyond the melting temperature with a high current pulse applied for a short time and is then rapidly quenched. Crystallization happens when the GeTe is heated to a crystallization temperature below the melting temperature with a relatively longer and lower current pulse, followed by a slow quench in which the current is gradually reduced. [ 8 ] Both direct and indirect heating can induce phase changes. [ 8 ] The Joule heating approach is the most common direct heating method, and indirect heating can be accomplished by adding a separate layer of dielectric material to the RF switch. [ 8 ] The crystal structure of GeTe is a rhombohedrally distorted rocksalt-type structure that forms a face-centered cubic (FCC) sublattice at room temperature. [ 8 ]
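As an illustrative sketch of the thermal actuation scheme just described, the following Python fragment generates the two pulse shapes; the amplitudes and durations are arbitrary placeholder values, not measured GeTe device parameters.

```python
# Illustrative SET/RESET pulse shapes for thermal actuation of a phase-change
# element.  All amplitudes (arbitrary units) and durations (ns) are
# hypothetical placeholders, not real GeTe device parameters.

def reset_pulse(t_ns):
    """Short, high-amplitude pulse followed by an abrupt cutoff (rapid quench -> amorphous)."""
    return 1.0 if 0 <= t_ns < 50 else 0.0

def set_pulse(t_ns):
    """Longer, lower pulse with a gradual ramp-down (slow quench -> crystalline)."""
    if 0 <= t_ns < 300:
        return 0.4
    if 300 <= t_ns < 500:
        return 0.4 * (500 - t_ns) / 200.0
    return 0.0

if __name__ == "__main__":
    for t in (0, 25, 100, 350, 450, 600):
        print(f"t={t:3d} ns  reset={reset_pulse(t):.2f}  set={set_pulse(t):.2f}")
```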
Semiconducting GeTe nanowires (NWs) and nanohelices (NHs) are synthesized via a vapor transport method with metal nanoparticle catalysts. GeTe is evaporated and carried by Ar gas, at optimized temperature, pressure, time and gas flow rate, to the downstream collection/growth site (a SiO 2 surface coated with colloidal gold nanoparticles). Temperatures above 500 °C produce thicker nanowires and crystalline chunks. Au is essential to the growth of the NWs and NHs and is suggested to be the metal catalyst of the reaction. This method gives rise to NWs and NHs with a 1:1 ratio of Ge and Te. NWs produced by this method average about 65 nm in diameter and up to 50 μm in length; NHs average about 135 nm in helix diameter. [ 9 ]
The synthesis described above has not reached the sizes required to exhibit quantum size effects. Nanostructures that reach the quantum regime exhibit a different set of phenomena unseen at larger scales, for example spontaneous polar ordering and the splitting of diffraction spots. The synthesis of GeTe nanocrystals with average sizes of 8, 17 and 100 nm involves a divalent Ge(II) chloride – 1,4-dioxane complex and bis[bis(trimethylsilyl)amino]Ge(II) and trioctylphosphine–tellurium in a solvent such as 1,2-dichlorobenzene or phenyl ether. The Ge(II) reduction kinetics are thought to determine GeTe formation: a larger Ge(II) reduction rate may increase the particle nucleation rate, resulting in a smaller particle diameter. [ 10 ]
GeTe has been heavily used in non-volatile optical data storage such as CDs, DVDs and Blu-ray discs, and may replace dynamic and flash random access memories. In 1987, Yamada et al. explored the phase-changing properties of GeTe and Sb 2 Te 3 for optical storage. The short crystallization time, cyclability and high optical contrast made these materials better options than Te 81 Ge 15 Sb 2 S 2 , which has a slow transition time. [ 8 ]
The high contrast in resistivity between the amorphous and crystalline states and the ability to reverse the transition repeatedly make GeTe a good candidate for RF switching. RF switches require a thin layer of GeTe film to be deposited on the surface of the substrate. The seed layer structure, precursor composition, deposition temperature, pressure, gas flow rates, precursor bubbling temperatures and the substrate all play a role in the film properties. [ 8 ] | https://en.wikipedia.org/wiki/GeTe
geWorkbench [ 2 ] (genomics Workbench) is an open-source software platform for integrated genomic data analysis. It is a desktop application written in the programming language Java . geWorkbench uses a component architecture. As of 2016 [update] , there are more than 70 plug-ins [ 3 ] available, providing for the visualization and analysis of gene expression , sequence, and structure data.
geWorkbench is the Bioinformatics platform of MAGNet, [ 4 ] the National Center for the Multi-scale Analysis of Genomic and Cellular Networks, one of the 8 National Centers for Biomedical Computing [ 5 ] funded through the NIH Roadmap ( NIH Common Fund [ 6 ] ). Many systems and structure biology tools developed by MAGNet investigators are available as geWorkbench plugins.
Demonstrations of each feature described can be found at GeWorkbench-web Tutorials. [ 7 ] | https://en.wikipedia.org/wiki/GeWorkbench |
A gear [ 1 ] [ 2 ] or gearwheel [ 3 ] [ 4 ] [ 5 ] is a rotating machine part typically used to transmit rotational motion and/or torque by means of a series of teeth that engage with compatible teeth of another gear or other part. The teeth can be integral saliences or cavities machined on the part, or separate pegs inserted into it. In the latter case, the gear is usually called a cogwheel . A cog may be one of those pegs [ 6 ] [ 7 ] [ 8 ] or the whole gear. [ 9 ] [ 6 ] [ 8 ] Two or more meshing gears are called a gear train .
The smaller member of a pair of meshing gears is often called pinion . Most commonly, gears and gear trains can be used to trade torque for rotational speed between two axles or other rotating parts and/or to change the axis of rotation and/or to invert the sense of rotation. A gear may also be used to transmit linear force and/or linear motion to a rack , a straight bar with a row of compatible teeth.
Gears are among the most common mechanical parts. They come in a great variety of shapes and materials, and are used for many different functions and applications. Diameters may range from a few μm in micromachines , [ 10 ] to a few mm in watches and toys to over 10 metres in some mining equipment. [ 11 ] Other types of parts that are somewhat similar in shape and function to gears include the sprocket , which is meant to engage with a link chain instead of another gear, and the timing pulley , meant to engage a timing belt . Most gears are round and have equal teeth, designed to operate as smoothly as possible; but there are several applications for non-circular gears , and the Geneva drive has an extremely uneven operation, by design.
Gears can be seen as instances of the basic lever "machine". [ 12 ] When a small gear drives a larger one, the mechanical advantage of this ideal lever causes the torque T to increase but the rotational speed ω to decrease. The opposite effect is obtained when a large gear drives a small one. The changes are proportional to the gear ratio r , the ratio of the tooth counts: namely, T 2 / T 1 = r = N 2 / N 1 , and ω 2 / ω 1 = 1 / r = N 1 / N 2 . Depending on the geometry of the pair, the sense of rotation may also be inverted (from clockwise to anti-clockwise, or vice versa).
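A minimal sketch of these idealized relations in Python; the tooth counts and input values below are arbitrary examples.

```python
# Ideal gear-pair relations: T2/T1 = r = N2/N1 and w2/w1 = 1/r = N1/N2.

def output_torque(input_torque, driver_teeth, driven_teeth):
    """Torque on the driven gear of an ideal (lossless) pair."""
    return input_torque * driven_teeth / driver_teeth

def output_speed(input_speed, driver_teeth, driven_teeth):
    """Rotational speed of the driven gear of an ideal pair."""
    return input_speed * driver_teeth / driven_teeth

if __name__ == "__main__":
    # A 20-tooth pinion driving an 80-tooth gear: ratio r = 4.
    print(output_torque(10.0, 20, 80))    # 40.0  -> torque multiplied by 4
    print(output_speed(3000.0, 20, 80))   # 750.0 -> speed divided by 4
```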
Most vehicles have a transmission or "gearbox" containing a set of gears that can be meshed in multiple configurations. The gearbox lets the operator vary the torque that is applied to the wheels without changing the engine's speed. Gearboxes are used also in many other machines, such as lathes and conveyor belts . In all those cases, terms like "first gear", "high gear", and "reverse gear" refer to the overall torque ratios of different meshing configurations, rather than to specific physical gears. These terms may be applied even when the vehicle does not actually contain gears, as in a continuously variable transmission . [ 13 ]
The oldest functioning gears by far are not man made, but are seen in the hind legs of the nymphs of the planthopper insect Issus coleoptratus .
The earliest man-made gears that have not been lost or destroyed date to 4th century BC China [ 14 ] (Zhan Guo times – Late East Zhou dynasty ), which have been preserved at the Luoyang Museum of Henan Province, China .
In Europe, Aristotle mentions gears around 330 BC, as wheel drives in windlasses . He observed that the direction of rotation is reversed when one gear wheel drives another gear wheel. Philon of Byzantium was one of the first to use gears in water-raising devices. [ 15 ] Gears appear in works connected to Hero of Alexandria , in Roman Egypt circa AD 50, [ 16 ] but can be traced back to the mechanics of the Library of Alexandria in 3rd-century BC Ptolemaic Egypt , and were greatly developed by the Greek polymath Archimedes (287–212 BC). [ 17 ] The earliest surviving gears in Europe were found in the Antikythera mechanism , an example of a very early and intricate geared device, designed to calculate astronomical positions of the sun, moon, and planets, and to predict eclipses . Its time of construction is now estimated between 150 and 100 BC. [ 18 ] [ 19 ] [ 20 ]
The Chinese engineer Ma Jun ( c. 200 –265) described a south-pointing chariot . A set of differential gears connected to the wheels and to a pointer on top of the chariot kept the direction of the latter unchanged as the chariot turned. [ 21 ]
Another early surviving example of a geared mechanism is a complex calendrical device, showing the phase of the Moon, the day of the month and the places of the Sun and the Moon in the Zodiac, which was invented in the Byzantine empire in the early 6th century. [ 22 ] [ 23 ]
Geared mechanical water clocks were built in China by 725. [ citation needed ]
Around 1221, a geared astrolabe was built in Isfahan showing the position of the moon in the zodiac and its phase , and the number of days since new moon. [ 24 ]
The worm gear was invented in the Indian subcontinent , for use in roller cotton gins , some time during the 13th–14th centuries. [ 25 ]
A complex astronomical clock, called the Astrarium , was built between 1348 and 1364 by Giovanni Dondi dell'Orologio . It had seven faces and 107 moving parts; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days. [ 26 ] The Salisbury Cathedral clock , built in 1386, is the world's oldest still-working geared mechanical clock.
Differential gears were used by the British clock maker Joseph Williamson in 1720. [ citation needed ]
The word gear is probably from Old Norse gørvi (plural gørvar ) 'apparel, gear,' related to gøra , gørva 'to make, construct, build; set in order, prepare,' a common verb in Old Norse, "used in a wide range of situations from writing a book to dressing meat". In this context, the meaning of 'toothed wheel in machinery' first attested 1520s; specific mechanical sense of 'parts by which a motor communicates motion' is from 1814; specifically of a vehicle (bicycle, automobile, etc.) by 1888. [ 27 ]
A cog is a tooth on a wheel. From Middle English cogge, from Old Norse (compare Norwegian kugg ('cog'), Swedish kugg , kugge ('cog, tooth')), from Proto-Germanic * kuggō (compare Dutch kogge (' cogboat '), German Kock ), from Proto-Indo-European * gugā ('hump, ball') (compare Lithuanian gugà ('pommel, hump, hill'), from PIE * gēw- ('to bend, arch'). [ 28 ] First used c. 1300 in the sense of 'a wheel having teeth or cogs; late 14c., 'tooth on a wheel'; cog-wheel, early 15c. [ 29 ]
The gears of the Antikythera mechanism are made of bronze , and the earliest surviving Chinese gears are made of iron. These metals, as well as tin , have been generally used for clocks and similar mechanisms to this day.
Historically, large gears, such as those used in flour mills , were commonly made of wood rather than metal. They were cogwheels, made by inserting a series of wooden pegs or cogs around the rim of a wheel. The cogs were often made of maple wood.
Wooden gears have been gradually replaced by ones made of metal, such as cast iron at first, then steel and aluminum . Steel is most commonly used because of its high strength-to-weight ratio and low cost. Aluminum is not as strong as steel for the same geometry, but is lighter and easier to machine. Powder metallurgy may be used with alloys that cannot be easily cast or machined.
Still, because of cost or other considerations, some early metal gears had wooden cogs, each tooth forming a type of specialised 'through' mortise and tenon joint. [ 30 ]
More recently engineering plastics and composite materials have been replacing metals in many applications, especially those with moderate speed and torque. They are not as strong as steel, but are cheaper, can be mass-manufactured by injection molding , [ 31 ] and don't need lubrication. Plastic gears can even be intentionally designed to be the weakest part in a mechanism, so that in case of jamming they will fail first and thus avoid damage to more expensive parts. Such "sacrificial" gears may be a simpler alternative to other overload-protection devices such as clutches and torque-limited or current-limited motors.
In spite of the advantages of metal and plastic, wood continued to be used for large gears until a couple of centuries ago, because of cost, weight, tradition, or other considerations. In 1967 the Thompson Manufacturing Company of Lancaster, New Hampshire still had a very active business in supplying tens of thousands of maple gear teeth per year, mostly for use in paper mills and grist mills , some dating back over 100 years. [ 32 ]
The most common techniques for gear manufacturing are dies , sand , and investment casting ; injection molding ; powder metallurgy ; blanking ; and gear cutting .
As of 2014, an estimated 80% of all gearing produced worldwide is produced by net shape molding. Molded gearing is usually powder metallurgy, plastic injection, or metal die casting. [ 33 ] Gears produced by powder metallurgy often require a sintering step after they are removed from the mold. Cast gears require gear cutting or other machining to shape the teeth to the necessary precision. The most common form of gear cutting is hobbing , but gear shaping , milling , and broaching may be used instead.
In metal gears intended for heavy-duty operation, such as in the transmissions of cars and trucks, the teeth are heat treated to make them hard and more wear resistant while leaving the core soft but tough . For large gears that are prone to warp, a quench press is used.
Gears can be made by 3D printing ; however, this alternative is typically used only for prototypes or very limited production quantities, because of its high cost, low accuracy, and relatively low strength of the resulting part.
Besides gear trains, other alternative methods of transmitting torque between non-coaxial parts include link chains driven by sprockets, friction drives , belts and pulleys , hydraulic couplings , and timing belts .
One major advantage of gears is that their rigid body and the snug interlocking of the teeth ensure precise tracking of the rotation across the gear train, limited only by backlash and other mechanical defects. For this reason they are favored in precision applications such as watches. Gear trains also can have fewer separate parts (only two) and have minimal power loss, minimal wear, and long life. Gears are also often the most efficient and compact way of transmitting torque between two non-parallel axes.
On the other hand, gears are more expensive to manufacture, may require periodic lubrication, and may have greater mass and rotational inertia than the equivalent pulleys. More importantly, the distance between the axes of matched gears is limited and cannot be changed once they are manufactured. There are also applications where slippage under overload or transients (as occurs with belts, hydraulics, and friction wheels) is not only acceptable but desirable.
For basic analysis purposes, each gear can be idealized as a perfectly rigid body that, in normal operation, turns around a rotation axis that is fixed in space, without sliding along it. Thus, each point of the gear can move only along a circle that is perpendicular to its axis and centered on it. At any moment t , all points of the gear will be rotating around that axis with the same angular speed ω ( t ), in the same sense. The speed need not be constant over time.
The action surface of the gear consists of all points of its surface that, in normal operation, may contact the matching gear with positive pressure . All other parts of the surface are irrelevant (except that they cannot be crossed by any part of the matching gear). In a gear with N teeth, the working surface has N -fold rotational symmetry about the axis, meaning that it is congruent with itself when the gear rotates by 1 / N of a turn.
If the gear is meant to transmit or receive torque with a definite sense only (clockwise or counterclockwise with respect to some reference viewpoint), the action surface consists of N separate patches, the tooth faces ; which have the same shape and are positioned in the same way relative to the axis, spaced 1 / N turn apart.
If the torque on each gear may have both senses, the action surface will have two sets of N tooth faces; each set will be effective only while the torque has one specific sense, and the two sets can be analyzed independently of the other. However, in this case the gear usually has also "flip over" symmetry, so that the two sets of tooth faces are congruent after the gear is flipped. This arrangement ensures that the two gears are firmly locked together, at all times, with no backlash .
During operation, each point p of each tooth face will at some moment contact a tooth face of the matching gear at some point q of one of its tooth faces. At that moment and at those points, the two faces must have the same perpendicular direction but opposite orientation. But since the two gears are rotating around different axes, the points p and q are moving along different circles; therefore, the contact cannot last more than one instant, and p will then either slide across the other face, or stop contacting it altogether.
On the other hand, at any given moment there is at least one such pair of contact points; usually more than one, even a whole line or surface of contact.
Actual gears deviate from this model in many ways: they are not perfectly rigid, their mounting does not ensure that the rotation axis will be perfectly fixed in space, the teeth may have slightly different shapes and spacing, the tooth faces are not perfectly smooth, and so on. Yet, these deviations from the ideal model can be ignored for a basic analysis of the operation of a gear set.
One criterion for classifying gears is the relative position and direction of the axes of rotation of the gears that are to be meshed together.
In the most common configuration, the axes of rotation of the two gears are parallel, and usually their sizes are such that they contact near a point between the two axes. In this configuration, the two gears turn in opposite senses.
Occasionally the axes are parallel but one gear is nested inside the other. In this configuration, both gears turn in the same sense.
If the two gears are cut by an imaginary plane perpendicular to the axes, each section of one gear will interact only with the corresponding section of the other gear. Thus the three-dimensional gear train can be understood as a stack of gears that are flat and infinitesimally thin — that is, essentially two-dimensional.
In a crossed arrangement, the axes of rotation of the two gears are not parallel but cross at an arbitrary angle except zero or 180 degrees.
For best operation, each wheel then must be a bevel gear , whose overall shape is like a slice ( frustum ) of a cone whose apex is the meeting point of the two axes.
Bevel gears with equal numbers of teeth and shaft axes at 90 degrees are called miter (US) or mitre (UK) gears.
Independently of the angle between the axes, the larger of two unequal matching bevel gears may be internal or external, depending on the desired relative sense of rotation. [ 34 ]
If the two gears are sliced by an imaginary sphere whose center is the point where the two axes cross, each section will remain on the surface of that sphere as the gear rotates, and the section of one gear will interact only with the corresponding section of the other gear. In this way, a pair of meshed 3D gears can be understood as a stack of nested infinitely thin cup-like gears.
The gears in a matching pair are said to be skew if their axes of rotation are skew lines , that is, neither parallel nor intersecting.
In this case, the best shape for each pitch surface is neither cylindrical nor conical but a portion of a hyperboloid of revolution. [ 35 ] [ 36 ] Such gears are called hypoid for short. Hypoid gears are most commonly found with shafts at 90 degrees.
Contact between hypoid gear teeth may be even smoother and more gradual than with spiral bevel gear teeth, but there is also a sliding action along the meshing teeth as they rotate, so hypoid gears usually require some of the most viscous types of gear oil to avoid it being extruded from the mating tooth faces; the oil is normally designated HP (for hypoid) followed by a number denoting the viscosity. Also, the pinion can be designed with fewer teeth than a spiral bevel pinion, with the result that gear ratios of 60:1 and higher are feasible using a single set of hypoid gears. [ 37 ] This style of gear is most common in motor vehicle drive trains, in concert with a differential . Whereas a regular (non-hypoid) ring-and-pinion gear set is suitable for many applications, it is not ideal for vehicle drive trains because it generates more noise and vibration than a hypoid does. Bringing hypoid gears to market for mass-production applications was an engineering improvement of the 1920s.
A gear is said to be external if its teeth are directed generally away from the rotation axis, and internal otherwise. [ 34 ] In a pair of matching wheels, only one of them (the larger one) may be internal.
A crown gear or contrate gear is one whose teeth project at right angles to the plane of the wheel. A crown gear is also sometimes meshed with an escapement , such as is found in mechanical clocks.
Gear teeth typically extend across the whole thickness of the gear. Another criterion for classifying gears is the general direction of the teeth across that dimension. This attribute is affected by the relative position and direction of the axes of rotation of the gears that are to be meshed together.
In a cylindrical spur gear or straight-cut gear , the tooth faces are straight along the direction parallel to the axis of rotation. Any imaginary cylinder with the same axis will cut the teeth along parallel straight lines.
The teeth can be either internal or external. Two spur gears mesh together correctly only if fitted to parallel shafts. [ 38 ] No axial thrust is created by the tooth loads. Spur gears are excellent at moderate speeds but tend to be noisy at high speeds. [ 39 ]
For arrangements with crossed non-parallel axes, the faces in a straight-cut gear are parts of a general conical surface whose generating lines ( generatrices ) go through the meeting point of the two axes, resulting in a bevel gear . Such gears are generally used only at speeds below 5 m/s (980 ft/min), or, for small gears, 1000 rpm . [ 40 ]
In a helical or dry fixed gear the tooth walls are not parallel to the axis of rotation, but are set at an angle. An imaginary pitch surface (cylinder, cone, or hyperboloid, depending on the relative axis positions) intersects each tooth face along an arc of a helix . Helical gears can be meshed in either parallel or crossed orientations. The former refers to when the shafts are parallel to each other; this is the most common orientation. In the latter, the shafts are non-parallel, and in this configuration the gears are sometimes known as "skew gears".
The angled teeth engage more gradually than do spur gear teeth, causing them to run more smoothly and quietly. [ 41 ] With parallel helical gears, each pair of teeth first make contact at a single point at one side of the gear wheel; a moving curve of contact then grows gradually across the tooth face to a maximum, then recedes until the teeth break contact at a single point on the opposite side. In spur gears, teeth suddenly meet at a line contact across their entire width, causing stress and noise. Spur gears make a characteristic whine at high speeds. For this reason spur gears are used in low-speed applications and in situations where noise control is not a problem, and helical gears are used in high-speed applications, large power transmission, or where noise abatement is important. [ 42 ] The speed is considered high when the pitch line velocity exceeds 25 m/s. [ 43 ]
A disadvantage of helical gears is a resultant thrust along the axis of the gear, which must be accommodated by appropriate thrust bearings . However, this issue can be circumvented by using a herringbone gear or double helical gear , which has no net axial thrust and also provides self-aligning of the gears; the opposing helices cancel the axial thrust that a comparable single helical gear would produce.
A second disadvantage of helical gears is a greater degree of sliding friction between the meshing teeth, often addressed with additives in the lubricant.
For a "crossed" or "skew" configuration, the gears must have the same pressure angle and normal pitch; however, the helix angle and handedness can be different. The relationship between the two shafts is actually defined by the helix angle(s) of the two shafts and the handedness, as defined: [ 44 ]
where β is the helix angle for the gear. The crossed configuration is less mechanically sound because there is only a point contact between the gears, whereas in the parallel configuration there is a line contact. [ 44 ]
Quite commonly, helical gears are used with the helix angle of one having the negative of the helix angle of the other; such a pair might also be referred to as having a right-handed helix and a left-handed helix of equal angles. The two equal but opposite angles add to zero: the angle between shafts is zero—that is, the shafts are parallel . Where the sum or the difference (as described in the equations above) is not zero, the shafts are crossed . For shafts crossed at right angles, the helix angles are of the same hand because they must add to 90 degrees. (This is the case with the gears in the illustration above: they mesh correctly in the crossed configuration: for the parallel configuration, one of the helix angles should be reversed. The gears illustrated cannot mesh with the shafts parallel.)
Double helical gears overcome the problem of axial thrust presented by single helical gears by using a double set of teeth, slanted in opposite directions. A double helical gear can be thought of as two mirrored helical gears mounted closely together on a common axle. This arrangement cancels out the net axial thrust, since each half of the gear thrusts in the opposite direction, resulting in a net axial force of zero. This arrangement can also remove the need for thrust bearings. However, double helical gears are more difficult to manufacture due to their more complicated shape.
Herringbone gears are a special type of helical gears. They do not have a groove in the middle like some other double helical gears do; the two mirrored helical gears are joined so that their teeth form a V shape. This can also be applied to bevel gears , as in the final drive of the Citroën Type A . Another type of double helical gear is a Wüst gear.
For both possible rotational directions, there exist two possible arrangements for the oppositely-oriented helical gears or gear faces. One arrangement is called stable, and the other unstable. In a stable arrangement, the helical gear faces are oriented so that each axial force is directed toward the center of the gear. In an unstable arrangement, both axial forces are directed away from the center of the gear. In either arrangement, the total (or net ) axial force on each gear is zero when the gears are aligned correctly. If the gears become misaligned in the axial direction, the unstable arrangement generates a net force that may lead to disassembly of the gear train, while the stable arrangement generates a net corrective force. If the direction of rotation is reversed, the direction of the axial thrusts is also reversed, so a stable configuration becomes unstable, and vice versa.
Stable double helical gears can be directly interchanged with spur gears without any need for different bearings.
Worms resemble screws . A worm is meshed with a worm wheel , which looks similar to a spur gear .
Worm-and-gear sets are a simple and compact way to achieve a high torque, low speed gear ratio. For example, helical gears are normally limited to gear ratios of less than 10:1 while worm-and-gear sets vary from 10:1 to 500:1. [ 45 ] A disadvantage is the potential for considerable sliding action, leading to low efficiency. [ 46 ]
A worm gear is a species of helical gear, but its helix angle is usually somewhat large (close to 90 degrees) and its body is usually fairly long in the axial direction. These attributes give it screw-like qualities. The distinction between a worm and a helical gear is that at least one tooth persists for a full rotation around the helix. If this occurs, it is a 'worm'; if not, it is a 'helical gear'. A worm may have as few as one tooth. If that tooth persists for several turns around the helix, the worm appears, superficially, to have more than one tooth, but what one in fact sees is the same tooth reappearing at intervals along the length of the worm. The usual screw nomenclature applies: a one-toothed worm is called single thread or single start ; a worm with more than one tooth is called multiple thread or multiple start . The helix angle of a worm is not usually specified. Instead, the lead angle, which is equal to 90 degrees minus the helix angle, is given.
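A small illustrative sketch of these relations follows; the standard rule that the drive ratio equals the wheel tooth count divided by the number of worm starts is assumed, and the numbers are arbitrary examples.

```python
# Worm-drive relations: ratio = wheel teeth / worm starts,
# lead angle = 90 degrees - helix angle.

def worm_gear_ratio(wheel_teeth, worm_starts):
    return wheel_teeth / worm_starts

def lead_angle_deg(helix_angle_deg):
    return 90.0 - helix_angle_deg

if __name__ == "__main__":
    print(worm_gear_ratio(40, 1))   # 40:1 with a single-start worm
    print(worm_gear_ratio(40, 2))   # 20:1 with a two-start (multiple-start) worm
    print(lead_angle_deg(85.0))     # 5.0 degree lead angle for an 85 degree helix angle
```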
In a worm-and-gear set, the worm can always drive the gear. However, if the gear attempts to drive the worm, it may or may not succeed. Particularly if the lead angle is small, the gear's teeth may simply lock against the worm's teeth, because the force component circumferential to the worm is not sufficient to overcome friction. In traditional music boxes , however, the gear drives the worm, which has a large helix angle. This mesh drives the speed-limiter vanes which are mounted on the worm shaft.
Worm-and-gear sets that do lock are called self-locking , which can be used to advantage, as when it is desired to set the position of a mechanism by turning the worm and then have the mechanism hold that position. An example is the machine head found on some types of stringed instruments .
If the gear in a worm-and-gear set is an ordinary helical gear only a single point of contact is achieved. [ 37 ] [ 47 ] If medium to high power transmission is desired, the tooth shape of the gear is modified to achieve more intimate contact by making both gears partially envelop each other. This is done by making both concave and joining them at a saddle point ; this is called a cone-drive [ 48 ] or "Double enveloping".
Worm gears can be right or left-handed, following the long-established practice for screw threads. [ 34 ]
Another criterion to classify gears is the tooth profile , the shape of the cross-section of a tooth face by an imaginary cut perpendicular to the pitch surface, such as the transverse, normal, or axial plane.
The tooth profile is crucial for the smoothness and uniformity of the movement of matching gears, as well as for the friction and wear.
The teeth of antique or artisanal gears that were cut by hand from sheet material, like those in the Antikythera mechanism, generally had simple profiles, such as triangles. [ 49 ] The teeth of larger gears, such as those used in windmills, were usually pegs with simple shapes like cylinders, parallelepipeds , or triangular prisms inserted into a smooth wooden or metal wheel; or were holes with equally simple shapes cut into such a wheel.
Because of their sub-optimal profile, the effective gear ratio of such artisanal matching gears was not constant, but fluctuated over each tooth cycle, resulting in vibrations, noise, and accelerated wear.
A cage gear , also called a lantern gear or lantern pinion , is one of those artisanal gears having cylindrical rods for teeth, parallel to the axle and arranged in a circle around it, much as the bars on a round bird cage or lantern. The assembly is held together by disks at each end, into which the tooth rods and axle are set. Cage gears are more efficient than solid pinions, [ citation needed ] and dirt can fall through the rods rather than becoming trapped and increasing wear. They can be constructed with very simple tools as the teeth are not formed by cutting or milling, but rather by drilling holes and inserting rods.
Sometimes used in clocks, a cage gear should always be driven by a gearwheel, not used as the driver. The cage gear was not initially favoured by conservative clock makers. It became popular in turret clocks where dirty working conditions were most commonplace. Domestic American clock movements often used them. [ citation needed ]
In most modern gears, the tooth profile is usually not straight or circular, but of special form designed to achieve a constant angular velocity ratio.
There is an infinite variety of tooth profiles that will achieve this goal. In fact, given a fairly arbitrary [ clarification needed ] tooth shape, it is possible to develop a tooth profile for the mating gear that will do it.
However, two constant velocity tooth profiles are the most commonly used in modern times for gears with parallel or crossed axes, based on the cycloid and involute curves.
Cycloidal gears were more common until the late 1800s. Since then, the involute has largely superseded it, particularly in drive train applications. The cycloid is in some ways the more interesting and flexible shape; however the involute has two advantages: it is easier to manufacture, and it permits the center-to-center spacing of the gears to vary over some range without ruining the constancy of the velocity ratio. Cycloidal gears only work properly if the center spacing is exactly right. Cycloidal gears are still commonly used in mechanical clocks.
For non-parallel axes with non-straight tooth cuts, the best tooth profile is one of several spiral bevel gear shapes. These include Gleason types (circular arc with non-constant tooth depth), Oerlikon and Curvex types (circular arc with constant tooth depth), Klingelnberg Cyclo-Palloid (Epicycloid with constant tooth depth) or Klingelnberg Palloid. [ 40 ]
The tooth faces in these gear types are not involute cylinders or cones but patches of octoidal surfaces . [ 50 ] Manufacturing such tooth faces may require a 5-axis milling machine .
Spiral bevel gears have the same advantages and disadvantages relative to their straight-cut cousins as helical gears do to spur gears, such as lower noise and vibration. [ 40 ] Simplified calculated bevel gears on the basis of an equivalent cylindrical gear in normal section with an involute tooth form show a deviant tooth form with reduced tooth strength by 10-28% without offset and 45% with offset. [ 51 ]
A rack is a toothed bar or rod that can be thought of as a sector gear with an infinitely large radius of curvature . Torque can be converted to linear force by meshing a rack with a round gear called a pinion : the pinion turns, while the rack moves in a straight line. Such a mechanism is used in the steering of automobiles to convert the rotation of the steering wheel into the left-to-right motion of the tie rod (s) that are attached to the front wheels.
Racks also feature in the theory of gear geometry, where, for instance, the tooth shape of an interchangeable set of gears may be specified for the rack (infinite radius), and the tooth shapes for gears of particular actual radii are then derived from that. The rack and pinion gear type is also used in a rack railway .
In epicyclic gearing, one or more of the gear axes moves. Examples are sun and planet gearing (see below), cycloidal drive , automatic transmissions , and mechanical differentials .
Sun and planet gearing is a method of converting reciprocating motion into rotary motion that was used in steam engines . James Watt used it on his early steam engines to get around the patent on the crank , but it also provided the advantage of increasing the flywheel speed so Watt could use a lighter flywheel.
In the illustration, the sun is yellow, the planet red, the reciprocating arm is blue, the flywheel is green and the driveshaft is gray.
Non-circular gears are designed for special purposes. While a regular gear is optimized to transmit torque to another engaged member with minimum noise and wear and maximum efficiency , a non-circular gear's main objective might be ratio variations, axle displacement oscillations and more. Common applications include textile machines, potentiometers and continuously variable transmissions .
Most gears are ideally rigid bodies which transmit torque and movement through the lever principle and contact forces between the teeth. Namely, the torque applied to one gear causes it to rotate as a rigid body, so that its teeth push against those of the matched gear, which in turn rotates as a rigid body and transmits the torque to its axle. Some specialized gears escape this pattern, however.
A harmonic gear or strain wave gear is a specialized gearing mechanism often used in industrial motion control , robotics and aerospace for its advantages over traditional gearing systems, including lack of backlash, compactness and high gear ratios.
Though the diagram does not demonstrate the correct configuration, it is a "timing gear," conventionally with far more teeth than a traditional gear to ensure a higher degree of precision.
In a magnetic gear pair there is no contact between the two members; the torque is instead transmitted through magnetic fields. The cogs of each gear are permanent magnets with periodic alternation of opposite magnetic poles on the mating surfaces. Gear components are mounted with a backlash capability similar to other mechanical gearings. Although they cannot exert as much force as a traditional gear due to limits on magnetic field strength, such gears work without touching and so are immune to wear, have very low noise, have minimal power losses from friction, and can slip without damage, making them very reliable. [ 52 ] They can be used in configurations that are not possible for gears that must be physically touching, and can operate with a non-metallic barrier completely separating the driving force from the load. The magnetic coupling can transmit force into a hermetically sealed enclosure without using a radial shaft seal , which may leak. Magnetic gears are also used in brushless motors along with electromagnets to make the motor spin.
In gear nomenclature, several helix parameters can be given in either the normal or the transverse plane; the subscript n usually indicates the normal plane. In worm gearing, the subscript w denotes the worm and the subscript g denotes the gear.
Pitch is the distance between a point on one tooth and the corresponding point on an adjacent tooth. [ 34 ] It is a dimension measured along a line or curve in the transverse, normal, or axial directions. The use of the single word pitch without qualification may be ambiguous, and for this reason it is preferable to use specific designations such as transverse circular pitch, normal base pitch, axial pitch.
Backlash is the error in motion that occurs when gears change direction. It exists because there is always some gap between the trailing face of the driving tooth and the leading face of the tooth behind it on the driven gear, and that gap must be closed before force can be transferred in the new direction. The term "backlash" can also be used to refer to the size of the gap, not just the phenomenon it causes; thus, one could speak of a pair of gears as having, for example, "0.1 mm of backlash." A pair of gears could be designed to have zero backlash, but this would presuppose perfection in manufacturing, uniform thermal expansion characteristics throughout the system, and no lubricant. Therefore, gear pairs are designed to have some backlash. It is usually provided by reducing the tooth thickness of each gear by half the desired gap distance. In the case of a large gear and a small pinion, however, the backlash is usually taken entirely off the gear and the pinion is given full sized teeth. Backlash can also be provided by moving the gears further apart. The backlash of a gear train equals the sum of the backlash of each pair of gears, so in long trains backlash can become a problem.
For situations that require precision, such as instrumentation and control, backlash can be minimized through one of several techniques. For instance, the gear can be split along a plane perpendicular to the axis, one half fixed to the shaft in the usual manner, the other half placed alongside it, free to rotate about the shaft, but with springs between the two-halves providing relative torque between them, so that one achieves, in effect, a single gear with expanding teeth. Another method involves tapering the teeth in the axial direction and letting the gear slide in the axial direction to take up slack.
Although gears can be made with any pitch, for convenience and interchangeability standard pitches are frequently used. Pitch is a property associated with linear dimensions and so differs whether the standard values are in the imperial (inch) or metric systems. Using inch measurements, standard diametral pitch values with units of "per inch" are chosen; the diametral pitch is the number of teeth on a gear of one inch pitch diameter. Common standard values for spur gears are 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 32, 48, 64, 72, 80, 96, 100, 120, and 200. [ 55 ] Certain standard pitches such as 1 ⁄ 10 and 1 ⁄ 20 in inch measurements, which mesh with linear rack, are actually (linear) circular pitch values with units of "inches". [ 55 ]
When gear dimensions are in the metric system the pitch specification is generally in terms of module or modulus , which is effectively a length measurement across the pitch diameter . The term module is understood to mean the pitch diameter in millimetres divided by the number of teeth. When the module is based upon inch measurements, it is known as the English module to avoid confusion with the metric module. Module is a direct dimension ("millimeters per tooth"), unlike diametral pitch, which is an inverse dimension ("teeth per inch"). Thus, if the pitch diameter of a gear is 40 mm and the number of teeth 20, the module is 2, which means that there are 2 mm of pitch diameter for each tooth. [ 56 ] The preferred standard module values are 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0, 1.25, 1.5, 2.0, 2.5, 3, 4, 5, 6, 8, 10, 12, 16, 20, 25, 32, 40 and 50. [ 57 ]
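A short sketch of these pitch relations; the metric values reuse the 40 mm / 20-tooth example above, while the inch-system numbers are arbitrary.

```python
# Module m (mm per tooth) = pitch diameter / tooth count.
# Diametral pitch P (teeth per inch) = tooth count / pitch diameter in inches.

def module_mm(pitch_diameter_mm, teeth):
    return pitch_diameter_mm / teeth

def diametral_pitch(teeth, pitch_diameter_in):
    return teeth / pitch_diameter_in

if __name__ == "__main__":
    print(module_mm(40.0, 20))          # 2.0 mm module (the example above)
    print(diametral_pitch(20, 2.5))     # 8 teeth per inch of pitch diameter
    print(25.4 / module_mm(40.0, 20))   # 12.7 -> the two systems relate by P = 25.4 / m
```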
Modern physics adopted the gear model in different ways. In the nineteenth century, James Clerk Maxwell developed a model of electromagnetism in which magnetic field lines were rotating tubes of incompressible fluid. Maxwell used a gear wheel and called it an "idle wheel" to explain the electric current as a rotation of particles in opposite directions to that of the rotating field lines. [ 58 ]
More recently, quantum physics uses "quantum gears" in its models. A group of gears can serve as a model for several different systems, such as an artificially constructed nanomechanical device or a group of ring molecules. [ 59 ]
The three wave hypothesis compares the wave–particle duality to a bevel gear. [ 60 ]
The gear mechanism was previously considered exclusively artificial, but as early as 1957 gears had been recognized in the hind legs of various species of planthoppers , [ 61 ] and scientists from the University of Cambridge characterized their functional significance in 2013 using high-speed photography of the nymphs of Issus coleoptratus . [ 62 ] [ 63 ] These gears are found only in the nymph forms of all planthoppers, and are lost during the final molt to the adult stage. [ 64 ] In I. coleoptratus , each leg has a 400-micrometer strip of teeth, with a pitch radius of 200 micrometers and 10 to 12 fully interlocking spur-type gear teeth, including filleted curves at the base of each tooth to reduce the risk of shearing. [ 65 ] The joint rotates like mechanical gears and synchronizes Issus's hind legs to within 30 microseconds when it jumps, preventing yaw rotation. [ 66 ] [ 67 ] [ 62 ] The gears are not connected all the time. One is located on each of the juvenile insect's hind legs, and when it prepares to jump, the two sets of teeth lock together. As a result, the legs move in almost perfect unison, giving the insect more power as the gears rotate to their stopping point and then unlock. [ 66 ]
| https://en.wikipedia.org/wiki/Gear
A gear pump uses the meshing of gears to pump fluid by displacement. [ 1 ] They are one of the most common types of pumps for hydraulic fluid power applications. The gear pump was invented around 1600 by Johannes Kepler . [ 2 ]
Gear pumps are also widely used in chemical installations to pump high- viscosity fluids. There are two main variations: external gear pumps which use two external spur gears, and internal gear pumps which use an external and an internal spur gear (internal spur gear teeth face inwards, see below). Gear pumps provide positive displacement (or fixed displacement ), meaning they pump a constant amount of fluid for each revolution. Some gear pumps are designed to function as either a motor or a pump.
As the gears rotate they separate on the intake side of the pump, creating a void and suction which is filled by fluid . The fluid is carried by the gears to the discharge side of the pump, where the meshing of the gears displaces the fluid. The mechanical clearances are small, on the order of 10 μm. The tight clearances, along with the speed of rotation, effectively prevent the fluid from leaking backwards.
The rigid design of the gears and housings allows for very high pressures and the ability to pump highly viscous fluids.
Many variations exist, including helical and herringbone gear sets (instead of spur gears), lobe shaped rotors similar to Roots blowers (commonly used as superchargers ), and mechanical designs that allow the stacking of pumps. The most common variations are shown below (the drive gear is shown blue and the idler is shown purple ).
External precision gear pumps are usually limited to maximum working pressures of around 210 bars (21,000 kPa) and maximum rotation speeds around 3,000 RPM. Some manufacturers produce gear pumps with higher working pressures and speeds but these types of pumps tend to be noisy and special precautions may have to be made. [ 3 ]
Suction and pressure ports need to interface where the gears mesh (shown as dim gray lines in the internal pump images). Some internal gear pumps have an additional, crescent-shaped seal (shown above, right). This crescent functions to keep the gears separated and also reduces eddy currents.
Pump formulas:
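The relations commonly quoted for gear pumps, given here in generic textbook form rather than as any specific manufacturer's formulas, relate displacement, speed, flow and power:

Q_{th} = V_g \, n \quad \text{(theoretical flow rate, with } V_g \text{ the displacement per revolution and } n \text{ the rotational speed)}

\eta_v = \frac{Q_{actual}}{Q_{th}} \quad \text{(volumetric efficiency)}

P_{hyd} = \Delta p \, Q \quad \text{(hydraulic power, with } \Delta p \text{ the pressure rise across the pump)}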
Gear pumps are generally very efficient, especially in high-pressure applications.
Factors affecting efficiency:
The question of who invented the gear pump is not definitively settled. On the one hand, the invention is traced back to Johannes Kepler in 1604; on the other hand, Gottfried Heinrich Graf zu Pappenheim is mentioned, who is said to have constructed a capsule blower with two rotating axes for pumping air and water. Pappenheim is said to have adopted Kepler's design without mentioning his name. | https://en.wikipedia.org/wiki/Gear_pump
Gearspace is a website and forum dedicated to audio engineering . Gearspace is one of the largest resources for pro audio information, with over 1.6 million monthly visitors from 218 countries. [ 1 ] Originally established in 2002 as Gearslutz , the site rebranded in March 2021.
In 2002, Julian Standen and Meg Lee Chin , both musicians and audio engineers , created the site, which is widely regarded as a top online resource for music production knowledge and discussion. [ 1 ] The site has been described as the "best place … for help with your interface, DAW, signal path, or just about anything else." [ 2 ]
In 2018, the website was ranked by Alexa.com as the 7,360th most popular website in the world. [ 3 ] In 2020, it had over 1.6 million monthly visitors from 218 countries. [ 1 ]
In mid-2017, Music Tribe , the parent company of music equipment manufacturer Behringer , pursued legal action against synthesizer manufacturer Dave Smith Instruments (DSI) and a number of the website's forum participants, including a DSI employee, for defamation over various statements made in forum discussions that alleged that Behringer copies other companies' products and exhibits other questionable business practices. [ 4 ]
On January 6, 2021, a forum user started an online petition at Change.org encouraging the website to change its name from Gearslutz. Site co-founder Standen announced later the same month that the site would be undergoing a name change, stating "the word-play pun in the name has gotten old and it is now time to move forward". [ 5 ]
On March 29, 2021, Standen confirmed that the site would be renamed "Gearspace.com". | https://en.wikipedia.org/wiki/Gearspace |
The feet of geckos have a number of specializations. Their surfaces can adhere to any type of material with the exception of Teflon (PTFE). This phenomenon can be explained with three elements:
Geckos are members of the family Gekkonidae . They are reptiles that inhabit temperate and tropical regions. There are over 1,000 different species of geckos. [ 1 ] They can be a variety of colors. Geckos are omnivorous , feeding on a variety of foods, including insects and worms. [ 2 ] Most gecko species, including the crested gecko ( Correlophus ciliatus ), [ 3 ] can climb walls and other surfaces.
The interactions between the gecko's feet and the climbing surface are stronger than simple surface area effects. On its feet, the gecko has many microscopic hairs, or setae (singular seta), arranged into lamellae that increase the Van der Waals forces (the distance-dependent attraction between atoms or molecules) between its feet and the surface. These setae are fibrous structural proteins that protrude from the epidermis and are made of β-keratin , [ 5 ] much as α-keratin is the basic building block of human skin and fingernails .
The bottom surface of a gecko's foot consists of millions of hairy structures called setae. These setae are about 5 μm across, thinner than a human hair. On every seta there are thousands of tiny structures called spatulae. Geckos create Van der Waals forces by making contact with the surface of materials using their spatulae: more spatulae means more contact area. The spatulae have sharp edges which, when stress is applied at a specific angle, bend and create more contact with the surface, allowing the gecko to climb vertically. More contact with the surface thus creates more Van der Waals force to support the whole body of the animal. One seta can hold weights of up to 20 mg using Van der Waals forces; in total, with the help of millions of setae, a gecko can hold about 300 pounds (140 kg). The β-keratin bristles are approximately 5 μm in diameter. The end of each seta consists of approximately 1,000 spatulae that are shaped like an isosceles triangle . The spatulae are approximately 200 nm on one side and 10–30 nm on the other two sides. [ 6 ] The setae are aligned parallel to each other, but not oriented normal to the toes. When the setae contact another surface, their load is supported by both lateral and vertical components. The lateral load component is limited by the peeling of the spatulae and the vertical load component is limited by shear force .
The following equation can be used to quantitatively characterize the Van der Waals forces, by approximating the interaction as being between two flat surfaces:
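A standard form of this flat-surface approximation, stated per unit area of contact (the per-area qualification is an assumption made here for consistency with the usual parallel-plate result), is:

F = \frac{A_H}{6\pi D^{3}}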
where F is the force of interaction, A H is the Hamaker constant , and D is the distance between the two surfaces. Gecko setae are much more complicated than a flat surface, for each foot has roughly 14,000 setae, each of which has about 1,000 spatulae. These surface interactions help to smooth out the roughness of the wall, which improves the gecko-to-wall surface interaction.
Many factors affect adhesion , including:
Using the combined dipole–dipole interaction potential between molecules A and B:
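In the usual London-dispersion form, which is assumed here, this pair potential is:

W_{AB}(D) = -\frac{C_{AB}}{D^{6}}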
where W AB is the potential energy between the molecules (in joules ), C AB is the combined interaction parameter between the molecules (in J m 6 ), and D is the distance between the molecules [in meters]. The potential energy of one molecule at a perpendicular distance D from the planar surface of an infinitely extending material can then be approximated as:
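A standard way to write this approximation, integrating the pair potential over the half-space occupied by material B, is:

W(D) = -\,C_{AB}\,\rho_{B} \int_{V_{B}} \frac{dV}{D'^{\,6}}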
where D′ is the distance between molecule A and an infinitesimal volume of material B, and ρ B is the molecular density of material B (in molecules/m 3 ). This integral can then be written in cylindrical coordinates with x being the perpendicular distance measured from the surface of B to the infinitesimal volume, and r being the parallel distance:
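In that notation the integral takes the standard form assumed here, and evaluates to:

W(D) = -\,2\pi C_{AB}\,\rho_{B} \int_{D}^{\infty} dx \int_{0}^{\infty} \frac{r\,dr}{(x^{2}+r^{2})^{3}} = -\,\frac{\pi C_{AB}\,\rho_{B}}{6D^{3}}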
The gecko–wall interaction can be analyzed by approximating the gecko spatula as a long cylinder with radius r s . Then the interaction between a single spatula and a surface is:
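Treating the spatula as a cylinder of radius r_s whose flat end faces the surface, a standard form of this interaction is obtained by integrating the molecule–surface potential over the volume of the cylinder:

W(D) = -\,\frac{\pi C_{AB}\,\rho_{B}}{6}\,\rho_{A} \int_{V_{A}} \frac{dV}{D'^{\,3}}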
where D′ is the distance between the surface of B and an infinitesimal volume of material A and ρ A is the molecular density of material A (in molecules/m 3 ). Using cylindrical coordinates once again, we can find the potential between the gecko spatula and the material B then to be:
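Carrying out that integral for a long cylinder gives, in the form usually quoted:

W(D) = -\,\frac{\pi^{2} C_{AB}\,\rho_{A}\rho_{B}\,r_{s}^{2}}{12\,D^{2}} = -\,\frac{A_{H}\,r_{s}^{2}}{12\,D^{2}}, \qquad A_{H} = \pi^{2} C_{AB}\,\rho_{A}\rho_{B}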
where A H is the Hamaker constant for the materials A and B.
The Van der Waals force per spatula, F s , can then be calculated by differentiating with respect to D , and we obtain:
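With the potential above, the magnitude of this force takes the standard form:

F_{s} = \left|\frac{dW}{dD}\right| = \frac{A_{H}\,r_{s}^{2}}{6\,D^{3}}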
We can then rearrange this equation to obtain r s as a function of A H :
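Rearranging the previous expression gives:

r_{s} = \sqrt{\frac{6\,D^{3} F_{s}}{A_{H}}}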
where a typical interatomic distance of 1.7 Å was used for solids in contact, and an F s of 40 μN was used, as per a study by Autumn et al . [ 5 ]
The equation for r s can then be used with calculated Hamaker constants [ 8 ] to determine an approximate seta radius. Hamaker constants through both a vacuum and a monolayer of water were used. For those with a monolayer of water, the distance was doubled to account for the water molecules.
These values are similar to the actual radius of the setae on a gecko's foot (approx. 2.5 μm). [ 5 ] [ 9 ]
Research attempts to simulate the gecko's adhesive attribute. Projects that have explored the subject include: | https://en.wikipedia.org/wiki/Gecko_feet |
Geek.com is a technology news weblog about hardware , mobile computing , technology , movies , TV , video games , comic books , and all manner of geek culture subjects. It was founded in 1996 and was run independently until 2007 when it was sold to Name Media, after which it was sold to Geeknet, and then to its current owner, Ziff Davis .
Geek.com was founded in 1996 by Joel Evans and Rob Hughes. Joel's brother, Sam Evans, was soon added as the site's chief editor. [ 1 ] The site was founded as the Ugeek newsletter but soon became a larger online portal with multiple different sections, including JobGeek, GameGeek, PDAGeek, and ChipGeek. [ 2 ] Among the site's many early successes was Ugeek.com's popular Processor Archive. [ 3 ]
In March 2007 Geek.com was sold to NameMedia, a company that specializes in domain name reselling and parking. NameMedia had recently acquired Philip Greenspun 's photo.net and was building out its Enthusiast Media Network, where Geek.com would be the lead technology site. After the acquisition Rob Hughes and Sam Evans left the site, though co-founder Joel Evans stayed on in his role as Chief Geek. Soon afterwards the site's mobile analyst Matthew "palmsolo" Miller left the site and started writing at ZDnet 's Mobile Gadgeteer blog.
In mid-2007 Geek.com underwent a major redesign, moving away from the platform that it had used since 2001, and did away with the subportals, like PDAgeek.
In August 2007 NameMedia acquired XYZcomputing.com, a computer hardware website, and hired its founder, Sal Cangeloso, to be the site's Senior Editor.
In May 2010 NameMedia sold Geek.com to Geeknet for $1 million. [ 4 ] Cangeloso, who had been promoted to Editor-in-Chief when Joel Evans left at the close of 2009, stayed on board in the same position.
The troubled Geeknet sold Geek.com to Ziff Davis at the beginning of January 2011 [ 5 ] for an undisclosed amount. Once again Cangeloso stayed on, as did longstanding News Editor, Matthew Humphries.
In 2016, Geek.com was significantly retooled under a new staff, consisting of Editor-in-Chief Chris Radtke, Managing Editor Sheilah Villari, and Senior Editor Jordan Minor. Along with a visual redesign, the site expanded its focus to broader geek culture topics like technology, gaming, [ 6 ] movies, TV, and comic books. A new team of freelancers was brought on board to carry out this vision, including YouTube film critic Bob "MovieBob" Chipman.
At the end of 2016, the site hosted a five-hour Facebook "Gifted and Talented Show" made up of sketches and holiday gift suggestions. One notable article, an explanation on the lies surrounding the cartoon Street Sharks , [ 7 ] went viral on sites like Vox , The A.V. Club , and Gawker .
As of December 2023, the site was updated to show a message announcing it's "taking a break at the moment". It also links out to three other Ziff Davis brands as recommended alternatives. | https://en.wikipedia.org/wiki/Geek.com |
The Geiger–Müller tube or G–M tube is the sensing element of the Geiger counter instrument used for the detection of ionizing radiation . It is named after Hans Geiger , who invented the principle in 1908, [ 1 ] and Walther Müller , who collaborated with Geiger in developing the technique further in 1928 to produce a practical tube that could detect a number of different radiation types. [ 2 ] [ 3 ]
It is a gaseous ionization detector and uses the Townsend avalanche phenomenon to produce an easily detectable electronic pulse from as little as a single ionizing event due to a radiation particle. It is used for the detection of gamma radiation, X-rays , and alpha and beta particles. It can also be adapted to detect neutrons . The tube operates in the "Geiger" region of ion pair generation. This is shown on the accompanying plot for gaseous detectors showing ion current against applied voltage.
While it is a robust and inexpensive detector, the G–M tube is unable to measure high radiation rates efficiently, has a finite life in high radiation areas, and cannot measure incident radiation energy , so no spectral information can be generated and there is no discrimination between radiation types, such as between alpha and beta particles. In other words, the Geiger–Müller counter provides no information about the energy or the precise timing of the detected radiation, as all ionizing events produce the same output pulse, and the detector has a relatively long dead time after each event. [ 4 ]
A G-M tube consists of a chamber filled with a gas mixture at a low pressure of about 0.1 atmosphere . The chamber contains two electrodes, between which there is a potential difference of several hundred volts . The walls of the tube are either metal or have their inside surface coated with a conducting material or a spiral wire to form the cathode , while the anode is a wire mounted axially in the center of the chamber.
When ionizing radiation strikes the tube, some molecules of the fill gas are ionized directly by the incident radiation and, if the tube cathode is an electrical conductor such as stainless steel, indirectly by means of secondary electrons produced in the walls of the tube, which migrate into the gas. This creates positively charged ions and free electrons , known as ion pairs , in the gas. The strong electric field created by the voltage across the tube's electrodes accelerates the positive ions towards the cathode and the electrons towards the anode. Close to the anode, in the "avalanche region" where the electric field strength rises in inverse proportion to the radial distance as the anode is approached, free electrons gain sufficient energy to ionize additional gas molecules by collision and create a large number of electron avalanches . These spread along the anode and effectively throughout the avalanche region. This is the "gas multiplication" effect which gives the tube its key characteristic of being able to produce a significant output pulse from a single original ionizing event. [ 6 ]
If there were only one avalanche per original ionizing event, then the number of excited molecules would be on the order of 10 6 to 10 8 . However, the production of multiple avalanches results in an increased multiplication factor which can produce 10 9 to 10 10 ion pairs. [ 6 ] The creation of multiple avalanches is due to the production of UV photons in the original avalanche, which are not affected by the electric field and move laterally to the axis of the anode to instigate further ionizing events by collision with gas molecules. These collisions produce further avalanches, which in turn produce more photons, and thereby more avalanches in a chain reaction which spreads laterally through the fill gas, and envelops the anode wire. The accompanying diagram shows this graphically. The speed of propagation of the avalanches is typically 2–4 cm per microsecond, so that for common sizes of tubes the complete ionization of the gas around the anode takes just a few microseconds. [ 6 ] This short, intense pulse of current can be measured as a count event in the form of a voltage pulse developed across an external electrical resistor. This can be on the order of volts, thus making further electronic processing simple.
The discharge is terminated by the collective effect of the positive ions created by the avalanches. These ions have lower mobility than the free electrons due to their higher mass and move slowly from the vicinity of the anode wire. This creates a "space charge" which counteracts the electric field that is necessary for continued avalanche generation. For a particular tube geometry and operating voltage this termination always occurs when a certain number of avalanches has been created, therefore the pulses from the tube are always of the same magnitude regardless of the energy of the initiating particle. Consequently, there is no radiation energy information in the pulses [ 6 ] which means the Geiger–Müller tube cannot be used to generate spectral information about the incident radiation. In practice the termination of the avalanche is improved by the use of "quenching" techniques (see later).
Pressure of the fill gas is important in the generation of avalanches. Too low a pressure and the efficiency of interaction with incident radiation is reduced. Too high a pressure, and the “mean free path” for collisions between accelerated electrons and the fill gas is too small, and the electrons cannot gather enough energy between each collision to cause ionization of the gas. The energy gained by electrons is proportional to the ratio “e/p”, where “e” is the electric field strength at that point in the gas, and “p” is the gas pressure. [ 6 ]
Broadly, there are two important types of Geiger tube construction.
For alpha particles, low energy beta particles, and low energy X-rays, the usual form is a cylindrical end-window tube . This type has a window at one end covered in a thin material through which low-penetrating radiation can easily pass. Mica is a commonly used material due to its low mass per unit area. The other end houses the electrical connection to the anode.
The pancake tube is a variant of the end window tube, but which is designed for use for beta and gamma contamination monitoring. It has roughly the same sensitivity to particles as the end window type, but has a flat annular shape so the largest window area can be utilized with a minimum of gas space. Like the cylindrical end window tube, mica is a commonly used window material due to its low mass per unit area. The anode is normally multi-wired in concentric circles so it extends fully throughout the gas space.
This general type is distinct from the dedicated end window type, but has two main sub-types, which use different radiation interaction mechanisms to obtain a count.
Used for gamma radiation detection above energies of about 25 keV, this type generally has an overall wall thickness of about 1–2 mm of chrome steel . Because most high energy gamma photons will pass through the low density fill gas without interacting, the tube uses the interaction of photons on the molecules of the wall material to produce high energy secondary electrons within the wall. Some of these electrons are produced close enough to the inner wall of the tube to escape into the fill gas. As soon as this happens the electron drifts to the anode and an electron avalanche occurs as though the free electron had been created within the gas. [ 6 ] The avalanche is a secondary effect of a process that starts within the tube wall with the production of electrons that migrate to the inner surface of the tube wall, and then enter the fill gas. This effect is considerably attenuated at low energies below about 20 keV. [ 5 ]
Thin walled tubes are used for:
G–M tubes will not detect neutrons since these do not ionize the gas. However, neutron-sensitive tubes can be produced which either have the inside of the tube coated with boron , or the tube contains boron trifluoride or helium-3 as the fill gas, or the tube is wrapped in about 0.5 mm ( 1 ⁄ 50 in) thick cadmium foil. [ 7 ] The neutrons interact with the boron nuclei, producing alpha particles, or directly with the helium-3 nuclei producing hydrogen and tritium ions and electrons, or with the cadmium, producing gamma rays. These energetic particles interact and produce ions that then trigger the normal avalanche process.
The components of the gas mixture are vital to the operation and application of a G-M tube. The mixture is composed of an inert gas such as helium , argon or neon which is ionized by incident radiation, and a "quench" gas of 5–10% of an organic vapor or a halogen gas to prevent spurious pulsing by quenching the electron avalanches. [ 6 ] This combination of gases is known as a Penning mixture and makes use of the Penning ionization effect.
The modern halogen-filled G–M tube was invented by Sidney H. Liebson in 1947 and has several advantages over the older tubes with organic mixtures. [ 8 ] The halogen tube discharge takes advantage of a metastable state of the inert gas atom to more-readily ionize a halogen molecule than an organic vapor, enabling the tube to operate at much lower voltages, typically 400–600 volts instead of 900–1200 volts. While halogen-quenched tubes have greater plateau voltage slopes compared to organic-quenched tubes (an undesirable quality), they have a vastly longer life than tubes quenched with organic compounds. This is because an organic vapor is gradually destroyed by the discharge process, giving organic-quenched tubes a useful life of around 10 9 events. However, halogen ions can recombine over time, giving halogen-quenched tubes an effectively unlimited lifetime for most uses, although they will still eventually fail at some point due to other ionization-initiated processes that limit the lifetime of all Geiger tubes. For these reasons, the halogen-quenched tube is now the most common. [ 6 ]
Neon is the most common filler gas. Chlorine is the most common quencher, though bromine is occasionally used as well. Halogens are most commonly used with neon, argon or krypton, organic quenchers with helium. [ 9 ]
An example of a gas mixture, used primarily in proportional detectors, is P10 (90% argon, 10% methane).
Another is used in bromine-quenched tubes, typically 0.1% argon, 1-2% bromine, and the balance of neon.
Halogen quenchers are highly chemically reactive and attack the materials of the electrodes, especially at elevated temperatures, leading to tube performance degradation over time. The cathode materials can be chosen from e.g. chromium, platinum, or nickel-copper alloy, [ 10 ] or coated with colloidal graphite, and suitably passivated. Oxygen plasma treatment can provide a passivation layer on stainless steel. Dense non-porous coating with platinum or a tungsten layer or a tungsten foil liner can provide protection here. [ 11 ]
Pure noble gases exhibit threshold voltages that increase with increasing atomic weight. The addition of polyatomic organic quenchers increases the threshold voltage, due to the dissipation of a large percentage of the collision energy in molecular vibrations. Argon with alcohol vapors was one of the most common fills of early tubes. As little as 1 ppm of impurities (argon, mercury, and krypton in neon) can significantly lower the threshold voltage. Admixture of chlorine or bromine provides quenching and stability to low-voltage neon-argon mixtures, with a wide temperature range. Lower operating voltages lead to longer rise times of pulses, without appreciably changing the dead times.
Spurious pulses are caused mostly by secondary electrons emitted by the cathode due to positive ion bombardment. The resulting spurious pulses have the nature of a relaxation oscillator and show uniform spacing, dependent on the tube fill gas and overvoltage. At high enough overvoltages, but still below the onset of continuous corona discharges, sequences of thousands of pulses can be produced. Such spurious counts can be suppressed by coating the cathode with higher work function materials, chemical passivation, lacquer coating, etc.
The organic quenchers can decompose to smaller molecules (ethyl alcohol and ethyl acetate) or polymerize into solid deposits (typical for methane). Degradation products of organic molecules may or may not have quenching properties. Larger molecules degrade to more quenching products than small ones; tubes quenched with amyl acetate tend to have around ten times the lifetime of ethanol-quenched ones. Tubes quenched with hydrocarbons often fail due to coating of the electrodes with polymerization products before the gas itself can be depleted; a simple gas refill will not help, and washing the electrodes to remove the deposits is necessary. Low ionization efficiency is sometimes deliberately sought; mixtures of low pressure hydrogen or helium with organic quenchers are used in some cosmic ray experiments, to detect heavily ionizing muons and electrons.
Argon, krypton and xenon are used to detect soft x-rays, with increasing absorption of low energy photons with decreasing atomic mass, due to direct ionization by photoelectric effect. Above 60-70 keV the direct ionization of the filler gas becomes insignificant, and secondary photoelectrons, Compton electrons or electron-positron pair production by interaction of the gamma photons with the cathode material become the dominant ionization initiation mechanisms. Tube windows can be eliminated by putting the samples directly inside the tube, or, if gaseous, mixing them with the filler gas. Vacuum-tightness requirement can be eliminated by using continuous flow of gas at atmospheric pressure. [ 12 ]
The Geiger plateau is the voltage range in which the G-M tube operates in its correct mode, where ionization occurs along the length of the anode. If a G–M tube is exposed to a steady radiation source and the applied voltage is increased from zero, it follows the plot of current shown in the "Geiger region" where the gradient flattens; this is the Geiger plateau. [ 6 ]
This is shown in more detail in the accompanying Geiger Plateau Curve diagram. If the tube voltage is progressively increased from zero the efficiency of detection will rise until the most energetic radiation starts to produce pulses which can be detected by the electronics. This is the "starting voltage". Increasing the voltage still further results in rapidly rising counts until the "knee" or threshold of the plateau is reached, where the rate of increase of counts falls off. This is where the tube voltage is sufficient to allow a complete discharge along the anode for each detected radiation count, and the effect of different radiation energies are equal. However, the plateau has a slight slope mainly due to the lower electric fields at the ends of the anode because of tube geometry. As the tube voltage is increased, these fields strengthen to produce avalanches. At the end of the plateau the count rate begins to increase rapidly again, until the onset of continuous discharge where the tube cannot detect radiation, and may be damaged. [ 6 ]
Depending on the characteristics of the specific tube (manufacturer, size, gas type, etc.) the voltage range of the plateau will vary. The slope is usually expressed as percentage change of counts per 100 V. To prevent overall efficiency changes due to variation of tube voltage, a regulated voltage supply is used, and it is normal practice to operate in the middle of the plateau to reduce the effect of any voltage variations. [ 6 ] [ 13 ]
The ideal G–M tube should produce a single pulse for every single ionizing event due to radiation. It should not give spurious pulses, and should recover quickly to the passive state, ready for the next radiation event. However, when positive argon ions reach the cathode and become neutral atoms by gaining electrons, the atoms can be elevated to enhanced energy levels. These atoms then return to their ground state by emitting photons which in turn produce further ionization and thereby spurious secondary discharges. If nothing were done to counteract this, ionization would be prolonged and could even escalate. The prolonged avalanche would increase the "dead time" when new events cannot be detected, and could become continuous and damage the tube. Some form of quenching of the ionization is therefore essential to reduce the dead time and protect the tube, and a number of quenching techniques are used.
Self-quenching or internal-quenching tubes stop the discharge without external assistance, by means of the addition of a small amount of a polyatomic quench gas: originally an organic vapor such as butane or ethanol, but in modern tubes a halogen such as bromine or chlorine. [ 6 ]
If a poor gas quencher is introduced to the tube, the positive argon ions, during their motion toward the cathode, would have multiple collisions with the quencher gas molecules and transfer their charge and some energy to them. Thus, neutral argon atoms would be produced and the quencher gas ions in their turn would reach the cathode, gain electrons therefrom, and move into excited states which would decay by photon emission, producing tube discharge. However, effective quencher molecules, when excited, lose their energy not by photon emission, but by dissociation into neutral quencher molecules. No spurious pulses are thus produced. [ 6 ]
Even with chemical quenching, for a short time after a discharge pulse there is a period during which the tube is rendered insensitive and is thus temporarily unable to detect the arrival of any new ionizing particle (the so-called dead time ; typically 50–100 microseconds). This causes a loss of counts at sufficiently high count rates and limits the G–M tube to an effective (accurate) count rate of approximately 10 3 counts per second even with external quenching. While a G–M tube is technically capable of reading higher count rates before it truly saturates, the level of uncertainty involved and the risk of saturation make it extremely dangerous to rely upon higher count rate readings when attempting to calculate an equivalent radiation dose rate from the count rate. A consequence of this is that ion chamber instruments are usually preferred for higher count rates; however, a modern external quenching technique can extend this upper limit considerably. [ 6 ]
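As a rough illustration of how such counting losses are commonly handled (the standard non-paralyzable dead-time model, not a formula specific to any particular tube), the true event rate n can be estimated from the measured rate m and the dead time τ as

n = m / (1 − m τ)

so that, for example, with τ = 100 microseconds a measured 1,000 counts per second corresponds to a true rate of roughly 1,100 counts per second.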
External quenching, sometimes called "active quenching" or "electronic quenching", uses simplistic high speed control electronics to rapidly remove and re-apply the high voltage between the electrodes for a fixed time after each discharge peak in order to increase the maximum count rate and lifetime of the tube. Although this can be used instead of a quench gas, it is much more commonly used in conjunction with a quench gas. [ 6 ]
The "time-to-first-count method" is a sophisticated modern implementation of external quenching that allows for dramatically increased maximum count rates via the use of statistical signal processing techniques and much more complex control electronics. Due to uncertainty in the count rate introduced by the simplistic implementation of external quenching, the count rate of a Geiger tube becomes extremely unreliable above approximately 10 3 counts per second. With the time-to-first-count method, effective count rates of 10 5 counts per second are achievable, two orders of magnitude larger than the normal effective limit. The time-to-first-count method is significantly more complicated to implement than traditional external quenching methods, and as a result of this it has not seen widespread use. [ 6 ]
One consequence of the dead time effect is the possibility of a high count rate continually triggering the tube before the recovery time has elapsed. This may produce pulses too small for the counting electronics to detect and lead to the very undesirable situation whereby a G–M counter in a very high radiation field is falsely indicating a low level. This phenomenon is known as "fold-back". An industry rule of thumb is that the discriminator circuit receiving the output from the tube should detect down to 1/10 of the magnitude of a normal pulse to guard against this. [ 5 ] Additionally the circuit should detect when "pulse pile-up " has occurred, where the apparent anode voltage has moved to a new DC level through the combination of high pulse count and noise. The electronic design of Geiger–Müller counters must be able to detect this situation and give an alarm; it is normally done by setting a threshold for excessive tube current.
The efficiency of detection of a G–M tube varies with the type of incident radiation. Tubes with thin end windows have very high efficiencies (can be nearly 100%) for high energy beta, though this drops off as the beta energy decreases due to attenuation by the window material. Alpha particles are also attenuated by the window. As alpha particles have a maximum range of less than 50 mm in air, the detection window should be as close as possible to the source of radiation. The attenuation of the window adds to the attenuation of air, so the window should have an areal density as low as 1.5 to 2.0 mg/cm² to give an acceptable level of detection efficiency. The article on stopping power explains in more detail the ranges for particle types of various energies.
The counting efficiency of photon radiation (gamma and X-rays above 25 keV) depends on the efficiency of radiation interaction in the tube wall, which increases with the atomic number of the wall material. Chromium iron is a commonly used material, which gives an efficiency of about 1% over a wide range of energies. [ 5 ]
If a G–M tube is to be used for gamma or X-ray dosimetry measurements , the energy of incident radiation, which affects the ionizing effect, must be taken into account. However pulses from a G–M tube do not carry any energy information, and attribute equal dose to each count event. Consequently, the count rate response of a "bare" G–M tube to photons at different energy levels is non-linear with the effect of over-reading at low energies. The variation in dose response can be a factor between 5 and 15, according to individual tube construction; the very small tubes having the highest values.
To correct this a technique known as "energy compensation" is applied, which consists of adding a shield of absorbing material around the tube. This filter preferentially absorbs the low energy photons and the dose response is "flattened". The aim is that the sensitivity/energy characteristic of the tube should be matched by the absorption/energy characteristic of the filter. This cannot be exactly achieved, but the result is a more uniform response over the stated range of detection energies for the tube. [ 6 ]
Lead and tin are commonly used materials, and a simple filter effective above 150 keV can be made using a continuous collar along the length of the tube. However, at lower energy levels this attenuation can become too great, so air gaps are left in the collar to allow low energy radiation to have a greater effect. In practice, compensation filter design is an empirical compromise to produce an acceptably uniform response, and a number of different materials and geometries are used to obtain the required correction. [ 5 ] | https://en.wikipedia.org/wiki/Geiger–Müller_tube |
In nuclear physics , the Geiger–Nuttall law or Geiger–Nuttall rule relates the decay constant of a radioactive isotope with the energy of the alpha particles emitted. Roughly speaking, it states that short-lived isotopes emit more energetic alpha particles than long-lived ones.
The relationship also shows that half-lives are exponentially dependent on decay energy, so that very large changes in half-life make comparatively small differences in decay energy, and thus alpha particle energy. In practice, this means that alpha particles from alpha-emitting isotopes spanning many orders of magnitude in half-life nevertheless all have about the same decay energy.
Formulated in 1911 by Hans Geiger and John Mitchell Nuttall as a relation between the decay constant and the range of alpha particles in air, [ 1 ] in its modern form [ 2 ] the Geiger–Nuttall law is
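A commonly quoted modern form of the relation, using the quantities defined below, is

log₁₀ T1/2 = A(Z)/√E + B(Z)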
where T1/2 is the half-life , E the total kinetic energy (of the alpha particle and the daughter nucleus), and A and B are coefficients that depend on the isotope's atomic number Z .
The law works best for nuclei with even atomic number and even atomic mass. The trend is still there for even-odd, odd-even, and odd-odd nuclei but is not as pronounced.
The Geiger–Nuttall law has even been extended to describe cluster decays , [ 3 ] decays where atomic nuclei larger than helium are released, e.g. silicon and carbon.
A simple way to derive this law is to consider an alpha particle in the atomic nucleus as a particle in a box . The particle is in a bound state because of the presence of the strong interaction potential. It will constantly bounce from one side to the other, and due to the possibility of quantum tunneling by the wave through the potential barrier, each time it bounces, there will be a small likelihood for it to escape.
A knowledge of this quantum mechanical effect enables one to obtain this law, including coefficients, via direct calculation. [ 4 ] This calculation was first performed by physicist George Gamow in 1928. [ 5 ] | https://en.wikipedia.org/wiki/Geiger–Nuttall_law |
Geissolosimine is an antiplasmodial indole alkaloid isolated from the bark of Geissospermum vellosii . [ 1 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Geissolosimine |
A gel is a semi-solid that can have properties ranging from soft and weak to hard and tough. [ 1 ] [ 2 ] Gels are defined as a substantially dilute cross-linked system, which exhibits no flow when in the steady state, although the liquid phase may still diffuse through this system. [ 3 ]
Gels are mostly liquid by mass , yet they behave like solids because of a three-dimensional cross-linked network within the liquid. It is the cross-linking within the fluid that gives a gel its structure (hardness) and contributes to the adhesive stick ( tack ). In this way, gels are a dispersion of molecules of a liquid within a solid medium. The word gel was coined by 19th-century Scottish chemist Thomas Graham by clipping from gelatine . [ 4 ]
The process of forming a gel is called gelation .
Gels consist of a solid three-dimensional network that spans the volume of a liquid medium and ensnares it through surface tension effects. This internal network structure may result from physical bonds such as polymer chain entanglements (see polymers ) (physical gels) or chemical bonds such as disulfide bonds (see thiomers ) (chemical gels), as well as crystallites or other junctions that remain intact within the extending fluid. Virtually any fluid can be used as an extender including water ( hydrogels ), oil, and air ( aerogel ). Both by weight and volume, gels are mostly fluid in composition and thus exhibit densities similar to those of their constituent liquids. Edible jelly is a common example of a hydrogel and has approximately the density of water.
Polyionic polymers are polymers with an ionic functional group. The ionic charges prevent the formation of tightly coiled polymer chains. This allows them to contribute more to viscosity in their stretched state, because the stretched-out polymer takes up more space. This is also the reason such a gel hardens. See polyelectrolyte for more information.
A colloidal gel consists of a percolated network of particles in a fluid medium, [ 5 ] providing mechanical properties , [ 6 ] in particular the emergence of elastic behaviour. [ 7 ] The particles can show attractive interactions through osmotic depletion or through polymeric links. [ 8 ]
Colloidal gels have three phases in their lifespan: gelation, aging and collapse. [ 9 ] [ 10 ] The gel is initially formed by the assembly of particles into a space-spanning network, leading to a phase arrest. In the aging phase, the particles slowly rearrange to form thicker strands, increasing the elasticity of the material. Gels can also be collapsed and separated by external fields such as gravity. [ 11 ] Colloidal gels show linear response rheology at low amplitudes. [ 12 ] These materials have been explored as candidates for a drug release matrix. [ 13 ]
A hydrogel is a network of polymer chains that are hydrophilic, sometimes found as a colloidal gel in which water is the dispersion medium. A three-dimensional solid results from the hydrophilic polymer chains being held together by cross-links. Because of these inherent cross-links, the hydrogel network does not dissolve, despite its high concentration of water. [ 14 ] Hydrogels are highly absorbent (they can contain over 90% water) natural or synthetic polymeric networks.
Hydrogels also possess a degree of flexibility very similar to natural tissue, due to their significant water content. As responsive " smart materials ," hydrogels can encapsulate chemical systems which upon stimulation by external factors such as a change of pH may cause specific compounds such as glucose to be liberated to the environment, in most cases by a gel-sol transition to the liquid state. [ 15 ] Chemomechanical polymers are mostly also hydrogels, which upon stimulation change their volume and can serve as actuators or sensors . The first appearance of the term 'hydrogel' in the literature was in 1894. [ 16 ]
An organogel is a non-crystalline , non-glassy thermoreversible ( thermoplastic ) solid material composed of a liquid organic phase entrapped in a three-dimensionally cross-linked network. The liquid can be, for example, an organic solvent , mineral oil , or vegetable oil . The solubility and particle dimensions of the structurant are important characteristics for the elastic properties and firmness of the organogel. Often, these systems are based on self-assembly of the structurant molecules. [ 17 ] [ 18 ] (An example of formation of an undesired thermoreversible network is the occurrence of wax crystallization in petroleum . [ 19 ] )
Organogels have potential for use in a number of applications, such as in pharmaceuticals , [ 20 ] cosmetics, art conservation, [ 21 ] and food. [ 22 ]
A xerogel / ˈ z ɪər oʊ ˌ dʒ ɛ l / is a solid formed from a gel by drying with unhindered shrinkage. Xerogels usually retain high porosity (15–50%) and enormous surface area (150–900 m²/g), along with very small pore size (1–10 nm). When solvent removal occurs under supercritical conditions, the network does not shrink and a highly porous, low-density material known as an aerogel is produced. Heat treatment of a xerogel at elevated temperature produces viscous sintering (shrinkage of the xerogel due to a small amount of viscous flow), which results in a denser and more robust solid; the density and porosity achieved depend on the sintering conditions.
Nanocomposite hydrogels [ 23 ] [ 24 ] or hybrid hydrogels, are highly hydrated polymeric networks, either physically or covalently crosslinked with each other and/or with nanoparticles or nanostructures. [ 25 ] Nanocomposite hydrogels can mimic native tissue properties, structure and microenvironment due to their hydrated and interconnected porous structure. A wide range of nanoparticles, such as carbon-based, polymeric, ceramic, and metallic nanomaterials can be incorporated within the hydrogel structure to obtain nanocomposites with tailored functionality. Nanocomposite hydrogels can be engineered to possess superior physical, chemical, electrical, thermal, and biological properties. [ 23 ] [ 26 ]
Many gels display thixotropy – they become fluid when agitated, but resolidify when resting.
In general, gels are apparently solid, jelly-like materials. They are a type of non-Newtonian fluid .
By replacing the liquid with gas it is possible to prepare aerogels , materials with exceptional properties including very low density, high specific surface areas , and excellent thermal insulation properties.
A gel is in essence the mixture of a polymer network and a solvent phase. Upon stretching, the network crosslinks are moved further apart from each other. Because the polymer strands between crosslinks act as entropic springs , gels demonstrate elasticity like rubber (which is just a polymer network, without solvent). This is so because the free energy penalty to stretch an ideal polymer segment of N monomers of size b between crosslinks to an end-to-end distance R is approximately given by [ 27 ]
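A standard result from ideal (Gaussian) chain statistics gives this penalty as

ΔF ≈ (3/2) kT R² / (N b²)

where k is the Boltzmann constant and T is the absolute temperature.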
This is the origin of both gel and rubber elasticity . One key difference, however, is that a gel contains an additional solvent phase and hence is capable of significant volume changes under deformation, by taking in or expelling solvent. For example, a gel could swell to several times its initial volume after being immersed in a solvent once equilibrium is reached. Conversely, if we take the swollen gel out and allow the solvent to evaporate, the gel shrinks back to roughly its original size. This gel volume change can alternatively be induced by applying external forces: if a uniaxial compressive stress is applied to a gel, some solvent contained in the gel is squeezed out and the gel shrinks in the direction of the applied stress.
To study the gel mechanical state in equilibrium, a good starting point is to consider a cubic gel of volume V₀ that is stretched by factors λ₁, λ₂ and λ₃ in the three orthogonal directions during swelling after being immersed in a solvent phase of initial volume V_s0. The final deformed volume of gel is then λ₁λ₂λ₃V₀ and the total volume of the system is V₀ + V_s0, which is assumed constant during the swelling process for simplicity of treatment. The swollen state of the gel is now completely characterized by the stretch factors λ₁, λ₂ and λ₃, and hence it is of interest to derive the deformation free energy as a function of them, denoted as f_gel(λ₁, λ₂, λ₃). For analogy to the historical treatment of rubber elasticity and mixing free energy, f_gel(λ₁, λ₂, λ₃) is most often defined as the free energy difference after and before the swelling, normalized by the initial gel volume V₀, that is, a free energy difference density. The form of f_gel(λ₁, λ₂, λ₃) naturally assumes two contributions of radically different physical origins, one associated with the elastic deformation of the polymer network, and the other with the mixing of the network with the solvent. Hence, we write [ 28 ]
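Writing the two contributions separately (the elastic term is labelled f_net here purely for concreteness; the mixing term f_mix is introduced below), this decomposition is

f_gel(λ₁, λ₂, λ₃) = f_net(λ₁, λ₂, λ₃) + f_mix(λ₁, λ₂, λ₃)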
We now consider the two contributions separately. The polymer elastic deformation term is independent of the solvent phase and has the same expression as a rubber, as derived in Kuhn's theory of rubber elasticity :
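In the affine (Kuhn) treatment this term takes the familiar form

f_net(λ₁, λ₂, λ₃) = (G₀/2)(λ₁² + λ₂² + λ₃² − 3)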
where G₀ denotes the shear modulus of the initial state. On the other hand, the mixing term f_mix(λ₁, λ₂, λ₃) is usually treated by the Flory–Huggins free energy of concentrated polymer solutions f(φ), where φ is the polymer volume fraction. Suppose the initial gel has a polymer volume fraction of φ₀; the polymer volume fraction after swelling would then be φ = φ₀/(λ₁λ₂λ₃), since the number of monomers remains the same while the gel volume has increased by a factor of λ₁λ₂λ₃. As the polymer volume fraction decreases from φ₀ to φ, a polymer solution of concentration φ₀ and volume V₀ is mixed with a pure solvent of volume (λ₁λ₂λ₃ − 1)V₀ to become a solution with polymer concentration φ and volume λ₁λ₂λ₃V₀. The free energy density change in this mixing step is given as
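Following that term-by-term description, and normalizing by the initial gel volume V₀, a consistent way of writing this change is

f_mix(λ₁, λ₂, λ₃) = λ₁λ₂λ₃ f(φ) − f(φ₀) − (λ₁λ₂λ₃ − 1) f(0)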
where on the right-hand side, the first term is the Flory–Huggins energy density of the final swollen gel, the second is associated with the initial gel and the third is of the pure solvent prior to mixing. Substitution of φ = φ₀/(λ₁λ₂λ₃) leads to
Note that the second term is independent of the stretching factors λ₁, λ₂ and λ₃ and hence can be dropped in subsequent analysis. Now we make use of the Flory–Huggins free energy for a polymer–solvent solution that reads [ 29 ]
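In its standard per-unit-volume form (conventions for the prefactor vary slightly between texts), this free energy density is

f(φ) = (kT/v_c) [ (φ/N) ln φ + (1 − φ) ln(1 − φ) + χ φ (1 − φ) ]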
where v_c is the monomer volume, N is the polymer strand length and χ is the Flory–Huggins energy parameter. Because in a network the polymer length is effectively infinite, we can take the limit N → ∞, and f(φ) reduces to
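In that limit the chain translational-entropy term drops out, leaving

f(φ) = (kT/v_c) [ (1 − φ) ln(1 − φ) + χ φ (1 − φ) ]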
Substitution of this expression into f_mix(λ₁, λ₂, λ₃) and addition of the network contribution leads to [ 28 ]
This provides the starting point for examining the swelling equilibrium of a gel network immersed in solvent. It can be shown that gel swelling is the competition between two forces: one is the osmotic pressure of the polymer solution, which favors the uptake of solvent and expansion; the other is the restoring force of the polymer network elasticity, which favors shrinkage. At equilibrium, the two effects exactly cancel each other in principle, and the associated λ₁, λ₂ and λ₃ define the equilibrium gel volume. In solving the force balance equation, graphical solutions are often preferred.
In an alternative, scaling approach, suppose an isotropic gel is stretched by a factor of λ in all three directions. Under the affine network approximation, the mean-square end-to-end distance in the gel increases from the initial R₀² to (λR₀)², and the elastic energy of one strand can be written as
where R_ref is the mean-square fluctuation in end-to-end distance of one strand. The modulus of the gel is then this single-strand elastic energy multiplied by strand number density ν = φ/(Nb³) to give [ 27 ]
This modulus can then be equated to osmotic pressure (through differentiation of the free energy) to give the same equation as we found above.
Consider a hydrogel made of polyelectrolytes decorated with weak acid groups that can ionize according to the reaction
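For weak acid groups HA this is the usual dissociation equilibrium

HA ⇌ H⁺ + A⁻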
Suppose this gel is immersed in a salt solution of physiological concentration. The degree of ionization of the polyelectrolytes is then controlled by the pH and, due to the charged nature of H⁺ and A⁻, by electrostatic interactions with other ions in the system. This is effectively a reacting system governed by acid–base equilibrium modulated by electrostatic effects, and is relevant in drug delivery , sea water desalination and dialysis technologies. Due to the elastic nature of the gel, the dispersion of A⁻ in the system is constrained and hence there will be a partitioning of salt ions and H⁺ inside and outside the gel, which is intimately coupled to the polyelectrolyte degree of ionization. This ion partitioning inside and outside the gel is analogous to the partitioning of ions across a semipermeable membrane in classical Donnan theory, but a membrane is not needed here because the gel volume constraint imposed by network elasticity effectively plays its role, preventing the macroions from passing through the fictitious membrane while allowing ions to pass. [ 30 ]
The coupling between the ion partitioning and the polyelectrolyte ionization degree is only partially captured by the classical Donnan theory. As a starting point we can neglect the electrostatic interactions among ions. Then at equilibrium, some of the weak acid sites in the gel would dissociate to form A⁻, which electrostatically attracts positively charged H⁺ and salt cations, leading to a relatively high concentration of H⁺ and salt cations inside the gel. But because the concentration of H⁺ is locally higher, it suppresses the further ionization of the acid sites. This phenomenon is the prediction of the classical Donnan theory. [ 31 ] However, with electrostatic interactions, there are further complications to the picture. Consider the case in which two adjacent, initially uncharged acid sites HA both dissociate to form A⁻. Since the two sites are both negatively charged, there will be a charge–charge repulsion along the backbone of the polymer that tends to stretch the chain. This energy cost is high both elastically and electrostatically and hence suppresses ionization. Even though this ionization suppression is qualitatively similar to the Donnan prediction, it is absent without electrostatic considerations and present irrespective of ion partitioning. The combination of both effects as well as gel elasticity determines the volume of the gel at equilibrium. [ 30 ] Due to the complexity of the coupled acid–base equilibrium, electrostatics and network elasticity, only recently has such a system been correctly recreated in computer simulations . [ 30 ] [ 32 ]
Some species secrete gels that are effective in parasite control. For example, the long-finned pilot whale secretes an enzymatic gel that rests on the outer surface of this animal and helps prevent other organisms from establishing colonies on the surface of these whales' bodies. [ 33 ]
Hydrogels existing naturally in the body include mucus , the vitreous humor of the eye, cartilage , tendons and blood clots . Their viscoelastic nature results in the soft tissue component of the body, disparate from the mineral-based hard tissue of the skeletal system. Researchers are actively developing synthetically derived tissue replacement technologies derived from hydrogels, for both temporary implants (degradable) and permanent implants (non-degradable). A review article on the subject discusses the use of hydrogels for nucleus pulposus replacement, cartilage replacement, and synthetic tissue models. [ 34 ]
Many substances can form gels when a suitable thickener or gelling agent is added to their formula. This approach is common in the manufacture of a wide range of products, from foods to paints and adhesives.
In fiber optic communications, a soft gel resembling hair gel in viscosity is used to fill the plastic tubes containing the fibers. The main purpose of the gel is to prevent water intrusion if the buffer tube is breached, but the gel also buffers the fibers against mechanical damage when the tube is bent around corners during installation, or flexed. Additionally, the gel acts as a processing aid when the cable is being constructed, keeping the fibers central whilst the tube material is extruded around it. | https://en.wikipedia.org/wiki/Gel |
A gel doc , also known as a gel documentation system , gel image system or gel imager , refers to equipment widely used in molecular biology laboratories for the imaging and documentation of nucleic acids and proteins suspended within polyacrylamide or agarose gels. [ 1 ] [ 2 ] Genetic information is stored in DNA. Polyacrylamide or agarose gel electrophoresis procedures are carried out to examine nucleic acids or proteins in order to analyze the genetic data. [ 3 ] For protein analysis, two-dimensional gel electrophoresis (2-DGE) is employed, one of the methods most frequently used in comparative proteomic investigations, which can distinguish thousands of proteins in a single run. Proteins are separated using 2-DGE first based on their isoelectric points (pIs) in one dimension and then based on their molecular mass in the other. After that, a thorough qualitative and quantitative analysis of the proteomes is performed using gel documentation with software image assessment methods on the 2-DGE gels stained for protein visibility. [ 4 ] Gels are typically stained with ethidium bromide [ 5 ] or other nucleic acid stains such as GelGreen .
Generally, a gel doc includes an ultraviolet (UV) light transilluminator , a hood or a darkroom to shield external light sources and protect the user from UV exposure, a computer, software and a high-performance CCD camera for image capturing. In commercial gel documentation systems, image quality generally increases with the size of the image sensor. With advances in CMOS camera sensors such as Sony's Pregius and Exmor series , low-light-capable cameras built around these sensors are also being incorporated into gel documentation systems. The dynamic range of the imaging device is a significant barrier to detecting the complete concentration range of cellular proteins in 2DE gels. Dense protein regions are extremely luminous and require just brief exposures in fluorescence imagers with full-field illumination and CCD cameras. Longer exposures are needed for protein sites with low density. High-abundance proteins are frequently found beside low-abundance proteins on 2DE gels. Because of the fluorescent signals produced by high-abundance proteins, the long exposure durations needed to detect low-abundance proteins frequently result in pixel saturation. Avoiding this detector saturation limit is crucial for getting high dynamic range gel images, since the measurement of protein regions relies on correct intensity values for all pixels within a gel image. [ 6 ]
The main manufacturers of gel documentation systems are MaestroGen, Cytiva, Bio Rad , Azure Biosystems, Bioolympics, Syngene, Vilber Lourmat, UVItec, UVP, Biozen, Imagene and Aplegen. Recently, affordable systems from Chinese manufacturers such as Clinx and Indian manufacturers such as iGene Labserve and Biozen Labs have entered the market.
For certain extremely low-light applications such as chemiluminescence (CL), gel documentation systems are also designed with cooled cameras that enable longer exposures without the sensor heating up. These ChemiDoc systems are broadly used to detect a wide range of analytes in high-throughput screening owing to their sensitivity, efficiency and low noise. [ 7 ] Sample loading, quality and separation can be verified on the ChemiDoc MP (Bio-Rad) camera system. [ 8 ] In the stain-free gel imaging procedure, tryptophan residues undergo a UV-induced reaction with trihalo compounds in the stain-free gel to produce a fluorescence signal. Using the Bio-Rad ChemiDoc MP Imaging System, the gel is activated by UV transillumination for 1 min and then photographed using the stain-free gel setting with an automatically optimized exposure duration; the exposure duration is shortened manually if the gel has been overexposed. [ 9 ] This produces images of faint bands and spots in gels and blots that would otherwise not be visible to the naked eye. The resulting images show wide, glowing regions for proteins with high abundance, and small, dim spots for proteins with low abundance.
Models also include features to handle a variety of fluorescence and chemiluminescence with cameras cooled to -28 to -60 °C. Other advanced features include instant printing on-board the camera and Wi-Fi connectivity for control by smartphone and tablet devices. | https://en.wikipedia.org/wiki/Gel_doc |
Gel electrophoresis is an electrophoresis method for separation and analysis of biomacromolecules ( DNA , RNA , proteins , etc.) and their fragments, based on their size and charge through a gel . It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments, or to separate proteins by charge. [ 1 ]
Nucleic acid molecules are separated by applying an electric field to move the negatively charged molecules through a gel matrix of agarose , polyacrylamide , or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. [ 2 ] Proteins are separated by the charge in agarose because the pores of the gel are too large to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles .
Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis. Gels suppress the thermal convection caused by the application of the electric field and can also serve to maintain the finished separation so that a post-electrophoresis stain can be applied. [ 3 ] DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique for other methods such as mass spectrometry , RFLP , PCR, cloning , DNA sequencing , or southern blotting for further characterization.
Electrophoresis is a process that enables the sorting of molecules based on charge, size, or shape. Using an electric field, molecules such as DNA can be made to move through a gel made of agarose or polyacrylamide . The electric field consists of a negative charge at one end which pushes the molecules through the gel and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric field is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel. [ 4 ]
The term " gel " in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids ( DNA , RNA , or oligonucleotides ), the gel is usually composed of different concentrations of acrylamide and a cross-linker , producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases ), the preferred matrix is purified agarose. In both cases, the gel forms a solid yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrates without cross-links, resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes . [ 5 ]
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not uniform, the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode (which is negatively charged because this is an electrolytic rather than galvanic cell ), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes. [ 6 ]
If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker is run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule. (Equivalently, the distance traveled is inversely proportional to the log of the sample's molecular weight.) [ 7 ]
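As an illustration of how this log relationship is used in practice, the short sketch below fits migration distance against log10(fragment size) for a ladder and inverts the fit to size an unknown band. The ladder sizes and distances are hypothetical example values, not data from any particular gel or instrument.

# Illustrative sketch: estimating DNA fragment size from migration distance.
# Distance migrated is roughly linear in -log10(size), so a ladder of known
# sizes can be fit with a straight line and the fit inverted for unknown bands.
import numpy as np

ladder_bp = np.array([500, 1000, 2000, 3000, 5000, 10000])  # known sizes (base pairs)
ladder_cm = np.array([5.8, 4.9, 4.0, 3.5, 2.8, 1.9])        # measured distances (cm), made-up values

slope, intercept = np.polyfit(np.log10(ladder_bp), ladder_cm, 1)

def estimate_size_bp(distance_cm):
    # Invert the calibration line: distance = slope * log10(bp) + intercept
    return 10 ** ((distance_cm - intercept) / slope)

print(round(estimate_size_bp(3.2)))  # size estimate for an unknown band that migrated 3.2 cm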
There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. [ 8 ] Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). Agarose gels, on the other hand, have lower resolving power for DNA but a greater range of separation, and are therefore usually used for DNA fragments of 50–20,000 bp in size. (Resolution of fragments larger than 6 Mb is possible with pulsed field gel electrophoresis (PFGE).) [ 9 ] Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction.
Agarose gels are made from the natural polysaccharide polymers extracted from seaweed .
Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. [ 10 ] Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pairs to several megabases (millions of bases), [ 11 ] the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead, high percentage agarose gels should be run with pulsed field electrophoresis (PFE) or field inversion electrophoresis.
"Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications." [ 12 ]
Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder and by the polymerization time used in creating a gel. Care must be used when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms.
Traditional DNA sequencing techniques such as Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base-pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. Polyacrylamide gel electrophoresis is now most often used in the fields of immunology and protein analysis, where it is used to separate different proteins or isoforms of the same protein into separate bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot .
Typically resolving gels are made in 6%, 8%, 10%, 12% or 15%. Stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein that one wishes to identify or probe in the sample. The smaller the known weight, the higher the percentage that should be used. Changes in the buffer system of the gel can help to further resolve proteins of very small sizes. [ 13 ]
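The cut-offs in the sketch below are rough rules of thumb assumed for illustration (vendor selection guides differ); they are not taken from the cited reference, but they show how a target protein size is typically mapped onto a resolving-gel percentage.

```python
# Hypothetical helper: map a target protein mass to a commonly used
# resolving-gel acrylamide percentage. Thresholds are approximate rules of thumb.
def suggest_resolving_gel(protein_kda: float) -> str:
    if protein_kda > 100:
        return "6-8%"
    if protein_kda > 40:
        return "10%"
    if protein_kda > 15:
        return "12%"
    return "15%, or a modified buffer system for very small proteins"

print(suggest_resolving_gel(25.0))   # "12%"
```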
Partially hydrolysed potato starch makes for another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Naphthalene Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%. [ 14 ] [ 15 ] [ 16 ]
Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. The secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed.
Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate , usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure , a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol . For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis .
Denaturing conditions are necessary for proper estimation of molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA, which may result in changes in its electrophoretic mobility . Urea , DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, [ 17 ] but it may be the method of choice for some samples. [ 18 ]
Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) [ 19 ] and denaturing gradient gel electrophoresis (DGGE). [ 20 ]
Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell . Complexes remain—for the most part—associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. These effects have been addressed by preparative native PAGE .
Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids ) therefore differ not only in molecular mass and intrinsic charge, but also in cross-sectional area, and thus experience different electrophoretic forces depending on the shape of the overall structure. For proteins, since they remain in the native state, they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining.
A specific example of an application of native gel electrophoresis is checking for enzymatic activity to verify the presence of the enzyme in the sample during protein purification. For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is commercially sold as a kit for staining gels. If the protein is present, the reaction proceeds in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4-chloro-2-methylbenzenediazonium (Fast Red TR diazonium salt) displaces the alcohol group, forming the final product, a red azo dye. As its name implies, this is the final visible-red product of the reaction. In undergraduate protein purification experiments, the gel is usually run next to commercial purified samples to visualize the results and conclude whether or not purification was successful. [ 22 ]
Native gel electrophoresis is typically used in proteomics and metallomics . However, native PAGE is also used to scan genes (DNA) for unknown mutations as in single-strand conformation polymorphism .
Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value.
These buffers contain plenty of ions, which are necessary for the passage of electricity through them; liquids such as distilled water or benzene contain few ions and are therefore not suitable for electrophoresis. [ 23 ] A number of buffers are used for electrophoresis. The most common for nucleic acids are Tris/Acetate/EDTA (TAE) and Tris/Borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate (LB) (which is rarely used, based on PubMed citations), isoelectric histidine, pK-matched Good's buffers, etc.; in most cases the purported rationale is lower current (less heat) and matched ion mobilities, which leads to longer buffer life. Borate is problematic as it can polymerize or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity, but provides the best resolution for larger DNA. This means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. A size difference as small as one base pair can be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate). [ 24 ]
Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band in a process called isotachophoresis . Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins.
After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie brilliant blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity , for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a gel doc system.
After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE . The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion . This can provide a great deal of information about the identities of the proteins in a complex.
Gel electrophoresis is used in forensics , molecular biology , genetics , microbiology and biochemistry . The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standard or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software.
Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications.
In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar - phosphate backbone. [ 25 ]
Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration . Circular DNA such as plasmids , however, may show multiple bands, and the speed of migration may depend on whether the DNA is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds , such as sodium hydroxide or formamide , are used to denature the nucleic acids and cause them to behave as long rods again. [ 26 ]
Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the " chain termination method " page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis .
Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28S and 18S rRNA, the 28S band being approximately twice as intense as the 18S band. Degraded RNA has less sharply defined bands, has a smeared appearance, and the intensity ratio is less than 2:1.
Proteins , unlike nucleic acids, can have varying charges and complex shapes, and therefore they may not migrate into the polyacrylamide gel at similar rates, or at all, when a negative-to-positive EMF is applied to the sample. Proteins, therefore, are usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge. [ 3 ] Generally, the amount of SDS bound is relative to the size of the protein (usually 1.4 g SDS per gram of protein), so that the resulting denatured proteins have an overall negative charge, and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the resulting SDS-coated proteins migrate in the gel is relative only to their size and not their charge or shape. [ 3 ]
Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis ( SDS-PAGE ), by native gel electrophoresis , by preparative native gel electrophoresis ( QPNC-PAGE ), or by 2-D electrophoresis .
Characterization through ligand interaction may be performed by electroblotting , by affinity electrophoresis in agarose, or by capillary electrophoresis , for example for the estimation of binding constants and the determination of structural features such as glycan content through lectin binding.
A novel application for gel electrophoresis is the separation or characterization of metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) with regard to the size, shape, or surface chemistry of the nanoparticles. [ 27 ] The aim is to obtain a more homogeneous sample (e.g. a narrower particle size distribution), which can then be used in further products/processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the key parameter is the ratio of the particle size to the mesh size, whereby two migration mechanisms were identified: the unrestricted mechanism, where the particle size << mesh size, and the restricted mechanism, where particle size is similar to mesh size. [ 28 ]
A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. [ 33 ] However, Oliver Smithies made significant contributions. Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement. | https://en.wikipedia.org/wiki/Gel_electrophoresis |
Gel electrophoresis of nucleic acids is an analytical technique to separate DNA or RNA fragments by size and reactivity. Nucleic acid molecules are placed on a gel , where an electric field induces the nucleic acids (which are negatively charged due to their sugar- phosphate backbone) to migrate toward the positively charged anode . The molecules separate as they travel through the gel based on each molecule's size and shape. Longer molecules move more slowly because the gel resists their movement more forcefully than it resists shorter molecules. After some time, the electricity is turned off and the positions of the different molecules are analyzed.
The nucleic acid to be separated can be prepared in several ways before separation by electrophoresis. In the case of large DNA molecules, the DNA is frequently cut into smaller fragments using a DNA restriction endonuclease (or restriction enzyme). In other instances, such as PCR amplified samples, enzymes present in the sample that might affect the separation of the molecules are removed through various means before analysis. Once the nucleic acid is properly prepared, the samples of the nucleic acid solution are placed in the wells of the gel and a voltage is applied across the gel for a specified amount of time.
The DNA fragments of different lengths are visualized using a fluorescent dye specific for DNA, such as ethidium bromide . The gel shows bands corresponding to populations of nucleic acid molecules with different molecular weights. Fragment size is usually reported in "nucleotides", "base pairs" or "kb" (for thousands of base pairs) depending upon whether single- or double-stranded nucleic acid has been separated. Fragment size determination is typically done by comparison to commercially available DNA markers containing linear DNA fragments of known length.
The types of gel most commonly used for nucleic acid electrophoresis are agarose (for relatively long DNA molecules) and polyacrylamide (for high resolution of short DNA molecules, for example in DNA sequencing ). Gels have conventionally been run in a "slab" format such as that shown in the figure, but capillary electrophoresis has become important for applications such as high-throughput DNA sequencing. Electrophoresis techniques used in the assessment of DNA damage include alkaline gel electrophoresis and pulsed field gel electrophoresis .
For short DNA segments such as 20 to 60 bp double stranded DNA, running them in polyacrylamide gel (PAGE) will give better resolution (native condition). [ 1 ] Similarly, RNA and single-stranded DNA can be run and visualised by PAGE gels containing denaturing agents such as urea. PAGE gels are widely used in techniques such as DNA footprinting, EMSA and other DNA-protein interaction techniques.
The measurement and analysis are mostly done with a specialized gel analysis software. Capillary electrophoresis results are typically displayed in a trace view called an electropherogram .
A number of factors can affect the migration of nucleic acids: the dimension of the gel pores, the voltage used, the ionic strength of the buffer, and the concentration of intercalating dye such as ethidium bromide, if used during electrophoresis. [ 2 ]
The gel sieves the DNA by the size of the DNA molecule, whereby smaller molecules travel faster. Double-stranded DNA moves at a rate that is approximately inversely proportional to the logarithm of the number of base pairs. This relationship, however, breaks down with very large DNA fragments, and it is not possible to separate them using standard agarose gel electrophoresis . The limit of resolution depends on gel composition and field strength, [ 3 ] and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel. [ 4 ] Separation of very large DNA fragments requires pulsed field gel electrophoresis (PFGE). In field inversion gel electrophoresis (FIGE, a kind of PFGE), it is possible to have "band inversion", where large molecules may move faster than small molecules.
The conformation of the DNA molecule can significantly affect its movement; for example, supercoiled DNA usually moves faster than relaxed DNA because it is tightly coiled and hence more compact. In a normal plasmid DNA preparation, multiple forms of DNA may be present, [ 5 ] and a gel from the electrophoresis of the plasmids would normally show a main band, which would be the negatively supercoiled form, while other forms of DNA may appear as minor fainter bands. These minor bands may be nicked DNA (open circular form) and the relaxed closed circular form, which normally run slower than supercoiled DNA , and the single-stranded form (which can sometimes appear depending on the preparation methods) may move ahead of the supercoiled DNA. The rate at which the various forms move, however, can change under different electrophoresis conditions; for example, linear DNA may run faster or slower than supercoiled DNA depending on conditions, [ 6 ] and the mobility of larger circular DNA may be more strongly affected than linear DNA by the pore size of the gel. [ 4 ] Unless supercoiled DNA markers are used, the size of a circular DNA such as a plasmid may therefore be more accurately gauged after it has been linearized by restriction digest .
DNA damage due to increased cross-linking will also reduce electrophoretic DNA migration in a dose-dependent way. [ 7 ] [ 8 ]
Circular DNA is more strongly affected by ethidium bromide concentration than linear DNA if ethidium bromide is present in the gel during electrophoresis. All naturally occurring DNA circles are underwound, but ethidium bromide, which intercalates into circular DNA, can change the charge, length, and superhelicity of the DNA molecule, so its presence during electrophoresis can affect the molecule's movement in the gel. Increasing the amount of ethidium bromide intercalated into the DNA can change it from a negatively supercoiled molecule into a fully relaxed form, then to a positively coiled superhelix at maximum intercalation. [ 9 ] Agarose gel electrophoresis can be used to resolve circular DNA with different supercoiling topology.
The concentration of the gel determines the pore size of the gel, which affects the migration of DNA. The resolution of the DNA changes with the percentage concentration of the gel. Increasing the agarose concentration of a gel reduces the migration speed and improves separation of smaller DNA molecules, while lowering gel concentration permits large DNA molecules to be separated. For a standard agarose gel electrophoresis, a 0.7% gel concentration gives good separation or resolution of large 5–10kb DNA fragments, while a 2% gel concentration gives good resolution for small 0.2–1kb fragments. Up to 3% gel concentration can be used for separating very tiny fragments, but a vertical polyacrylamide gel would be more appropriate for resolving small fragments. High concentration gels, however, require longer run times (sometimes days), and high percentage gels are often brittle and may not set evenly. High percentage agarose gels should be run with PFGE or FIGE. Low percentage gels (0.1−0.2%) are fragile and may break. 1% gels are common for many applications. [ 10 ]
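The helper below simply encodes the rules of thumb given in this paragraph; the breakpoints are approximate and real protocols vary with the application.

```python
# Illustrative helper encoding the agarose-percentage rules of thumb above.
def suggest_agarose_percent(fragment_kb: float) -> float:
    """Suggest an agarose concentration (%) for a target DNA fragment size in kb."""
    if fragment_kb > 10:
        return 0.5   # very large DNA; PFGE or FIGE is usually more appropriate
    if fragment_kb >= 1:
        return 0.7   # good resolution of large 5-10 kb fragments
    if fragment_kb >= 0.2:
        return 2.0   # good resolution of small 0.2-1 kb fragments
    return 3.0       # very small fragments; polyacrylamide is often preferred

print(suggest_agarose_percent(7.5))   # 0.7
print(suggest_agarose_percent(0.5))   # 2.0
```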
At low voltages, the rate of migration of the DNA is proportional to the voltage applied, i.e. the higher the voltage, the faster the DNA moves. However, as the electric field strength is increased, the mobility of high-molecular-weight DNA fragments increases differentially, the effective range of separation decreases, and resolution is therefore lower at high voltage. For optimal resolution of DNA greater than 2kb in size in standard gel electrophoresis, 5 to 8 V/cm is recommended. [ 6 ] Voltage is also limited by the fact that it heats the gel and may cause the gel to melt if the gel is run at high voltage for a prolonged period, particularly for low-melting point agarose gels.
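As a small illustration of how this guideline translates into a power-supply setting, the field is conventionally taken as the voltage divided by the distance between the electrodes (this convention, and the helper below, are assumptions for illustration rather than statements from the cited reference).

```python
# Hypothetical helper: convert the 5-8 V/cm guideline into a voltage range.
def voltage_range(electrode_distance_cm: float, low_v_per_cm: float = 5.0,
                  high_v_per_cm: float = 8.0) -> tuple[float, float]:
    return (low_v_per_cm * electrode_distance_cm,
            high_v_per_cm * electrode_distance_cm)

print(voltage_range(20.0))   # (100.0, 160.0) volts for a 20 cm electrode gap
```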
The mobility of DNA however may change in an unsteady field. In a field that is periodically reversed, the mobility of DNA of a particular size may drop significantly at a particular cycling frequency. [ 11 ] This phenomenon can result in band inversion whereby larger DNA fragments move faster than smaller ones in PFGE.
The negative charge of its phosphate backbone moves the DNA towards the positively charged anode during electrophoresis. However, the migration of DNA molecules in solution, in the absence of a gel matrix, is independent of molecular weight during electrophoresis, i.e. there is no separation by size without a gel matrix. [ 12 ] Hydrodynamic interactions between different parts of the DNA are cut off by streaming counterions moving in the opposite direction, so no mechanism exists to generate a dependence of velocity on length on a scale larger than the screening length of about 10 nm. [ 11 ] This makes electrophoresis different from other processes such as sedimentation or diffusion, where long-range hydrodynamic interactions are important.
The gel matrix is therefore responsible for the separation of DNA by size during electrophoresis; however, the precise mechanism responsible for the separation is not entirely clear. A number of models exist for the mechanism of separation of biomolecules in a gel matrix; a widely accepted one is the Ogston model, which treats the polymer matrix as a sieve consisting of a randomly distributed network of interconnected pores. [ 13 ] A globular protein or a random-coil DNA moves through the connected pores that are large enough to accommodate its passage, the movement of larger molecules is more likely to be impeded and slowed down by collisions with the gel matrix, and molecules of different sizes can therefore be separated in this process of sieving. [ 11 ]
The Ogston model, however, breaks down for large molecules, for which the pores are significantly smaller than the size of the molecule. For DNA molecules larger than 1 kb, a reptation model (or its variants) is most commonly used. This model assumes that the DNA can crawl in a "snake-like" fashion (hence "reptation") through the pores as an elongated molecule. At higher electric field strength, this turns into a biased reptation model, whereby the leading end of the molecule becomes strongly biased in the forward direction and pulls the rest of the molecule along. In the fully biased mode, the mobility reaches a saturation point, and DNA beyond a certain size cannot be separated. [ 13 ] Perfect parallel alignment of the chain with the field, however, is not observed in practice, as that would mean the same mobility for long and short molecules. [ 11 ] Further refinement of the biased reptation model takes into account the internal fluctuations of the chain. [ 14 ]
The biased reptation model has also been used to explain the mobility of DNA in PFGE. The orientation of the DNA is progressively built up by reptation after the onset of a field, and the time it takes to reach the steady-state velocity depends on the size of the molecule. When the field is changed, larger molecules take longer to reorient; it is therefore possible to discriminate between the long chains that cannot reach their steady-state velocity and the short ones that travel most of the time at steady velocity. [ 14 ] Other models, however, also exist.
Real-time fluorescence microscopy of stained molecules has shown more subtle dynamics during electrophoresis, with the DNA displaying considerable elasticity as it alternately stretches in the direction of the applied field and then contracts into a ball, or becomes hooked into a U-shape when it gets caught on the polymer fibres. [ 15 ] [ 16 ] This observation may be termed the "caterpillar" model. [ 17 ] Another model proposes that the DNA gets entangled with the polymer matrix, and the larger the molecule, the more likely it is to become entangled and have its movement impeded. [ 18 ]
The most common dye used to make DNA or RNA bands visible for agarose gel electrophoresis is ethidium bromide , usually abbreviated as EtBr. It fluoresces under UV light when intercalated into DNA (or RNA). By running DNA through an EtBr-treated gel and visualizing it with UV light, any band containing more than ~20 ng DNA becomes distinctly visible. EtBr is a known mutagen , [ 19 ] and safer alternatives are available, such as GelRed , produced by Biotium , which binds to the minor groove. [ 20 ]
SYBR Green I is another dsDNA stain, produced by Invitrogen . It is more expensive, but 25 times more sensitive, and possibly safer than EtBr, though there is no data addressing its mutagenicity or toxicity in humans. [ 21 ]
SYBR Safe is a variant of SYBR Green that has been shown to have low enough levels of mutagenicity and toxicity to be deemed nonhazardous waste under U.S. Federal regulations. [ 22 ] It has similar sensitivity levels to EtBr, [ 22 ] but, like SYBR Green, is significantly more expensive. In countries where safe disposal of hazardous waste is mandatory, the costs of EtBr disposal can easily outstrip the initial price difference, however.
Since EtBr-stained DNA is not visible in natural light, scientists mix DNA with negatively charged loading buffers before adding the mixture to the gel. Loading buffers are useful because they are visible in natural light (as opposed to UV light for EtBr-stained DNA), and they co-migrate with the DNA (meaning they move at the same speed as DNA of a certain length). Xylene cyanol and Bromophenol blue are common dyes found in loading buffers; they run at about the same speed as DNA fragments that are 5000 bp and 300 bp in length respectively, but the precise position varies with the percentage of the gel. Other less frequently used progress markers are Cresol Red and Orange G , which run at about 125 bp and 50 bp, respectively.
Visualization can also be achieved by transferring DNA from the gel to a nitrocellulose membrane after electrophoresis, followed by exposure to a hybridization probe . This process is termed Southern blotting .
For fluorescent dyes, after electrophoresis the gel is illuminated with an ultraviolet lamp (usually by placing it on a light box, while using protective gear to limit exposure to ultraviolet radiation). The illuminator apparatus usually also contains an imaging apparatus that takes an image of the gel after illumination with UV radiation. The ethidium bromide fluoresces reddish-orange in the presence of DNA, since it has intercalated with the DNA. The DNA band can also be cut out of the gel and then dissolved to retrieve the purified DNA.
The gel can then be photographed, usually with a digital or Polaroid camera. Although the stained nucleic acid fluoresces reddish-orange, images are usually shown in black and white (see figures). UV damage to the DNA sample can reduce the efficiency of subsequent manipulation of the sample, such as ligation and cloning. Shorter wavelength UV radiation (302 or 312 nm) causes greater damage; for example, exposure for as little as 45 seconds can significantly reduce transformation efficiency . Therefore, if the DNA is to be used for downstream procedures, exposure to shorter wavelength UV radiation should be limited; instead, longer-wavelength UV radiation (365 nm), which causes less damage, should be used. Longer wavelength radiation, however, produces weaker fluorescence, so if it is necessary to capture the gel image, shorter wavelength UV light can be used for a short time. Addition of cytidine or guanosine to the electrophoresis buffer at 1 mM concentration may protect the DNA from damage. [ 23 ] Alternatively, a blue light excitation source with a blue-excitable stain such as SYBR Green or GelGreen may be used.
Gel electrophoresis research often takes advantage of software-based image analysis tools, such as ImageJ . | https://en.wikipedia.org/wiki/Gel_electrophoresis_of_nucleic_acids |
Protein electrophoresis is a method for analysing the proteins in a fluid or an extract. The electrophoresis may be performed with a small volume of sample in a number of alternative ways with or without a supporting medium, namely agarose or polyacrylamide . Variants of gel electrophoresis include SDS-PAGE , free-flow electrophoresis , electrofocusing , isotachophoresis , affinity electrophoresis , immunoelectrophoresis , counterelectrophoresis , and capillary electrophoresis . Each variant has many subtypes with individual advantages and limitations. Gel electrophoresis is often performed in combination with electroblotting or immunoblotting to give additional information about a specific protein. [ 1 ]
SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis, describes a collection of related techniques to separate proteins according to their electrophoretic mobility (a function of the molecular weight of a polypeptide chain) while in the denatured (unfolded) state. In most proteins, the binding of SDS to the polypeptide chain imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis. [ 2 ]
SDS is a strong detergent agent used to denature native proteins to unfolded, individual polypeptides . When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. In this process, the intrinsic charges of the polypeptides become negligible compared to the negative charges contributed by SDS. Thus polypeptides after treatment become rod-like structures possessing a uniform charge density, that is, the same net negative charge per unit length. The electrophoretic mobilities of these proteins will be a linear function of the logarithms of their molecular weights. [ 3 ]
Native gels, also known as non-denaturing gels, analyze proteins that are still in their folded state. Thus, the electrophoretic mobility depends not only on the charge-to-mass ratio, but also on the physical shape and size of the protein. [ 4 ]
BN-PAGE is a native PAGE technique, where the Coomassie brilliant blue dye provides the necessary charges to the protein complexes for the electrophoretic separation. [ 5 ] [ 6 ] The disadvantage of Coomassie is that in binding to proteins it can act like a detergent causing complexes to dissociate . Another drawback is the potential quenching of chemoluminescence (e.g. in subsequent western blot detection or activity assays) or fluorescence of proteins with prosthetic groups (e.g. heme or chlorophyll ) or labelled with fluorescent dyes. [ citation needed ]
CN-PAGE (commonly referred to as Native PAGE) separates acidic water-soluble and membrane proteins in a polyacrylamide gradient gel. It uses no charged dye so the electrophoretic mobility of proteins in CN-PAGE (in contrast to the charge shift technique BN-PAGE) is related to the intrinsic charge of the proteins. [ 7 ] The migration distance depends on the protein charge, its size and the pore size of the gel. In many cases this method has lower resolution than BN-PAGE, but CN-PAGE offers advantages whenever Coomassie dye would interfere with further analytical techniques, for example it has been described as a very efficient microscale separation technique for FRET analyses. [ 8 ] Additionally, as CN-PAGE does not require the harsh conditions of BN-PAGE, it can retain the supramolecular assemblies of membrane protein complexes that would be dissociated in BN-PAGE. [ 7 ]
The folded protein complexes of interest separate cleanly and predictably without the risk of denaturation due to the specific properties of the polyacrylamide gel, electrophoresis buffer solution, electrophoretic equipment and standard parameters used. The separated proteins are continuously eluted into a physiological eluent and transported to a fraction collector. In four to five PAGE fractions each the different metal cofactors can be identified and absolutely quantified by high-resolution ICP-MS . The associated structures of the isolated metalloproteins in these fractions can be specifically determined by solution NMR spectroscopy. [ 9 ]
Most protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band. The formation of the ion gradient is achieved by choosing a pH value at which the ions of the buffer are only moderately charged compared to the SDS-coated proteins. These conditions provide an environment in which Kohlrausch's reactions determine the molar conductivity . As a result, SDS-coated proteins are concentrated to several fold in a thin zone of the order of 19 μm within a few minutes. At this stage all proteins migrate at the same migration speed by isotachophoresis . This occurs in a region of the gel that has larger pores so that the gel matrix does not retard the migration during the focusing or "stacking" event. [ 10 ] [ 11 ] Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that now determines the electrophoretic mobility of the proteins. At the same time, the separating part of the gel also has a pH value in which the buffer ions on average carry a greater charge, causing them to "outrun" the SDS-covered proteins and eliminate the ion gradient and thereby the stacking effect. [ citation needed ]
A very widespread discontinuous buffer system is the tris-glycine or " Laemmli " system that stacks at a pH of 6.8 and resolves at a pH of ~8.3-9.0. A drawback of this system is that these pH values may promote disulfide bond formation between cysteine residues in the proteins because the pKa of cysteine ranges from 8-9 and because reducing agent present in the loading buffer doesn't co-migrate with the proteins. Recent advances in buffering technology alleviate this problem by resolving the proteins at a pH well below the pKa of cysteine (e.g., bis-tris , pH 6.5) and include reducing agents (e.g. sodium bisulfite) that move into the gel ahead of the proteins to maintain a reducing environment. An additional benefit of using buffers with lower pH values is that the acrylamide gel is more stable at lower pH values, so the gels can be stored for long periods of time before use. [ 12 ] [ 13 ]
As voltage is applied, the anions (and negatively charged sample molecules) migrate toward the positive electrode (anode) in the lower chamber, the leading ion is Cl − ( high mobility and high concentration); glycinate is the trailing ion (low mobility and low concentration). SDS-protein particles do not migrate freely at the border between the Cl − of the gel buffer and the Gly − of the cathode buffer. Friedrich Kohlrausch found that Ohm's law also applies to dissolved electrolytes . Because of the voltage drop between the Cl − and Glycine-buffers, proteins are compressed (stacked) into micrometer thin layers. [ 14 ] The boundary moves through a pore gradient and the protein stack gradually disperses due to a frictional resistance increase of the gel matrix. Stacking and unstacking occurs continuously in the gradient gel, for every protein at a different position. For a complete protein unstacking the polyacrylamide-gel concentration must exceed 16% T. The two-gel system of "Laemmli" is a simple gradient gel. The pH discontinuity of the buffers is of no significance for the separation quality, and a "stacking-gel" with a different pH is not needed. [ 15 ]
The most popular protein stain is Coomassie brilliant blue . It is an anionic dye, which non-specifically binds to proteins. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background. [ 16 ] [ 17 ]
When a more sensitive method than Coomassie staining is needed, silver staining is usually used. Silver staining is a sensitive procedure to detect trace amounts of proteins in gels, but it can also visualize nucleic acids or polysaccharides. [ 17 ]
Visualization methods that do not use a dye such as Coomassie or silver are available on the market. [ 18 ] For example, Bio-Rad Laboratories markets "stain-free" gels for SDS-PAGE gel electrophoresis. Alternatively, reversible fluorescent dyes, such as AzureRed or Azure TotalStain Q from Azure Biosystems, can be used. [ 17 ] [ 18 ] [ 19 ]
As in nucleic acid gel electrophoresis, a tracking dye is often used. Anionic dyes of known electrophoretic mobility are usually included in the sample buffer. A very common tracking dye is Bromophenol blue . This dye is coloured at alkaline and neutral pH and is a small, negatively charged molecule that moves towards the anode. Being a highly mobile molecule, it moves ahead of most proteins. [ 20 ]
In medicine , protein electrophoresis is a method of analysing the proteins mainly in blood serum . Before the widespread use of gel electrophoresis , protein electrophoresis was performed as free-flow electrophoresis (on paper) or as immunoelectrophoresis. [ citation needed ]
Traditionally, two classes of blood proteins are considered: serum albumin and globulin . They are generally equal in proportion, but albumin is a much smaller molecule and is lightly negatively charged, leading to an accumulation of albumin on the electrophoretic gel. A small band before albumin represents transthyretin (also named prealbumin). Some forms of medication or body chemicals can cause their own band, but it is usually small. Abnormal bands (spikes) are seen in monoclonal gammopathy of undetermined significance and multiple myeloma , and are useful in the diagnosis of these conditions. [ citation needed ]
The globulins are classified by their banding pattern (with their main representatives): [ citation needed ] | https://en.wikipedia.org/wiki/Gel_electrophoresis_of_proteins |
In molecular biology , gel extraction or gel isolation is a technique used to isolate a desired fragment of intact DNA from an agarose gel following agarose gel electrophoresis . After extraction, fragments of interest can be mixed, precipitated, and enzymatically ligated together in several simple steps. This process, usually performed on plasmids , is the basis for rudimentary genetic engineering .
After DNA samples are run on an agarose gel, extraction involves four basic steps: identifying the fragments of interest, isolating the corresponding bands, isolating the DNA from those bands, and removing the accompanying salts and stain.
To begin, UV light is shone on the gel in order to illuminate all the ethidium bromide -stained DNA. Care must be taken to avoid exposing the DNA to mutagenic radiation for longer than absolutely necessary. The desired band is identified and physically removed with a cover slip or razor blade . The removed slice of gel should contain the desired DNA inside. An alternative method, utilizing SYBR Safe DNA gel stain and blue-light illumination, avoids the DNA damage associated with ethidium bromide and UV light. [ 1 ]
Several strategies for isolating and cleaning the DNA fragment of interest exist.
Gel extraction kits are available from several major biotech manufacturers for a final cost of approximately 1–2 US$ per sample.
Protocols included in these kits generally call for the dissolution of the gel-slice in 3 volumes of chaotropic agent at 50 °C, followed by application of the solution to a spin-column (the DNA remains in the column), a 70% ethanol wash (the DNA remains in the column, salt and impurities are washed out), and elution of the DNA in a small volume (30 μL) of water or buffer. [ 2 ]
The gel fragment is placed in a dialysis tube that is permeable to fluids but impermeable to molecules at the size of DNA, thus preventing the DNA from passing through the membrane when soaked in TE buffer . An electric field is established around the tubing (in a way similar to gel electrophoresis) long enough so that the DNA is removed from the gel but remains in the tube. The tube solution can then be pipetted out and will contain the desired DNA with minimal background.
The traditional method of gel extraction involves creating a folded pocket of Parafilm wax paper and placing the agarose fragment inside. The agarose is physically compressed with a finger into a corner of the pocket, partially liquifying the gel and its contents. The liquid droplets can then be directed out of the pocket onto an exterior piece of Parafilm, where they are pipetted into a small tube. A butanol extraction removes the ethidium bromide stain, followed by a phenol/chloroform extraction of the cleaned DNA fragment.
The disadvantage of gel isolation is that background can only be removed if it can be physically identified using the UV light. If two bands are very close together, it can be hard to separate them without some contamination. In order to clearly identify the band of interest, further restriction digests may be necessary. Restriction sites unique to unwanted bands of similar size can aid in breaking up these potential contaminants. | https://en.wikipedia.org/wiki/Gel_extraction |
Gel permeation chromatography ( GPC ) [ 1 ] is a type of size-exclusion chromatography (SEC), that separates high molecular weight or colloidal analytes on the basis of size or diameter, typically in organic solvents. The technique is often used for the analysis of polymers . As a technique, SEC was first developed in 1955 by Lathe and Ruthven. [ 2 ] The term gel permeation chromatography can be traced back to J.C. Moore of the Dow Chemical Company who investigated the technique in 1964. [ 3 ] The proprietary column technology was licensed to Waters Corporation , who subsequently commercialized this technology in 1964. [ 4 ] GPC systems and consumables are now also available from a number of manufacturers. It is often necessary to separate polymers, both to analyze them as well as to purify the desired product.
When characterizing polymers, it is important to consider their size distribution and dispersity ( Đ ) as well as their molecular weight . Polymers can be characterized by a variety of definitions for molecular weight including the number average molecular weight (M n ), the weight average molecular weight (M w ) (see molar mass distribution ), the size average molecular weight (M z ), or the viscosity molecular weight (M v ). GPC allows for the determination of Đ as well as M v and, based on other data, the M n , M w , and M z can be determined.
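To make these definitions concrete, the averages can be computed directly from a discrete distribution of chain masses. The sketch below uses made-up numbers purely for illustration; it is not data from the cited sources.

```python
# Compute Mn, Mw, Mz, and dispersity from a discrete molar mass distribution.
import numpy as np

M = np.array([5_000.0, 10_000.0, 20_000.0, 40_000.0])  # molar masses (g/mol)
n = np.array([0.40, 0.30, 0.20, 0.10])                  # number fraction of chains

Mn = np.sum(n * M) / np.sum(n)            # number-average molecular weight
Mw = np.sum(n * M**2) / np.sum(n * M)     # weight-average molecular weight
Mz = np.sum(n * M**3) / np.sum(n * M**2)  # z-average molecular weight
dispersity = Mw / Mn                      # dispersity (Đ = Mw/Mn)

print(f"Mn={Mn:.0f}  Mw={Mw:.0f}  Mz={Mz:.0f}  D={dispersity:.2f}")
# Mn=13000  Mw=21538  Mz=29821  D=1.66 (approximately)
```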
GPC is a type of chromatography in which analytes are separated, based on their size or hydrodynamic volume ( radius of gyration ). This differs from other chromatographic techniques, which depend upon chemical or physical interactions between the mobile and stationary phases to separate analytes. [ 5 ] Separation occurs via the use of porous gel beads packed inside a column (see stationary phase (chemistry) ). The principle of separation relies on the differential exclusion or inclusion of the macromolecules by the porous gel stationary phase. Larger molecules are excluded from entering the pores and elute earlier, while smaller molecules can enter the pores, thus staying longer inside the column. The entire process takes place without any interaction of the analytes with the surface of the stationary phase.
Analytes that are small relative to the pore sizes can permeate the pores and spend more time inside the gel particles, increasing their retention time. Conversely, analytes that are large relative to the pore sizes spend little if any time inside the gel particles, and hence they elute sooner. Each type of column has a range of molecular weights that can be separated, according to its pore sizes.
If an analyte is too large relative to the column's pores, it will not be retained at all and will be totally excluded; conversely, if the analyte is small relative to the pore sizes, it will be totally permeating. Analytes that are totally excluded elute with the free volume outside the particles (V o ), the total exclusion limit, while analytes that are completely delayed elute with the solvent, marking the total permeation volume of the column, which also includes the solvent held inside the pores (V i ). The total volume can be considered by the following equation, where V g is the volume of the polymer gel and V t is the total volume: [ 5 ] V t = V g + V i + V o {\displaystyle Vt=Vg+Vi+Vo}
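A small sketch of how these volumes relate to elution behaviour, using the conventional SEC distribution coefficient K (a standard term assumed here for illustration, not one defined in the text above), where K = 0 corresponds to total exclusion and K = 1 to total permeation:

```python
# Ve = Vo + K*Vi, with 0 <= K <= 1, relates elution volume to the column volumes.
def elution_volume(V_o: float, V_i: float, K: float) -> float:
    """Elution volume for an analyte with SEC distribution coefficient K."""
    return V_o + K * V_i

V_o, V_i, V_g = 6.0, 9.0, 5.0   # mL: interstitial, pore, and gel volumes (illustrative)
V_t = V_o + V_i + V_g           # total column volume, as in the equation above

print(elution_volume(V_o, V_i, 0.0))   # 6.0 mL  -> totally excluded analyte
print(elution_volume(V_o, V_i, 1.0))   # 15.0 mL -> totally permeating analyte
print(V_t)                             # 20.0 mL
```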
As can be inferred, there is a limited range of molecular weights that can be separated by each column; therefore, the size of the pores in the packing should be chosen according to the range of molecular weights of the analytes to be separated. For polymer separations the pore sizes should be on the order of the size of the polymers being analyzed. If a sample has a broad molecular weight range, it may be necessary to use several GPC columns with varying pore sizes in tandem to resolve the sample fully.
GPC is often used to determine the relative molecular weight of polymer samples as well as the distribution of molecular weights. What GPC truly measures is the molecular volume and shape function as defined by the intrinsic viscosity . If comparable standards are used, this relative data can be used to determine molecular weights within ± 5% accuracy. Polystyrene standards with dispersities of less than 1.2 are typically used to calibrate the GPC. [ 6 ] Unfortunately, polystyrene tends to be a very linear polymer and therefore as a standard it is only useful to compare it to other polymers that are known to be linear and of relatively the same size.
Gel permeation chromatography is conducted almost exclusively in chromatography systems. The experimental design is not much different from that of other techniques of high-performance liquid chromatography . Samples are dissolved in an appropriate solvent (in the case of GPC these tend to be organic solvents) and, after filtering, the solution is injected onto a column. The separation of the multi-component mixture takes place in the column. A constant supply of fresh eluent to the column is provided by a pump. Since most analytes are not visible to the naked eye, a detector is needed. Often multiple detectors are used to gain additional information about the polymer sample. The availability of a detector makes the fractionation convenient and accurate.
Gels are used as stationary phase for GPC. The pore size of a gel must be carefully controlled in order to be able to apply the gel to a given separation. Other desirable properties of the gel forming agent are the absence of ionizing groups and, in a given solvent, low affinity for the substances to be separated. Commercial gels like PLgel & Styragel (cross-linked polystyrene-divinylbenzene), [ 7 ] [ 8 ] LH-20 (hydroxypropylated Sephadex ), [ 9 ] Bio-Gel ( cross-linked polyacrylamide ), HW-20 & HW-40 (hydroxylated methacrylic polymer ), [ 10 ] and agarose gel are often used based on different separation requirements. [ 11 ]
The column used for GPC is filled with a microporous packing material, the gel. Since the total penetration volume is the maximum volume permeated by the analytes, and there is no retention on the surface of the stationary phase, the total column volume is usually large relative to the sample volume.
The eluent (mobile phase) should be an appropriate solvent able to dissolve the polymer, should not interfere with the response of the polymer being analyzed, and should wet the packing surface and render it inert to interactions with the polymers. The most common eluents are tetrahydrofuran (THF) for polymers that dissolve at room temperature, o -dichlorobenzene and trichlorobenzene at 130–150 °C for crystalline polyalkenes (polyolefins), and hexafluoroisopropanol (HFIP) for crystalline condensation polymers such as polyamides and polyesters . [ 12 ]
There are two types of pumps available for uniform delivery of relatively small liquid volumes for GPC: piston or peristaltic pumps. The delivery of a constant flow free of fluctuations is especially important to the precision of the GPC analysis, as the flow-rate is used for the calibration of the molecular weight, or diameter. [ 13 ]
In GPC, the concentration by weight of polymer in the eluting solvent may be monitored continuously with a detector. There are many detector types available and they can be divided into two main categories. The first is concentration sensitive detectors which includes UV-VIS absorption, differential refractometer (DRI) or refractive index (RI) detectors, infrared (IR) absorption and density detectors. The second category is molecular weight sensitive detectors, which include low angle light scattering detectors (LALLS) and multi angle light scattering (MALS). [ 14 ] The resulting chromatogram is therefore a weight distribution of the polymer as a function of retention volume.
The most sensitive detector is the differential UV photometer and the most common detector is the differential refractometer (DRI). When characterizing copolymer, it is necessary to have two detectors in series. [ 6 ] For accurate determinations of copolymer composition at least two of those detectors should be concentration detectors. [ 14 ] The determination of most copolymer compositions is done using UV and RI detectors, although other combinations can be used. [ 15 ]
Gel permeation chromatography (GPC) has become the most widely used technique for analyzing polymer samples in order to determine their molecular weights and weight distributions. Examples of GPC chromatograms of polystyrene samples with their molecular weights and dispersities are shown on the left.
Benoit and co-workers [ 16 ] proposed that the hydrodynamic volume, V η , which is proportional to the product of [η] and M, where [η] is the intrinsic viscosity of the polymer in the SEC eluent, may be used as the universal calibration parameter. If the Mark–Houwink–Sakurada constants K and α are known (see Mark–Houwink equation ), a plot of log [η]M versus elution volume (or elution time) for a particular solvent, column and instrument provides a universal calibration curve which can be used for any polymer in that solvent. By determining the retention volumes (or times) of monodisperse polymer standards (e.g. solutions of monodispersed polystyrene in THF), a calibration curve can be obtained by plotting the logarithm of the molecular weight versus the retention time or volume. Once the calibration curve is obtained, the gel permeation chromatogram of any other polymer can be obtained in the same solvent and the molecular weights (usually M n and M w ) and the complete molecular weight distribution for the polymer can be determined. A typical calibration curve is shown to the right and the molecular weight from an unknown sample can be obtained from the calibration curve.
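As a quick illustration of how the universal calibration is applied in practice: at a given elution volume the hydrodynamic volumes are equal, so [η] 1 M 1 = [η] 2 M 2 , and combining this with the Mark–Houwink relation [η] = KM a converts a polystyrene-equivalent molecular weight into that of the polymer of interest. The sketch below is an assumed example; the K and a constants shown are placeholders that must be replaced with tabulated values for the actual polymer/solvent/temperature combination.

```python
# Universal calibration: K1*M1**(1+a1) == K2*M2**(1+a2) at equal elution volume.
import math

def convert_molar_mass(M1: float, K1: float, a1: float, K2: float, a2: float) -> float:
    """Convert a molar mass calibrated with polymer 1 into the equivalent for polymer 2."""
    log_M2 = (math.log10(K1 / K2) + (1 + a1) * math.log10(M1)) / (1 + a2)
    return 10 ** log_M2

# Example with placeholder Mark-Houwink constants (polystyrene standard -> sample polymer).
M_ps_equivalent = 100_000.0
print(f"{convert_molar_mass(M_ps_equivalent, K1=1.14e-4, a1=0.72, K2=4.9e-4, a2=0.59):.0f}")
```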
As a separation technique, GPC has many advantages. First of all, it has a well-defined separation time due to the fact that there is a final elution volume for all unretained analytes. Additionally, GPC can provide narrow bands, although this aspect of GPC is more difficult for polymer samples that have broad ranges of molecular weights present. Finally, since the analytes do not interact chemically or physically with the column, there is a lower chance for analyte loss to occur. [ 5 ] For investigating the properties of polymer samples in particular, GPC can be very advantageous. GPC provides a more convenient method of determining the molecular weights of polymers. In fact most samples can be thoroughly analyzed in an hour or less. [ 17 ] Other methods used in the past were fractional extraction and fractional precipitation. As these processes were quite labor-intensive, molecular weights and mass distributions typically were not analyzed. [ 18 ] Therefore, GPC has allowed for the quick and relatively easy estimation of molecular weights and distributions for polymer samples.
There are disadvantages to GPC, however. First, there is a limited number of peaks that can be resolved within the short time scale of the GPC run. Also, as a technique GPC requires at least about a 10% difference in molecular weight for a reasonable resolution of peaks to occur. [ 5 ] In regard to polymers, the molecular masses of most of the chains will be too close for the GPC separation to show anything more than broad peaks. Another disadvantage of GPC for polymers is that filtrations must be performed before using the instrument to prevent dust and other particulates from ruining the columns and interfering with the detectors. Although useful for protecting the instrument, pre-filtration may remove higher molecular weight sample before it can be loaded on the column. Another way to overcome these issues is separation by field-flow fractionation (FFF).
Field-flow fractionation (FFF) can be considered as an alternative to GPC, especially when particles or high molar mass polymers cause clogging of the column, shear degradation is an issue or agglomeration takes place but cannot be made visible. FFF is separation in an open flow channel without having a static phase involved so no interactions occur. With one field-flow fractionation version, thermal field-flow fractionation , separation of polymers having the same size but different chemical compositions is possible. [ 19 ] | https://en.wikipedia.org/wiki/Gel_permeation_chromatography |
In polymer chemistry , the gel point is an abrupt change in the viscosity of a solution containing polymerizable components. At the gel point, a solution undergoes gelation , as reflected in a loss in fluidity. After the monomer/polymer solution has passed the gel point, internal stress builds up in the gel phase, which can lead to volume shrinkage. Gelation is characteristic of polymerizations that include crosslinkers that can form 2- or 3-dimensional networks. For example, the condensation of a dicarboxylic acid and a triol will give rise to a gel whereas the same dicarboxylic acid and a diol will not. The gel is often a small percentage of the mixture, even though it greatly influences the properties of the bulk. [ 1 ]
An infinite polymer network appears at the gel point. Assuming that it is possible to measure the extent of reaction, p {\displaystyle p} , defined as the fraction of monomers that appear in cross-links , the gel point can be determined. [ 2 ] The critical extent of reaction p c {\displaystyle p_{c}} for the gel point to be formed is given by:
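A common form of this relation, assuming crosslinking of long precursor chains that each contain N monomers (the reading consistent with the example that follows), is p c ≈ 1 / N {\displaystyle p_{c}\approx 1/N} .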
For example, a polymer with N≈200 is able to reach the gel point with only 0.5% of monomers reacting. This shows the ease at which polymers are able to form infinite networks.
The critical extent of reaction for gelation can be determined as a function of the properties of the monomer mixture, r {\displaystyle r} , p {\displaystyle p} , and f {\displaystyle f} : [ 3 ] | https://en.wikipedia.org/wiki/Gel_point |
The gel point of petroleum products is the temperature at which the liquids gel so they no longer flow by gravity or can be pumped through fuel lines. This phenomenon happens when the petroleum product reaches a low enough temperature to precipitate interlinked paraffin wax crystals throughout the fluid .
More highly distilled petroleum products have fewer paraffins and will have a lower gel point. On the other hand, the gel point of crude oil is dependent upon the composition of the crude oil as some crude oils contain more or less components that dissolve the paraffins. In some cases the gel point of a crude oil may be correlated from the pour point . [ 1 ] [ 2 ]
The gel points of some common petroleum products are as follows:
For the petroleum product to flow again, it needs to be brought above the gel point temperature to the ungel point, which is typically near its pour point. However, without stirring the paraffin waxes may still remain in crystal form so the fuel may have to be warmed further to its remix temperature to completely re-dissolve the waxes.
Anti-gel additives are sometimes added to petroleum products where cold temperature may affect their use. The additives act to reduce the formation of wax crystals in the product, thereby lowering the pour point and the gel point of the fuel. Anti-gel additives may not necessarily affect the cloud point .
This chemistry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gel_point_(petroleum) |
In polymer chemistry , gelation ( gel transition ) is the formation of a gel from a system with polymers . [ 1 ] [ 2 ] Branched polymers can form links between the chains, which lead to progressively larger polymers. As the linking continues, larger branched polymers are obtained and at a certain extent of the reaction, links between the polymer result in the formation of a single macroscopic molecule . At that point in the reaction, which is defined as gel point , the system loses fluidity and viscosity becomes very large. The onset of gelation, or gel point, is accompanied by a sudden increase in viscosity. [ 3 ] This "infinite" sized polymer is called the gel or network, which does not dissolve in the solvent, but can swell in it. [ 4 ]
Gelation is promoted by gelling agents .
Gelation can occur either by physical linking or by chemical crosslinking . While the physical gels involve physical bonds, chemical gelation involves covalent bonds. The first quantitative theories of chemical gelation were formulated in the 1940s by Flory and Stockmayer . Critical percolation theory was successfully applied to gelation in 1970s. A number of growth models (diffusion limited aggregation, cluster-cluster aggregation, kinetic gelation) were developed in the 1980s to describe the kinetic aspects of aggregation and gelation. [ 5 ]
It is important to be able to predict the onset of gelation, since it is an irreversible process that dramatically changes the properties of the system.
According to the Carothers equation number-average degree of polymerization D P n {\displaystyle DP_{n}} is given by
D P n = 2 2 − p . f a v {\displaystyle DP_{n}={\frac {2}{2-p.f_{av}}}}
where p {\displaystyle p} is the extent of the reaction and f a v {\displaystyle f_{av}} is the average functionality of reaction mixture. For the gel D P n {\displaystyle DP_{n}} can be considered to be infinite, thus the critical extent of the reaction at the gel point is found as
p c = 2 f a v {\displaystyle p_{c}={\frac {2}{f_{av}}}}
If p {\displaystyle p} is greater than or equal to p c {\displaystyle p_{c}} , gelation occurs.
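As a worked illustration (a hypothetical mixture, not one taken from the article): for a stoichiometric mixture of 2 mol of a triol ( f = 3 ) and 3 mol of a dicarboxylic acid ( f = 2 ), f a v = ( 2 ⋅ 3 + 3 ⋅ 2 ) / ( 2 + 3 ) = 2.4 {\displaystyle f_{av}=(2\cdot 3+3\cdot 2)/(2+3)=2.4} , so p c = 2 / 2.4 ≈ 0.83 {\displaystyle p_{c}=2/2.4\approx 0.83} ; gelation is predicted at roughly 83% conversion.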
Flory and Stockmayer used a statistical approach to derive an expression to predict the gel point by calculating when D P n {\displaystyle DP_{n}} approaches infinite size. The statistical approach assumes that (1) the reactivity of the functional groups of the same type is the same and independent of the molecular size and (2) there are no intramolecular reactions between the functional groups on the same molecule. [ 6 ] [ 7 ]
Consider the polymerization of bifunctional molecules A − A {\displaystyle A-A} , B − B {\displaystyle B-B} and multifunctional A f {\displaystyle A_{f}} , where f {\displaystyle f} is the functionality. The extents of reaction of the A and B functional groups are p A {\displaystyle p_{A}} and p B {\displaystyle p_{B}} , respectively. The ratio of all A groups, both reacted and unreacted, that are part of branched units, to the total number of A groups in the mixture is defined as ρ {\displaystyle \rho } . This leads to the following reaction
A − A + B − B + A f → A f − 1 − ( B − B A − A ) n B − B A − A f − 1 {\displaystyle A-A+B-B+A_{f}\rightarrow A_{f-1}-(B-BA-A)_{n}B-BA-A_{f-1}}
The probability of obtaining the product of the reaction above is given by p A [ p B ( 1 − ρ ) p A ] n p B ρ {\displaystyle p_{A}[p_{B}(1-\rho )p_{A}]^{n}p_{B}\rho } , since the probability that a B group has reacted with a branched unit is p B ρ {\displaystyle p_{B}\rho } and the probability that a B group has reacted with a non-branched A is p B ( 1 − ρ ) {\displaystyle p_{B}(1-\rho )} .
This relation yields an expression for the extent of reaction of A functional groups at the gel point
p c = 1 { r [ 1 + ρ ( f − 2 ) ] } 1 / 2 {\displaystyle p_{c}={\frac {1}{\{r[1+\rho (f-2)]\}^{1/2}}}}
where r is the ratio of all A groups to all B groups. If more than one type of multifunctional branch unit is present, an average f {\displaystyle f} value is used for all monomer molecules with functionality greater than 2.
Note that the relation does not apply for reaction systems containing monofunctional reactants and/or both A and B type of branch units.
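As a quick numerical illustration of the relation above, the sketch below implements it directly; the function name and the example numbers are illustrative, not from the article.

```python
import math

def flory_stockmayer_pc(r, rho, f):
    """Critical extent of reaction of A groups at the gel point,
    p_c = 1 / sqrt(r * (1 + rho * (f - 2))), per the relation above.

    r   -- ratio of all A groups to all B groups
    rho -- fraction of A groups belonging to branch units
    f   -- functionality of the branch unit (or an average f if several types)
    """
    return 1.0 / math.sqrt(r * (1.0 + rho * (f - 2)))

# Illustrative numbers: stoichiometric mixture (r = 1) in which 30% of the
# A groups sit on trifunctional branch units (f = 3).
print(flory_stockmayer_pc(r=1.0, rho=0.3, f=3))   # ~0.877
```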
Gelation of polymers can be described in the framework of the Erdős–Rényi model or the Lushnikov model , which answers the question when a giant component arises. [ 8 ]
The structure of a gel network can be conceptualised as a random graph. This analogy is exploited to calculate the gel point and gel fraction for monomer precursors with arbitrary types of functional groups. Random graphs can be used to derive analytical expressions for simple polymerisation mechanisms, such as step-growth polymerisation, or alternatively, they can be combined with a system of rate equations that are integrated numerically. | https://en.wikipedia.org/wiki/Gelation |
In algebra , the Gelfand–Kirillov dimension (or GK dimension ) of a right module M over a k -algebra A is:
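In the standard formulation (with the notation explained in the next sentence), the definition reads GKdim ⁡ ( M ) = sup V , M 0 lim sup n → ∞ log ⁡ dim k ⁡ ( M 0 V n ) / log ⁡ n {\displaystyle \operatorname {GKdim} (M)=\sup _{V,M_{0}}\limsup _{n\to \infty }{\frac {\log \dim _{k}(M_{0}V^{n})}{\log n}}} .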
where the supremum is taken over all finite-dimensional subspaces V ⊂ A {\displaystyle V\subset A} and M 0 ⊂ M {\displaystyle M_{0}\subset M} .
An algebra is said to have polynomial growth if its Gelfand–Kirillov dimension is finite.
Given a right module M over the Weyl algebra A n {\displaystyle A_{n}} , the Gelfand–Kirillov dimension of M over the Weyl algebra coincides with the dimension of M , which is by definition the degree of the Hilbert polynomial of M . This enables to prove additivity in short exact sequences for the Gelfand–Kirillov dimension and finally to prove Bernstein's inequality , which states that the dimension of M must be at least n . This leads to the definition of holonomic D-modules as those with the minimal dimension n , and these modules play a great role in the geometric Langlands program .
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gelfand–Kirillov_dimension |
In mathematics , the exponential of pi e π , [ 1 ] also called Gelfond's constant, [ 2 ] is the real number e raised to the power π .
Its decimal expansion is given by: e π = 23.140692632779 …
Like both e and π , this constant is both irrational and transcendental . This follows from the Gelfond–Schneider theorem , which establishes a b to be transcendental, given that a is algebraic and not equal to zero or one and b is algebraic but not rational . We have e π = ( e i π ) − i = ( − 1 ) − i , {\displaystyle e^{\pi }=(e^{i\pi })^{-i}=(-1)^{-i},} where i is the imaginary unit . Since − i is algebraic but not rational, e π is transcendental. The numbers π and e π are also known to be algebraically independent over the rational numbers , as demonstrated by Yuri Nesterenko . [ 3 ] It is not known whether e π is a Liouville number. [ 4 ] The constant was mentioned in Hilbert's seventh problem alongside the Gelfond–Schneider constant 2 √ 2 and the name "Gelfond's constant" stems from Soviet mathematician Alexander Gelfond . [ 5 ]
The constant e π appears in relation to the volumes of hyperspheres :
The volume of an n-sphere with radius R is given by: V n ( R ) = π n 2 R n Γ ( n 2 + 1 ) , {\displaystyle V_{n}(R)={\frac {\pi ^{\frac {n}{2}}R^{n}}{\Gamma \left({\frac {n}{2}}+1\right)}},} where Γ is the gamma function . Considering only unit spheres ( R = 1 ) yields: V n ( 1 ) = π n 2 Γ ( n 2 + 1 ) , {\displaystyle V_{n}(1)={\frac {\pi ^{\frac {n}{2}}}{\Gamma \left({\frac {n}{2}}+1\right)}},} Any even-dimensional 2 n-sphere now gives: V 2 n ( 1 ) = π n Γ ( n + 1 ) = π n n ! {\displaystyle V_{2n}(1)={\frac {\pi ^{n}}{\Gamma (n+1)}}={\frac {\pi ^{n}}{n!}}} summing up all even-dimensional unit sphere volumes and utilizing the series expansion of the exponential function gives: [ 6 ] ∑ n = 0 ∞ V 2 n ( 1 ) = ∑ n = 0 ∞ π n n ! = exp ( π ) = e π . {\displaystyle \sum _{n=0}^{\infty }V_{2n}(1)=\sum _{n=0}^{\infty }{\frac {\pi ^{n}}{n!}}=\exp(\pi )=e^{\pi }.} We also have:
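A quick numerical check of this sphere-volume identity (a throwaway Python sketch, not part of the original derivation): the partial sums of π^n / n! should approach exp(π).

```python
import math

# Partial sums of the even-dimensional unit-ball volumes V_{2n}(1) = pi^n / n!
# should converge to e^pi, as derived above.
total = sum(math.pi**n / math.factorial(n) for n in range(40))
print(total)               # ~23.1406926...
print(math.exp(math.pi))   # e^pi, for comparison
```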
If one defines k 0 = 1 / √ 2 and k n + 1 = 1 − 1 − k n 2 1 + 1 − k n 2 {\displaystyle k_{n+1}={\frac {1-{\sqrt {1-k_{n}^{2}}}}{1+{\sqrt {1-k_{n}^{2}}}}}} for n > 0 , then the sequence ( 4 / k n + 1 ) 2 − n {\displaystyle (4/k_{n+1})^{2^{-n}}} converges rapidly to e π . [ 7 ]
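The recurrence above is easy to check numerically; the following sketch uses the arbitrary-precision mpmath library (precision and number of iterations chosen only for illustration) and prints successive approximants alongside e π .

```python
from mpmath import mp, mpf, sqrt, exp, pi

mp.dps = 50                      # 50 decimal digits of working precision
k = 1 / sqrt(mpf(2))             # k_0 = 1/sqrt(2)
for n in range(5):
    k = (1 - sqrt(1 - k**2)) / (1 + sqrt(1 - k**2))   # k_{n+1}
    print(n, (4 / k) ** (mpf(2) ** (-n)))             # (4/k_{n+1})^(2^-n)
print("e^pi =", exp(pi))
```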
The number e π √ 163 is known as Ramanujan's constant . Its decimal expansion is given by: 262 537 412 640 768 743.999 999 999 999 25 …
which turns out to be very close to the integer 640320 3 + 744 : This is an application of Heegner numbers , where 163 is the Heegner number in question. This number was discovered in 1859 by the mathematician Charles Hermite . [ 8 ] In a 1975 April Fool article in Scientific American magazine, [ 9 ] "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name. Ramanujan's constant is also a transcendental number.
The coincidental closeness, to within one trillionth of the number 640320 3 + 744 is explained by complex multiplication and the q -expansion of the j-invariant , specifically: j ( ( 1 + − 163 ) / 2 ) = ( − 640 320 ) 3 {\displaystyle j((1+{\sqrt {-163}})/2)=(-640\,320)^{3}} and, ( − 640 320 ) 3 = − e π 163 + 744 + O ( e − π 163 ) {\displaystyle (-640\,320)^{3}=-e^{\pi {\sqrt {163}}}+744+O\left(e^{-\pi {\sqrt {163}}}\right)} where O ( e - π √ 163 ) is the error term, O ( e − π 163 ) = − 196 884 / e π 163 ≈ − 196 884 / ( 640 320 3 + 744 ) ≈ − 0.000 000 000 000 75 {\displaystyle {\displaystyle O\left(e^{-\pi {\sqrt {163}}}\right)=-196\,884/e^{\pi {\sqrt {163}}}\approx -196\,884/(640\,320^{3}+744)\approx -0.000\,000\,000\,000\,75}} which explains why e π √ 163 is 0.000 000 000 000 75 below 640320 3 + 744 .
(For more detail on this proof, consult the article on Heegner numbers .)
The number e π − π is also very close to an integer, its decimal expansion being given by: 19.999099979189 …
The explanation for this seemingly remarkable coincidence was given by A. Doman in September 2023, and is a result of a sum related to Jacobi theta functions as follows: ∑ k = 1 ∞ ( 8 π k 2 − 2 ) e − π k 2 = 1. {\displaystyle \sum _{k=1}^{\infty }\left(8\pi k^{2}-2\right)e^{-\pi k^{2}}=1.} The first term dominates since the sum of the terms for k ≥ 2 {\displaystyle k\geq 2} total ∼ 0.0003436. {\displaystyle \sim 0.0003436.} The sum can therefore be truncated to ( 8 π − 2 ) e − π ≈ 1 , {\displaystyle \left(8\pi -2\right)e^{-\pi }\approx 1,} where solving for e π {\displaystyle e^{\pi }} gives e π ≈ 8 π − 2. {\displaystyle e^{\pi }\approx 8\pi -2.} Rewriting the approximation for e π {\displaystyle e^{\pi }} and using the approximation for 7 π ≈ 22 {\displaystyle 7\pi \approx 22} gives e π ≈ π + 7 π − 2 ≈ π + 22 − 2 = π + 20. {\displaystyle e^{\pi }\approx \pi +7\pi -2\approx \pi +22-2=\pi +20.} Thus, rearranging terms gives e π − π ≈ 20. {\displaystyle e^{\pi }-\pi \approx 20.} Ironically, the crude approximation for 7 π {\displaystyle 7\pi } yields an additional order of magnitude of precision. [ 10 ]
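These quantities can be checked numerically with a few lines of Python (an illustrative sketch only, truncating the sum after enough terms for double precision).

```python
import math

# Numerical check of the identity and approximations quoted above.
s = sum((8 * math.pi * k**2 - 2) * math.exp(-math.pi * k**2) for k in range(1, 50))
print(s)                                   # should be very close to 1
print(math.exp(math.pi) - math.pi)         # ~19.9991, i.e. close to 20
print(8 * math.pi - 2)                     # crude estimate of e^pi from the k = 1 term
```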
The decimal expansion of π e is given by: 22.459157718361 …
It is not known whether or not this number is transcendental. Note that, by Gelfond–Schneider theorem , we can only infer definitively whether or not a b is transcendental if a and b are algebraic ( a and b are both considered complex numbers ).
In the case of e π , we are only able to prove this number transcendental due to properties of complex exponential forms and the above equivalency given to transform it into (−1) − i , allowing the application of Gelfond–Schneider theorem.
π e has no such equivalence, and hence, as both π and e are transcendental, we can not use the Gelfond–Schneider theorem to draw conclusions about the transcendence of π e . However the currently unproven Schanuel's conjecture would imply its transcendence. [ 11 ]
Using the principal value of the complex logarithm, i i = ( e i π / 2 ) i = e − π / 2 = ( e π ) − 1 / 2 {\displaystyle i^{i}=(e^{i\pi /2})^{i}=e^{-\pi /2}=(e^{\pi })^{-1/2}} . The decimal expansion of i i is given by: 0.207879576350761 …
Its transcendence follows directly from the transcendence of e π and directly from Gelfond–Schneider theorem. | https://en.wikipedia.org/wiki/Gelfond's_constant |
The Gelfond–Schneider constant or Hilbert number [ 1 ] is two to the power of the square root of two : 2 √ 2 = 2.665144142690 …
which was proved to be a transcendental number by Rodion Kuzmin in 1930. [ 2 ] In 1934, Aleksandr Gelfond and Theodor Schneider independently proved the more general Gelfond–Schneider theorem , [ 3 ] which solved the part of Hilbert's seventh problem described below.
The square root of the Gelfond–Schneider constant is the transcendental number √ 2 √ 2 = 1.6325269 …
This same constant can be used to prove that "an irrational elevated to an irrational power may be rational", even without first proving its transcendence. The proof proceeds as follows: either 2 2 {\displaystyle {\sqrt {2}}^{\sqrt {2}}} is a rational which proves the theorem, or it is irrational (as it turns out to be) and then
is an irrational to an irrational power that is a rational which proves the theorem. [ 4 ] [ 5 ] The proof is not constructive , as it does not say which of the two cases is true, but it is much simpler than Kuzmin's proof.
Part of the seventh of Hilbert's twenty-three problems posed in 1900 was to prove, or find a counterexample to, the claim that a b is always transcendental for algebraic a ≠ 0, 1 and irrational algebraic b . In the address he gave two explicit examples, one of them being the Gelfond–Schneider constant 2 √ 2 .
In 1919, he gave a lecture on number theory and spoke of three conjectures: the Riemann hypothesis , Fermat's Last Theorem , and the transcendence of 2 √ 2 . He mentioned to the audience that he didn't expect anyone in the hall to live long enough to see a proof of this result. [ 6 ] But the proof of this number's transcendence was published by Kuzmin in 1930, [ 2 ] well within Hilbert 's own lifetime. Namely, Kuzmin proved the case where the exponent b is a real quadratic irrational , which was later extended to an arbitrary algebraic irrational b by Gelfond and by Schneider. | https://en.wikipedia.org/wiki/Gelfond–Schneider_constant |
In mathematics , the Gelfond–Schneider theorem establishes the transcendence of a large class of numbers.
It was originally proved independently in 1934 by Aleksandr Gelfond and Theodor Schneider . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
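In its standard form, the theorem states: if a and b are algebraic numbers with a ≠ 0, 1 , and b is irrational, then any value of a b is transcendental.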
The values of a and b are not restricted to real numbers ; complex numbers are allowed (here complex numbers are not regarded as rational when they have an imaginary part not equal to 0, even if both the real and imaginary parts are rational).
In general, a b = exp( b log a ) is multivalued , where log stands for the complex natural logarithm . (This is the multivalued inverse of the exponential function exp.) This accounts for the phrase "any value of" in the theorem's statement.
An equivalent formulation of the theorem is the following: if α and γ are nonzero algebraic numbers, and we take any non-zero logarithm of α , then (log γ )/(log α ) is either rational or transcendental. This may be expressed as saying that if log α , log γ are linearly independent over the rationals, then they are linearly independent over the algebraic numbers. The generalisation of this statement to more general linear forms in logarithms of several algebraic numbers is in the domain of transcendental number theory .
If the restriction that a and b be algebraic is removed, the statement does not remain true in general. For example, ( 2 2 ) 2 = 2 2 ⋅ 2 = 2 2 = 2. {\displaystyle {\left({\sqrt {2}}^{\sqrt {2}}\right)}^{\sqrt {2}}={\sqrt {2}}^{{\sqrt {2}}\cdot {\sqrt {2}}}={\sqrt {2}}^{2}=2.} Here, a is √ 2 √ 2 , which (as proven by the theorem itself) is transcendental rather than algebraic. Similarly, if a = 3 and b = (log 2)/(log 3) , which is transcendental, then a b = 2 is algebraic. A characterization of the values for a and b which yield a transcendental a b is not known.
Kurt Mahler proved the p -adic analogue of the theorem: if a and b are in C p , the completion of the algebraic closure of Q p , and they are algebraic over Q , and if | a − 1 | p < 1 {\displaystyle |a-1|_{p}<1} and | b − 1 | p < 1 , {\displaystyle |b-1|_{p}<1,} then ( log p a ) / ( log p b ) {\displaystyle (\log _{p}a)/(\log _{p}b)} is either rational or transcendental, where log p is the p -adic logarithm function .
The transcendence of the following numbers follows immediately from the theorem:
The Gelfond–Schneider theorem answers affirmatively Hilbert's seventh problem . | https://en.wikipedia.org/wiki/Gelfond–Schneider_theorem |
GemIdent is an interactive image recognition program that identifies regions of interest in images and photographs. It is specifically designed for images with few colors, where the objects of interest look alike with small variation. For example, color image segmentation of:
GemIdent also packages data analysis tools to investigate spatial relationships among the objects identified.
GemIdent was developed at Stanford University by Adam Kapelner from June 2006 until January 2007 in the lab of Dr. Peter Lee under the tutelage of Professor Susan Holmes. [ 1 ] The concept was inspired by data from Kohrt et al. , [ 2 ] who analyzed immune profiles of lymph nodes in breast cancer patients. Hence, GemIdent works well when identifying cells in IHC -stained tissue imaged via automated light microscopy when the nuclear background stain and membrane/cytoplasmic stain are well-defined. In 2008, it was adapted to support multispectral imaging techniques. [ 3 ]
GemIdent uses supervised learning to perform automated
identification of regions of interest in the images. Therefore, the user must do a substantial amount of work first supplying the relevant colors, then pointing out examples of the objects or regions themselves as well as negatives ( training set creation).
When a user clicks on a pixel, many scores are generated using the surrounding color information via Mahalanobis Ring Score attribute generation (read the JSS paper for a detailed exposition). These scores are then used to build a random forest machine-learning classifier which will then classify pixels in any given image.
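GemIdent itself is written in Java (see below) and uses Mahalanobis Ring Scores; the Python sketch that follows is only a loose illustration of the general idea (ring-averaged colour features around user-clicked pixels fed to a random forest), and every name, number, and feature choice in it is an invented placeholder rather than GemIdent's actual implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ring_features(img, y, x, radii=(1, 2, 4, 8)):
    """Mean RGB colour in concentric rings around (y, x) -- a crude stand-in
    for GemIdent's Mahalanobis ring scores."""
    h, w, _ = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - y) ** 2 + (xx - x) ** 2)
    feats, inner = [], 0
    for r in radii:
        ring = (dist >= inner) & (dist < r)
        feats.extend(img[ring].mean(axis=0) if ring.any() else (0, 0, 0))
        inner = r
    return feats

# Training points "clicked" by the user: (row, col, label), label 1 = object,
# 0 = background; img is an H x W x 3 array. Both are placeholders here.
img = np.random.rand(64, 64, 3)
clicks = [(10, 12, 1), (30, 40, 0), (50, 8, 0), (22, 25, 1)]

X = [ring_features(img, r, c) for r, c, _ in clicks]
labels = [lab for _, _, lab in clicks]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# Classify an arbitrary pixel of an image of the same kind.
print(clf.predict([ring_features(img, 33, 33)]))
```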
After classification, there may be mistakes. The user can return to training and point out the specific mistakes and then reclassify. These training-classifying-retraining-reclassifying iterations (considered interactive boosting ) can result in a highly accurate segmentation.
In 2010, Setiadi et al. [ 4 ] analyzed histological sections of lymph nodes looking at spatial densities of B and T cells. "Cell numbers do not capture the full range of information encoded within tissues".
The Java source code is now open source under GPL2 . [ 5 ]
The raw photograph (left), a superimposed mask showing the pixel classification results (center), and finally the photograph is marked with the centroids of the object of interest - the oranges (right)
The raw microscopic image of a stained lymph node (left) from the Kohrt study, [ 2 ] a superimposed mask showing the pixel classification results (center), and finally the image is marked with the centroids of the object of interest - the cancer nuclei (right)
This example illustrates GemIdent's ability to find multiple phenotypes in the same image: the raw microscopic image of a stained lymph node (top left) from the Kohrt study, [ 2 ] a superimposed mask showing the pixel classification results (top right), and finally the image marked with the centroids of the objects of interest - the cancer nuclei (in green stars), the T-cells (in yellow stars), and non-specific background nuclei (in cyan stars).
The command-line data analysis and visualization interface in action analyzing results of a classification of a lymph node from the Kohrt study. [ 2 ] The histogram displays the distribution of distances from T-cells to neighboring cancer cells. The binary image of cancer membrane is the result of a pixel-only classification. The open PDF document is the autogenerated report of the analysis which includes a thumbnail view of the entire lymph node, counts and Type I error rates for all phenotypes , as well as a transcript of the analyses performed. | https://en.wikipedia.org/wiki/GemIdent |
In chemistry , the descriptor geminal (from Latin gemini ' twins ' [ 1 ] ) refers to the relationship between two atoms or functional groups that are attached to the same atom. A geminal diol , for example, is a diol (a molecule that has two alcohol functional groups) attached to the same carbon atom, as in methanediol . Also the shortened prefix gem may be applied to a chemical name to denote this relationship, as in a gem -dibromide for "geminal dibromide". [ citation needed ]
The concept is important in many branches of chemistry, including synthesis and spectroscopy , because functional groups attached to the same atom often behave differently from when they are separated. Geminal diols, for example, are easily converted to ketones or aldehydes with loss of water. [ 2 ]
The related term vicinal refers to the relationship between two functional groups that are attached to adjacent atoms. This relative arrangement of two functional groups can also be described by the descriptors α and β .
In 1 H NMR spectroscopy , the coupling of two hydrogen atoms on the same carbon atom is called a geminal coupling. It occurs only when two hydrogen atoms on a methylene group differ stereochemically from each other. The geminal coupling constant is referred to as 2 J since the hydrogen atoms couple through two bonds. Depending on the other substituents, the geminal coupling constant takes values between −23 and +42 Hz. [ 3 ] [ 4 ]
The following example shows the conversion of a cyclohexyl methyl ketone to a gem -dichloride through a reaction with phosphorus pentachloride. This gem -dichloride can then be used to synthesize an alkyne . | https://en.wikipedia.org/wiki/Geminal |
Gems of the Galaxy Zoos (Zoogems) was a gap-filler project which used the Hubble Space Telescope to take images of unusual objects found by volunteers classifying data from both Galaxy Zoo (GZ) and Radio Galaxy Zoo (RGZ). [ 1 ] [ 2 ] Between the HST's main observations there are short scheduling gaps, lasting approximately 12–25 minutes, during which objects within the current field of view can be imaged. [ 3 ] The Zoogems project sought to use those small observation gaps to image 300 candidates taken from the two Zoos in order to better study and comprehend them. [ 1 ] Starting observations in May 2018, HST Proposal 15445 had, by the end of September 2023, imaged 193 of the 300 candidates, many of them with exposures of nearly 11 minutes. [ 2 ]
GZ is an ongoing crowdsourced astronomy project which invites people to assist in the morphological classification of a large number of galaxies. [ 4 ] Initially, many of the objects that were imaged had been posted on the GZ Forum and Talk pages from Summer 2007 through various versions until 2017.
The project Radio Galaxy Zoo started in December 2013, seeking to locate supermassive black holes . [ 5 ] The science team wanted to identify black hole/jet pairs and associate them with their host galaxies. As a result of citizens' classifications, many unusual candidates visible in radio frequencies were flagged for further studies.
Through public analysis of more than 900,000 objects, volunteers collected a " menagerie of weird and wonderful galaxies" which few had seen before. [ 6 ] The original proposal estimated that there were 1100 targets available, yet only 300 observation slots, so the public were asked to vote for which targets should be in the final list. Voting took place in February 2018 in order to meet the proposal's deadline of 28 February. [ 1 ]
Project lead Dr. William Keel said in an interview on the University of Alabama site that Zoogems addressed a range of studies and that this happens rarely with galaxies. [ 3 ] He explained that after volunteers had sifted through the images of a million galaxies, they had found examples of oddities and rarities. Further, by using data from HST, these objects that would not normally merit an individual project, put together would form an interesting study. Whenever a 20-minute gap in the HST schedule appears, software will go to the list of objects and see which is closest. [ 3 ]
As with all HST gap-filler observations, the Wide-Field Camera mode of the Advanced Camera for Surveys is used for its larger field-of-view. [ 1 ] The total exposure time of 674 seconds is made by a pair of two 337 second exposures, the same for all the gap-filler observations. [ 1 ] Which of the following three filters is used depends on the target: i) the bluer F475W (roughly SDSS g) is used for mostly spiral structures, ii) the F814W for bulges and iii) the F625W which is closely matched with SDSS r filter. [ 1 ] A range of software is used to calculate where the target's image is captured on the available ACS CCDs, using a coordinate offset within a 'circle of interest' to find the most useful coverage. [ 1 ] A different strategy for Green Pea systems uses a choice of four filters allotted using distance values so as to study the continuum structure. [ 1 ]
Among the 300 Zoogems, there are 74 candidates that are Pea galaxies. [ 1 ] The first Zoogems study to be published in May 2021 was "An Old Stellar Population or Diffuse Nebular Continuum Emission Discovered in Green Pea Galaxies" which concentrated on 9 of them. [ 7 ] In this study, Leonardo Clarke et al. examine the content of PGs to find out about the different ages of the stars and find that while the central star-forming clusters were up to 500 million years old, there are stars, possibly the host galaxy stars, which are older and are thought to be more than 1 billion years old. [ 7 ]
Pea galaxies have been studied as they are the only population that has hydrogen-ionizing radiation escaping in large amounts. Because of this, they are seen as analogs of the galaxies that reionized the universe at the earliest times. [ 7 ] Yet the substantial presence of old stars would not have been possible at the earliest stages of the first galaxies. The mix of old and new stars within Pea galaxies could create different gravitational conditions which might influence galactic winds and element retention. [ 7 ] These conclusions imply that Pea galaxies are not real analogs of the galaxies responsible for the Epoch of Reionisation. [ 7 ]
The first study detailing objects from Radio Galaxy Zoo was published by the Astrophysical Journal in December 2022. [ 8 ] "An Elusive Population of Massive Disk Galaxies Hosting Double-lobed Radio-loud AGNs" seeks to answer whether the host galaxies of radio-loud Active Galactic Nuclei are solely ellipticals ("early-type"), or whether some are spirals ("late-type"). [ 8 ] Using images taken as part of Zoogems, they analyse a sample of radio galaxies which have extended double-lobed structures and see whether they can be associated with their disk-like optical objects. [ 8 ] They find 18 galaxies that can be identified as spiral that are likely to have genuine associations between the radio and optical counterparts. [ 8 ]
Zihao et al. use probability statistics to assess whether these are chance alignments or whether a host is too faint to be detected. This gave rise to two confidence divisions, 'high' and 'low', with 18 of the initial 32 galaxies having a high confidence and 14 a low confidence. [ 8 ] Because of the high-resolution Zoogems images and the visibility of disk-like structures, the team find that galaxy morphology can no longer be a unique signpost of a galaxy's ability to generate large-scale radio jets. [ 8 ]
In October 2023, the magazine Sky and Telescope featured an article entitled "Unearthing galactic gems". [ 9 ] In it, the science journalist Madison Goldberg summarises the project and talks to Tom Brown from the Space Telescope Science Institute about the process of gap-fillers. Spare Hubble time had been used before with the 45 minute "snapshot programs" but some unscheduled time remained. Brown said: "It just seemed like a waste to be throwing that time on the floor. Just a handful of minutes here and there, but still, it adds up." [ 9 ] And so, the gap-filler project started using those small gaps in the timetable to take 11 minute exposures.
Bill Keel, project lead scientist, explained that unusual galaxies can help us understand the universe today. He described the ZooGems category of 'overlapping galaxy pairs'. He said: "What’s unusual there is not the galaxies themselves, but the fact that one sits neatly behind the other in telescopic images." [ 9 ] Samantha Brunker, a scientist studying Green Pea galaxies , said that the variety of unusual targets included in ZooGems is special. "If you’re going to paint a whole picture, you can’t leave out the weird things." [ 9 ]
NGC 1175 , nicknamed the 'Peanut galaxy', is a barred spiral galaxy, approximately 252 million light years away. It has a peculiar morphology, with the inner regions being thicker in some places than in others, which causes a 'boxy' appearance reminding the astronomers of an unshelled peanut. [ 10 ]
NGC 2292 and NGC 2293 are two ellipticals , nicknamed the 'Greater Pumpkin', that have merged at about 120 million light years away. These interacting galaxies will eventually become a giant spiral, an event rare enough that there are only a few other examples in the Universe. [ 11 ] [ 12 ]
The VV-689 system, nicknamed the 'Angel Wing', is two galaxies merging. This interaction has left the resulting collision almost completely symmetrical (top of article). [ 13 ]
The HST image of CGCG 396-2 shows an uncommon multi-armed merger 520 million light years from earth. [ 14 ] [ 15 ]
Two spiral galaxies , SDSS J115331 and LEDA 2073461, over a billion light years away, appear to be colliding. The effect caused by line-of-sight is likely by chance as the two are not actually interacting (image right hand side). [ 6 ] [ 16 ] | https://en.wikipedia.org/wiki/Gems_of_the_Galaxy_Zoos |
Gemtuzumab ozogamicin , sold under the brand name Mylotarg , is an antibody-drug conjugate (a drug-linked monoclonal antibody ) that is used to treat acute myeloid leukemia (AML). [ 5 ] [ 7 ] [ 8 ]
The most common side effects include infection, febrile neutropenia, decreased appetite, hyperglycemia, mucositis, hypoxia, hemorrhage, increased transaminase, diarrhea, nausea, and hypotension. [ 9 ] However, the addition of gemtuzumab ozogamicin to standard chemotherapy regimens does not increase infection rates. [ 10 ]
In the United States, gemtuzumab ozogamicin is indicated for newly diagnosed CD33-positive acute myeloid leukemia (AML) for adults and children one month and older and for the treatment of relapsed or refractory CD33-positive AML in adults and children two years and older. [ 5 ] [ 9 ]
Gemtuzumab ozogamicin is a recombinant, humanized anti- CD33 monoclonal antibody (IgG4 κ antibody hP67.6) covalently attached to the cytotoxic antitumor antibiotic calicheamicin (N-acetyl-γ-calicheamicin) payload via a bifunctional linker (4-(4-acetylphenoxy)butanoic acid).
Calicheamicin (the payload ) is approximately 4,000 times more active than doxorubicin , and since it also destroys the DNA of normal, healthy cells, it cannot be used as a single agent to treat patients. However, by linking calicheamicin to a monoclonal antibody, scientists have optimized the features of both components, creating a class of targeted drugs called antibody-drug conjugates (ADC) or armed antibodies which selectively dispatch highly potent cytotoxic anticancer chemotherapies directly to cancer cells while, at the same time, leaving healthy tissue unaffected. [ 11 ]
CD33 is expressed in most leukemic blast cells but also in normal hematopoietic cells, the intensity diminishing with maturation of stem cells .
Gemtuzumab ozogamicin was created in a collaboration between Celltech and Wyeth that began in 1991. [ 12 ] [ 13 ] The same collaboration later produced inotuzumab ozogamicin . [ 14 ] Celltech was acquired by UCB in 2004 [ 15 ] and Wyeth was acquired by Pfizer in 2009. [ 16 ]
In the United States, gemtuzumab ozogamicin was approved under an accelerated-approval process by the FDA in 2000, for use in patients over the age of 60 with relapsed acute myelogenous leukemia (AML); or those who are not considered candidates for standard chemotherapy. [ 17 ] The accelerated approval was based on the surrogate endpoint of response rate . [ 18 ] It was the first antibody-drug conjugate to be approved. [ 19 ]
Within the first year after approval, the FDA required a black box warning be added to gemtuzumab packaging. The drug was noted to increase the risk of veno-occlusive disease in the absence of bone marrow transplantation . [ 20 ] Later the onset of VOD was shown to occur at increased frequency in gemtuzumab patients even following bone marrow transplantation. [ 21 ] The drug was discussed in a 2008 JAMA article, which criticized the inadequacy of postmarketing surveillance of biologic agents . [ 22 ]
A randomized Phase III comparative controlled trial (SWOG S0106) was initiated in 2004, by Wyeth in accordance with the FDA accelerated-approval process . The study was stopped on August 20, 2009, prior to completion due to worrisome outcomes. [ 23 ] Among the patients evaluated for early toxicity, fatal toxicity rate was significantly higher in the gemtuzumab combination therapy group vs the standard therapy group. Mortality was 5.7% with gemtuzumab and 1.4% without the agent (16/283 = 5.7% vs 4/281 = 1.4%; P = .01). [ 18 ]
In June 2010, Pfizer withdrew gemtuzumab ozogamicin from the market at the request of the US FDA. [ 24 ] [ 25 ] However, some other regulatory authorities did not agree with the FDA decision, with Japan's Pharmaceuticals and Medical Devices Agency stating in 2011 that the "risk-benefit balance of gemtuzumab ozogamicin has not changed from its state at the time of approval". [ 26 ]
In 2017, Pfizer reapplied for US and EU approval, based on a meta-analysis of prior trials and results of the ALFA-0701 clinical trial, an open-label Phase III trial in 280 older people with AML. [ 19 ] In September 2017, gemtuzumab ozogamicin was approved again for use in the United States [ 7 ] [ 27 ] and in the European Union. [ 6 ] | https://en.wikipedia.org/wiki/Gemtuzumab_ozogamicin |
The GenBank sequence database is an open access , annotated collection of all publicly available nucleotide sequences and their protein translations. It is produced and maintained by the National Center for Biotechnology Information (NCBI; a part of the National Institutes of Health in the United States ) as part of the International Nucleotide Sequence Database Collaboration (INSDC).
In October 2024, GenBank contained 34 trillion base pairs from over 4.7 billion nucleotide sequences and more than 580,000 formally described species . [ 2 ] [ 3 ]
The database was started in 1982 by Walter Goad and Los Alamos National Laboratory . GenBank has become an important database for research in biological fields and has grown in recent years at an exponential rate by doubling roughly every 18 months. [ 4 ] [ 5 ] [ 3 ]
GenBank is built by direct submissions from individual laboratories, as well as from bulk submissions from large-scale sequencing centers.
Only original sequences can be submitted to GenBank. Direct submissions are made to GenBank using BankIt, which is a Web-based form, or the stand-alone submission program, table2asn. Upon receipt of a sequence submission, the GenBank staff examines the originality of the data and assigns an accession number to the sequence and performs quality assurance checks. The submissions are then released to the public database, where the entries are retrievable by Entrez or downloadable by FTP . Bulk submissions of Expressed Sequence Tag (EST), Sequence-tagged site (STS), Genome Survey Sequence (GSS), and High-Throughput Genome Sequence (HTGS) data are most often submitted by large-scale sequencing centers. The GenBank direct submissions group also processes complete microbial genome sequences. [ 6 ] [ 7 ]
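Once released, records can also be retrieved programmatically through Entrez; for example, a minimal Biopython sketch might look like the following (the accession number and e-mail address below are placeholders only).

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"   # NCBI asks for a contact address

# Fetch a single nucleotide record in GenBank flat-file format by accession
# number ("NM_000546" is used here purely as a placeholder example).
handle = Entrez.efetch(db="nucleotide", id="NM_000546",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, record.description)
print(len(record.seq), "bp;", len(record.features), "features")
```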
Walter Goad of the Theoretical Biology and Biophysics Group at Los Alamos National Laboratory (LANL) and others established the Los Alamos Sequence Database in 1979, which culminated in 1982 with the creation of the public GenBank. [ 8 ] Funding was provided by the National Institutes of Health , the National Science Foundation , the Department of Energy , and the Department of Defense . LANL collaborated on GenBank with the firm Bolt, Beranek, and Newman , and by the end of 1983 more than 2,000 sequences were stored in it.
In the mid-1980s, the Intelligenetics bioinformatics company at Stanford University managed the GenBank project in collaboration with LANL. [ 9 ] As one of the earliest bioinformatics community projects on the Internet, the GenBank project started BIOSCI /Bionet news groups for promoting open access communications among bioscientists. During 1989 to 1992, the GenBank project transitioned to the newly created National Center for Biotechnology Information (NCBI) . [ 10 ]
The GenBank release notes for release 250.0 (June 2022) state that "from 1982 to the present, the number of bases in GenBank has doubled approximately every 18 months". [ 11 ] [ 12 ] As of 15 June 2022, GenBank release 250.0 has over 239 million loci and 1.39 trillion nucleotide bases, from 239 million reported sequences. [ 11 ]
The GenBank database includes additional data sets that are constructed mechanically from the main sequence data collection, and therefore are excluded from this count.
An analysis of Genbank and other services for the molecular identification of clinical blood culture isolates using 16S rRNA sequences [ 13 ] showed that such analyses were more discriminative when GenBank was combined with other services such as EzTaxon -e [ 14 ] and the BIBI [ 15 ] databases.
GenBank may contain sequences wrongly assigned to a particular species, because the initial identification of the organism was wrong. A recent study showed that 75% of mitochondrial Cytochrome c oxidase subunit I sequences were wrongly assigned to the fish Nemipterus mesoprion , resulting from continued usage of sequences of initially misidentified individuals. [ 16 ] The authors provide recommendations on how to avoid further distribution of publicly available sequences with incorrect scientific names.
Numerous published manuscripts have identified erroneous sequences on GenBank. [ 17 ] [ 18 ] [ 19 ] These are not only incorrect species assignments (which can have different causes) but also include chimeras and accession records with sequencing errors. A recent manuscript on the quality of all Cytochrome b records of birds further showed that 45% of the identified erroneous records lack a voucher specimen that prevents a reassessment of the species identification. [ 20 ]
Another problem is that sequence records are often submitted as anonymous sequences without species names (e.g. as " Pelomedusa sp. A CK-2014") because the species are either unknown or withheld for publication purposes. However, even after the species have been identified or published, these sequence records are not updated and thus may cause ongoing confusion. [ 21 ] | https://en.wikipedia.org/wiki/GenBank |
GenGIS [ 1 ] merges geographic, ecological and phylogenetic biodiversity data in a single interactive visualization and analysis environment. A key feature of GenGIS is the testing of geographic axes that can correspond to routes of migration or gradients that influence community similarity. [ 2 ] Data can also be explored using graphical summaries of data on a site-by-site basis, as 3D geophylogenies, or custom visualizations developed using a plugin framework. Standard statistical test such as linear regression and Mantel are provided, and the R statistical language can be accessed directly within GenGIS. Since its release, GenGIS has been used to investigate the phylogeography of viruses and bacteriophages, bacteria, and eukaryotes. | https://en.wikipedia.org/wiki/GenGIS |
GenMAPP contributors include Kristina Hanspers, Nathan Salomonis, Kam Dahlquist, Scott Doniger, Jeff Lawlor, Alex Zambon, Lynn Ferrante, Karen Vranizan, and Steven C. Lawlor.
GenMAPP (Gene Map Annotator and Pathway Profiler) is a free, open-source bioinformatics software tool designed to visualize and analyze genomic data in the context of pathways ( metabolic , signaling ), connecting gene-level datasets to biological processes and disease. [ 1 ] First created in 2000, GenMAPP is developed by an open-source team based in an academic research laboratory. GenMAPP maintains databases of gene identifiers and collections of pathway maps in addition to visualization and analysis tools. Together with other public resources, GenMAPP aims to provide the research community with tools to gain insight into biology through the integration of data types ranging from genes to proteins to pathways to disease.
GenMAPP was first created in 2000 as a prototype software tool in the laboratory of Bruce Conklin at the J. David Gladstone Institutes in San Francisco and continues to be developed in the same non-profit, academic research environment. The first release version of GenMAPP 1.0 was available in 2002, supporting analysis of DNA microarray data from human , mouse , rat and yeast . In 2004, GenMAPP 2.0 was released, combining the previously accessory programs MAPPFinder and MAPPBuilder, and expanding support to additional species. GenMAPP 2.1 was released in 2006 with new visualization features and support for a total of eleven species.
GenMAPP was developed by biologists and is focused on pathway visualization for bench biologists. Unlike many other computational systems biology tools, GenMAPP is not designed for cell/systems modeling; it focuses on the immediate needs of bench biologists by enabling them to rapidly interpret genomic data with an intuitive, easy-to-use interface.
GenMAPP is implemented in Visual Basic 6.0 and is available as a stand-alone application for Microsoft Windows operating systems, including Boot Camp or Parallels Workstation on a Mac.
GenMAPP builds and maintains gene databases for a variety of key model organisms :
GenMAPP provides tools to create, edit and annotate biological pathway maps.
GenMAPP allows users to visualize and analyze their data in the context of pathway collections and the Gene Ontology . | https://en.wikipedia.org/wiki/GenMAPP |
GenX is a Chemours trademark name for a synthetic, short-chain organofluorine chemical compound, the ammonium salt of hexafluoropropylene oxide dimer acid (HFPO-DA). It can also be used more informally to refer to the group of related fluorochemicals that are used to produce GenX. [ 1 ] [ 2 ] DuPont began the commercial development of GenX in 2009 as a replacement for perfluorooctanoic acid ( PFOA , also known as C8), in response to legal action due to the health effects and ecotoxicity of PFOA. [ 3 ] [ 4 ] [ 5 ]
Although GenX was designed to be less persistent in the environment compared to PFOA, its effects may be equally harmful or even more detrimental than those of the chemical it was meant to replace. [ 6 ] [ 7 ]
GenX is one of many synthetic organofluorine compounds collectively known as per- and polyfluoroalkyl substances (PFASs).
The chemicals are used in products such as food packaging , paints, cleaning products, non-stick coatings, outdoor fabrics and firefighting foam . [ 8 ] The chemicals are manufactured by Chemours , a corporate spin-off of DuPont , in Fayetteville, North Carolina . [ 9 ]
GenX chemicals are used as replacements for PFOA for manufacturing fluoropolymers such as Teflon . [ 2 ] [ 10 ] The GenX chemicals serve as surfactants and processing aids in the fluoropolymer production process to lower the surface tension, allowing the polymer particles to grow larger. The GenX chemicals are then removed from the final polymer by chemical treatment and heating. [ 11 ]
The manufacturing process combines two molecules of hexafluoropropylene oxide (HFPO) to form HFPO-DA. HFPO-DA is converted into its ammonium salt that is the official GenX compound. [ 3 ] [ 2 ]
The chemical process uses 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoic acid (FRD-903) to generate ammonium 2,3,3,3-tetrafluoro-2-(heptafluoropropoxy)propanoate (FRD-902) and heptafluoropropyl 1,2,2,2-tetrafluoroethyl ether (E1). [ 12 ]
When GenX contacts water, it releases the ammonium group to become HFPO-DA. Because HFPO-DA is a strong acid, it deprotonates into its conjugate base, which can then be detected in the water. [ 3 ]
In North Carolina , the Chemours Fayetteville plant released GenX compounds into the Cape Fear River , which is a drinking water source for the Wilmington area. A documentary film, The Devil We Know ; a fictional dramatization, Dark Waters ; and a nonfiction memoir, Exposure: Poisoned Water, Corporate Greed, and One Lawyer's Twenty-Year Battle Against DuPont by Robert Bilott , subsequently publicized the discharges, leading to controversy over possible health effects. [ 13 ]
HFPO-DA was first reported to be in the Cape Fear River in 2012 [ 14 ] and an additional eleven polyfluoroalkyl substances (PFAS) were reported 2014. [ 15 ] These results were published as a formal paper in 2015. [ 16 ] The following year, North Carolina State University and the EPA jointly published a study demonstrating HFPO-DA and other PFAS were present in the Wilmington-area drinking water sourced from the Cape Fear river. [ 17 ]
In September 2017, the North Carolina Department of Environmental Quality (NCDEQ) ordered Chemours to halt discharges of all fluorinated compounds into the river. Following a chemical spill one month later, NCDEQ cited Chemours for violating provisions in its National Pollutant Discharge Elimination System wastewater discharge permit. [ 18 ] In November 2017, the Brunswick County Government filed a federal lawsuit alleging that DuPont failed to disclose research regarding potential risks from the chemical. [ 19 ]
In spring 2018, Cape Fear River Watch [ 20 ] sued Chemours for Clean Water Act violations and sued the NCDEQ for inaction. [ 21 ] After Cape Fear River Watch's suits were filed, NCDEQ filed a suit against Chemours; all three lawsuits culminated in a consent order. [ 20 ] The order signed by all three parties requires Chemours to drastically reduce PFAS-containing water discharges and air emissions, as well as to provide sampling and filtration for well owners with contaminated wells, among other requirements. All materials relative to the status of consent order requirements must be published to a public website. One requirement under the order was for non-targeted analysis, which found 257 "unknown" PFAS being released from Fayetteville Works (aside from the 100 'known' PFAS which can be quantified). Cape Fear River Watch published [ 20 ] that their research of the NC DEQ permit file [ 23 ] indicates that the first PFAS byproducts were likely released from Fayetteville Works in 1976 with the production of Nafion , which uses HFPO (otherwise known as GenX) in production and creates byproducts termed Nafion Byproducts 1 through 5, some of which have been found in the blood of Cape Fear area residents. [ 24 ]
In 2020 Michigan adopted drinking water standards for 5 previously unregulated PFAS compounds including HFPO-DA which has a maximum contaminant level (MCL) of 370 parts per trillion (ppt). Two previously regulated PFAS compounds PFOA and PFOS had their acceptable limits lowered to 8 ppt and 16 ppt respectively. [ 25 ] [ 26 ]
In 2022 Virginia's Roanoke River had become contaminated by GenX at levels reported to be 1.3 million parts per trillion. [ 27 ]
GenX has been shown to cause a variety of adverse health effects. While it was originally marketed as a safer alternative to legacy PFAS, research suggests that GenX poses significant health risks similar to those associated with its predecessor. [ 28 ] [ 6 ] [ 7 ] [ 8 ] [ 29 ]
Studies have demonstrated that the liver is especially vulnerable to GenX exposure. [ 29 ] [ 30 ] [ 31 ] Animal research has shown that even low doses of GenX can cause liver enlargement and damage. [ 30 ] [ 32 ] Similarly, the kidneys are also sensitive to GenX, with chronic exposure leading to renal toxicity. [ 32 ] These effects highlight the potential dangers of prolonged exposure to even small amounts of the chemical. [ 33 ] [ 30 ]
There is increasing concern about the carcinogenic potential of GenX. Research in animal models has linked exposure to various cancers, including liver, pancreatic, and testicular cancers. [ 29 ] [ 30 ] [ 34 ] Although data on humans are limited, the results from these studies have prompted further investigation into the possible cancer risks posed by GenX. [ 30 ] [ 34 ]
Two 2023 studies have identified potential neurotoxic effects of GenX, particularly during critical developmental windows. [ 35 ] [ 36 ] Pre-differentiation exposure of human dopaminergic-like neurons (SH-SY5Y cells) to low-dose GenX (0.4 and 4 μg/L) resulted in persistent alterations in neuronal characteristics. The study reported significant changes in nuclear morphology, chromatin arrangement, and increased expression of the repressive marker H3K27me3, which is associated with neurodegeneration. [ 36 ]
These changes were accompanied by disruptions in mitochondrial function and an increase in intracellular calcium levels, which are critical markers of neuronal health. Notably, GenX exposure led to altered expression of α-synuclein, a protein closely linked to the development of Parkinson's disease. The findings suggest that developmental exposure to GenX may pose a long-term risk for neurodegenerative disorders, particularly Parkinson's disease, due to its impact on key neuronal processes. [ 36 ]
Recent research has also underscored the potential for GenX to disrupt glucose and lipid metabolism during critical developmental periods. A 2021 study published in Environment International investigated the effects of prenatal exposure to GenX in Sprague-Dawley rats, revealing significant maternal and neonatal adverse outcomes, such as increased maternal liver weight, altered lipid profiles, and reduced glycogen accumulation in neonatal livers, resulting in hypoglycemia. Additionally, neonatal mortality and lower birth weights were observed at higher doses of GenX . [ 37 ]
A 2024 study in Science of the Total Environment expanded upon these findings in mice, demonstrating that gestational exposure to GenX led to increased liver weight, elevated liver enzyme levels (e.g., ALT and AST), and decreased glycogen storage capacity in the liver. Disruptions in gut flora and the intestinal mucosal barrier were also noted, further linking GenX exposure to hepatotoxicity. [ 31 ]
Both studies revealed significant alterations in gene expression, particularly in pathways regulating glucose and lipid metabolism. Genes such as CYP4A14, Sult2a1, and Igfbp1 were upregulated, which may have long-term implications for metabolic health. These findings suggest that gestational GenX exposure could trigger metabolic disorders and liver toxicity, posing potential health risks for populations exposed to GenX through contaminated water sources . [ 31 ] [ 37 ]
Studies have demonstrated that exposure to GenX, a replacement for long-chain PFAS chemicals, can lead to complex health effects. GenX has been linked to alterations in immune responses and metabolic processes, as observed in both human and animal studies. For instance, in a study using Monodelphis domestica , GenX exposure upregulated genes associated with inflammation and fatty acid transport. [ 38 ] Another study on mice showed that GenX suppressed innate immune responses to inhaled carbon black nanoparticles, while simultaneously promoting lung cell proliferation, including macrophages and epithelial cells. [ 39 ] These findings suggest that GenX may have immunosuppressive effects, potentially increasing susceptibility to respiratory agents while encouraging cellular growth in the lungs, raising concerns about respiratory health risks.
This research highlights the potential health implications of GenX exposure, particularly its impact on immune system function and cell proliferation, which may contribute to both immune suppression and adverse health outcomes like inflammation or respiratory diseases. These findings raise concerns about the long-term impact on human health, especially in vulnerable populations. [ 33 ]
In June 2022 the U.S. Environmental Protection Agency (EPA) published drinking water health advisories , which are non-regulatory technical documents, for GenX and PFBS . [ 40 ] [ 41 ] The lifetime health advisories and health effects support documents assist federal, state, tribal, and local officials and managers of drinking water systems in protecting public health when these chemicals are present in drinking water. EPA also listed recommended steps that consumers may take to reduce possible exposure to GenX and other PFAS chemicals. [ 42 ]
In April 2024 EPA published final drinking water standards for GenX and five other PFAS compounds, pursuant to the Safe Drinking Water Act . The standards, enforced by EPA and state agencies, require all public water systems in the U.S. to monitor for GenX and treat their water, if necessary to meet the 10 ppt standard. [ 43 ] EPA also announced the availability of grant funds to assist small and disadvantaged communities in testing for and treating PFAS contamination in their water systems. [ 44 ] | https://en.wikipedia.org/wiki/GenX |
Genchi genbutsu ( 現地現物 ) literally translates as "real location, real thing" (meaning "the situation onsite") and is a key principle of the Toyota Production System . The principle is sometimes referred to as "go and see." It suggests that in order to truly understand a situation one needs to observe what is happening at the site where work actually takes place: the gemba (現場). One definition is that it is "collecting facts and data at the actual site of the work or problem." [ 1 ]
Taiichi Ohno , creator of the Toyota Production System , is credited, perhaps apocryphally, with taking new graduates to the shop floor and drawing a chalk circle on the floor. The graduate would be told to stand in the circle, observe and note what he saw. When Ohno returned, he would check; if the graduate had not seen enough he would be asked to keep observing. Ohno was trying to imprint upon his future engineers that the only way to truly understand what happens on the shop floor was to go there. It was where the value was added and waste could be observed.
Genchi genbutsu is, therefore, a key approach in problem-solving, saying that if the problem exists on the shop floor, then it needs to be understood and solved at the shop floor.
Genchi genbutsu is also called the Gemba attitude. Gemba is the Japanese term for "the place" (meaning "the place where it actually happens"). Since real value is created on the shop floor in manufacturing, the philosophy holds that this is where managers need to spend their time.
Genchi genbutsu is sometimes referred to as "Getcha boots on" (and go out and see what is happening) in English due to its similar cadence and meaning. It has been compared to Peters and Waterman's idea of " Management By Wandering Around ". [ 2 ] This concept quickly became so universal that new managers instinctively knew that they had to "walk around" to achieve high effectiveness levels. Whilst these ideas, with their associated lists of how-tos, are probably good, they may miss the essential nature of Genchi Genbutsu, which is less to "visit" and more to "know" by being there. Toyota maintains a high level of management presence on the production line, whose role is to "know" and to constantly improve. | https://en.wikipedia.org/wiki/Genchi_Genbutsu
Gender digital divide refers to the inequalities in access to, use of, and participation in digital technologies and the technology sector based on gender. [ 1 ] [ 2 ] It encompasses disparities in digital skills, internet access, representation in computing and STEM fields, and exposure to gender-biased technologies such as artificial intelligence and voice assistants. [ 3 ] The divide is shaped by broader socio-economic, cultural, and educational factors and is more pronounced among women and gender minorities in developing countries, rural areas, and lower-income populations. Despite global efforts to close this gap, significant challenges remain, including patriarchal norms, safety concerns, affordability issues, and limited access to digital education. [ 4 ] Addressing the gender digital divide is considered essential for achieving broader gender equality, inclusive economic development, and equitable digital transformation.
Education systems are increasingly trying to ensure equitable, inclusive, and high-quality digital skills, education, and training. Though digital skills open pathways to further learning and skills development, women and girls are still being left behind in digital skills education. Globally, digital skills gender gaps are growing, despite at least a decade of national and international efforts to close them. [ 1 ] The economic and political interests behind the indicators used to measure these gaps have also been questioned. [ 5 ]
Worldwide, women are less likely than men to know how to operate a smartphone , navigate the internet , use social media and safeguard information in digital media (abilities that underlie many life and work tasks and are relevant to people of all ages). The gap extends from the lowest skill proficiency levels, such as using apps on a mobile phone, to the most advanced skills, like coding computer software to support the analysis of large data sets. [ 1 ]
Women in numerous countries are 25% less likely than men to know how to leverage ICT for basic purposes, such as using simple arithmetic formulas in a spreadsheet. [ 6 ] UNESCO estimates that men are around four times more likely than women to have advanced ICT skills such as the ability to programme computers. [ 1 ] Across G20 countries, 7% of ICT patents are generated by women, [ 7 ] and the global average is 2%. [ 8 ] Recruiters for technology companies in Silicon Valley estimate that the applicant pool for technical jobs in artificial intelligence (AI) and data science is often less than 1% female. [ 9 ] To highlight this difference, in 2009 there were 2.5 million college-educated women working in STEM compared to 6.7 million men, even though the total workforce at the time was 49% women and 51% men. [ 3 ]
While the gender gap in digital skills is evident across regional boundaries and income levels, it is more severe for women who are older, less educated, poor, or living in rural areas and developing countries, and such women are much less likely to graduate in any field of STEM than their male counterparts. [ 10 ] The digital skills gap thus intersects with issues of poverty and educational access. [ 1 ]
Women and girls who live in patriarchal cultures may struggle more than others to access public ICT facilities, because the social constraints these cultures impose reinforce and compound one another. They may struggle to access such facilities due to unsafe roads, limits on their freedom of movement, or because the facilities themselves are considered unsuitable for women. They may also lack the financial freedom to purchase any form of technology or pay for an internet connection. [ 1 ] If they do have access to technology or the internet, it is often controlled by the men in their households, who may limit their use to content focused on women's appearances, dating, or the role of motherhood. [ 1 ] Fears concerning safety and harassment (both online and offline) also inhibit many women and girls from benefiting from or even wanting to use ICTs. [ 11 ]
In many contexts, women and girls face concerns of physical violence if they own or borrow digital devices, which in some cases leads to their using the devices in secret, making them more vulnerable to online threats and making it difficult to gain digital skills. [ 12 ]
The stereotype of technology as a male domain is common in many contexts and affects girls' confidence in their digital skills from a young age. In OECD countries, 0.5% of girls aspire towards ICT-related careers at age 15, versus 5% of boys. [ 13 ] This was not always the case. The early decades of computing saw a much larger presence of women: acting as programmers during World War II , they held highly valued positions. [ 14 ] Women's contributions, however, have been largely obscured by how the history is told; accounts that focus on the infrastructure and hardware of digital technologies have placed men at the forefront. [ 15 ] Postwar computer manufacturers sought to commercialize the machines and opened up a new labor market, but this market applied discriminatory criteria that women were often unable to meet because of societal, educational, and labor expectations. [ 14 ] Managers of early technology firms considered women well-suited for programming because of stereotypes characterizing them as meticulous and good at following step-by-step directions. Women, including many women of color, flocked to jobs in the computer industry because it was seen as more meritocratic than other fields. [ 16 ] As computers became integrated into people's daily lives and the influence of programmers became apparent, women were pushed out and the field became more male-dominated. [ 1 ]
In developed countries like Canada, the digital divide can persist because of gaps in digital literacy, which prevent individuals from understanding how to use technology and what to do with it. [ citation needed ] Other research on the gender divide in Canada has found contrasting results, suggesting that the gap in access to the internet and to technology in general may be closing in more developed countries, although men remain more active online than women. [ 17 ] In the professional sphere, the IT sector in Canada remains male-dominated; the presence of women in technology-related fields has increased significantly, but in specific high-paying fields such as computer science it is declining. [ 18 ]
Due to the declining price of connectivity and hardware, skills deficits have overtaken access barriers as the primary contributor to the gender digital divide. For years, the divide was assumed to be symptomatic of technical challenges: it was thought that women would catch up with men once devices were cheaper and connectivity prices lower, given the limited purchasing power and financial independence of women compared with men in countries with a patriarchal culture. [ 1 ] The cost of ICT access remains an issue but is now outweighed by educational gaps. For example, the gender gap in internet penetration is around 17% in the Arab States and the Asia and Pacific region, [ 19 ] whereas the gender gap in ICT skills is as high as 25% in some Asian and Middle Eastern countries. [ 20 ] In sub-Saharan Africa (SSA), the Internet penetration rate in 2019 was 33.8 percent for men and 22.6 percent for women, [ 23 ] [ 22 ] and the Internet user gender gap grew from 20.7 percent in 2013 to 37 percent in 2019. [ 21 ] [ 22 ]
Other research has identified additional factors that shape internet access. In the United States, individuals with less than a high school education who earned under $30k a year were found to have the lowest access to the internet; the most consistent result across studies is that those with the lowest education and lowest income have the least access. [ 17 ] Findings on gender differences were inconsistent: where large divides between men's and women's access were observed, socioeconomic factors were the cause. Overall, the gender divide in access has been found to be largely insignificant in countries like the United States and Canada. [ 17 ]
SSA has one of the widest mobile gender gaps in the world where over 74 million women are not connected. [ 22 ] The gender gap in mobile ownership was 13 percent, a reduction from 14 percent in 2018; however, in low- and middle-income countries it remains substantial with fewer women than men accessing the Internet on a mobile device. [ 22 ] Furthermore, women are less likely to use digital services or mobile Internet and tend to use different mobile services than men. [ 24 ] [ 22 ]
Many people have access to affordable devices and broadband networks , but do not have the requisite skills to take advantage of this technology to improve their lives. [ 1 ] In Brazil , lack of skills (rather than cost of access) was found to be the primary reason low-income groups are not using the internet. [ 25 ] In India , lack of skills and lack of perceived need for the internet were the primary limiting factors across all income groups.
Lack of understanding, interest or time is a bigger issue than affordability or availability as the reason for not using the internet. [ 1 ] Even though skills deficits prevent both men and women from using digital technologies, they tend to be more severe for women. In a study conducted across 10 low- and middle-income countries, women were 1.6 times more likely than men to report lack of skills as a barrier to internet use. [ 26 ] Women are also more likely to report that they do not see a reason to access and use ICT. [ 27 ] Interest and perception of need are related to skills, as people who have little experience with or understanding of ICTs tend to underestimate their benefits and utility. [ 1 ]
In many societies, gender equality does not translate into digital realms and professions. The persistence of growing digital skills gender gaps, even in countries that rank at the top of the World Economic Forum's global gender gap index (reflecting strong gender equality), demonstrates a need for interventions that cultivate the digital skills of women and girls. [ 1 ]
For most countries, the primary barriers for women regarding access to digital technology are cost/unaffordability followed by illiteracy and lack of digital skills. For instance, in Africa 65.4 percent of people aged 15 and older are literate, compared to the global average literacy rate of 86.4 percent. [ 28 ] [ 22 ]
The COVID-19 pandemic and the measures taken by governments on social distancing and mobility restrictions have contributed to boosting the use of digital technology to bridge some of the physical access gaps. [ 22 ] However, the rapid proliferation of digital tools and services stands in stark contrast to the many systemic and structural barriers to technology access and adoption that many people in rural Africa still face. [ 22 ] Gender inequalities, intersecting with and compounded by other social differences such as class, race, age, (dis)ability, etc., shape the extent to which different rural women and men are able not only to access but also use and benefit from these new technologies and ways of delivering information and services. [ 22 ]
Besides demonstrating the potential of digital tools and applications, the COVID-19 crisis has exposed the existing digital divide, and especially the gender gap. [ 22 ] It is estimated that 3.6 billion individuals are not connected to the Internet across the globe, including 900 million in Africa. [ 22 ] Only 27 percent of women in Africa have access to the Internet and only 15 percent of them can afford to use it. [ 29 ] [ 22 ]
According to a study by FAO, gender-responsive digitalization in COVID-19 response and beyond could include: [ 22 ]
Helping women and girls develop digital skills means stronger women, stronger families, stronger communities, stronger economies and better technology. [ 1 ] Digital skills are recognized to be essential life skills required for full participation in society . The main benefits of acquiring digital skills are that they: [ 1 ]
Digitalization can potentially pave the way for improving the efficiency and functioning of food systems, which in turn can have positive impacts on the livelihoods of women and men farmers and agripreneurs, for example, through the creation of digital job opportunities for young women and men in rural areas. [ 22 ]
The digital divide now begins at earlier ages, as young adults have lived out their childhoods with personal computers, making intervention in early education necessary to prevent further gender divides in the digital realm. [ 32 ] Increasing girls' and women's digital skills involves early, varied and sustained exposure to digital technologies. [ 33 ] Interventions should not be limited to formal education settings; they should reflect a multifaceted approach, enabling women and girls to acquire skills in a variety of formal and informal contexts (at home, in school , in their communities and in the workplace ). [ 1 ] The digital divide cuts across age groups, so solutions need to assume a lifelong learning orientation. The pace of technological change adds impetus to this 'across life' perspective, as skills learned today will not necessarily be relevant in 5 or 10 years. Digital skills require regular updating to prevent women and girls from falling further behind. [ 1 ]
Women's and girls' digital skills development is strengthened by: [ 1 ]
According to the Food and Agriculture Organization (FAO), there are seven success factors to empowering rural women through ICTs: [ 22 ]
The regulatory role of governments (at local, national, regional, and international levels) is crucial in addressing infrastructural barriers, harmonizing and making the regulatory environment inclusive and gender-responsive, and in protecting all stakeholders from fraud and crime. [ 22 ]
Initiatives targeted at boosting women's representation in the technology industry are essential to closing the digital skills gender divide. Mentorship programs , networking chances , and scholarships for women seeking jobs in technology are examples of such initiatives. These efforts can help create more inclusive workplaces that respect diversity and promote creativity by boosting the presence of women in the technology industry. [ 34 ]
Men continue to dominate the technology space, and the disparity serves to perpetuate gender inequalities, as unrecognized bias is replicated and built into algorithms and artificial intelligence (AI) . [ 1 ]
Limited participation of women and girls in the technology sector can ripple outward, replicating existing gender biases and creating new ones. Women's participation in the technology sector is constrained by unequal digital skills education and training. Learning and confidence gaps that arise as early as primary school amplify as girls move through education; therefore, by the time they reach higher education only a fraction pursue advanced-level studies in computer science and related information and communication technology (ICT) fields. [ 33 ] Divides grow greater in the transition from education to work. The International Telecommunication Union (ITU) estimates that only 6% of professional software developers are women. [ 38 ]
Technologies generated by male-dominated teams and companies often reflect gender biases. Establishing balance between men and women in the technology sector will help lay foundations for the creation of technology products that better reflect and ultimately accommodate the rich diversity of human societies. [ 1 ] One example is AI, a branch of the technology sector that wields growing influence over people's lives. [ 1 ] Today, AI curates information shown by internet search engines , determines medical treatments, makes loan decisions, ranks job applications, translates languages, places ads, recommends prison sentences, influences parole decisions, calibrates lobbying and campaigning efforts, intuits tastes and preferences, and decides who qualifies for insurance, among other tasks. Despite the growing influence of this technology, women make up just 12% of AI researchers. [ 38 ] Closing the gender divide begins with establishing more inclusive and gender-equal digital skills education and training. [ 1 ]
Digital assistants encompass a range of internet-connected technologies that support users in various ways. When interacting with digital assistants, users are not restricted to a narrow range of input commands, but are encouraged to make queries using whichever inputs seem most appropriate or natural, whether they are typed or spoken. Digital assistants seek to enable and sustain more human-like interactions with technology. Digital assistants can include: voice assistants, chatbots , and virtual agents. [ 1 ]
Voice assistants have become central to technology platforms and, in many countries, to day-to-day life. Between 2008 and 2018, the frequency of voice-based internet search queries increased 35-fold; such queries account for close to one fifth of mobile internet searches (a figure that was projected to increase to 50% by 2020). [ 39 ] Voice assistants now manage upwards of 1 billion tasks per month, from the mundane (changing a song) to the essential (contacting emergency services). [ 40 ]
Today, most leading voice assistants are exclusively female or female by default, both in name and in sound of voice. Amazon has Alexa (named for the ancient library in Alexandria), [ 41 ] Microsoft has Cortana (named for a synthetic intelligence in the video game Halo that projects itself as a sensuous unclothed woman), [ 42 ] and Apple has Siri (coined by the Norwegian co-creator of the iPhone 4S and meaning 'beautiful woman who leads you to victory' in Norse). [ 43 ] While Google's voice assistant is simply Google Assistant and sometimes referred to as Google Home, its voice is female.
The trend to feminize assistants occurs in a context in which there is a growing gender imbalance in technology companies, such that men commonly represent two thirds to three quarters of a firm's total workforce. [ 19 ] Companies like Amazon and Apple have cited academic work demonstrating that people prefer a female voice to a male voice, justifying the decision to make voice assistants female. Further research shows that consumers strongly dislike voice assistants without clear gender markers. [ 44 ] Gender bias is thus "hard-coded" into technology. Companies often cite research showing that customers want their digital assistants to sound like women, justifying the choice with the profit motive . [ 1 ] However, research on the topic is mixed, with studies showing that in some contexts male choices may be preferred. [ 1 ] For example, BMW was forced to recall a female-voiced navigation system on its 5 Series cars in the late 1990s after being flooded with calls from German men who reportedly " refused to take directions from a woman ". [ 45 ]
Researchers who specialize in human–computer interaction have recognized that both men and women tend to characterize female voices as more helpful. The perception may have roots in traditional social norms around women as nurturers (mothers often take on – willingly or not – significantly more care than fathers) and other socially constructed gender biases that predate the digital era. [ 1 ]
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 ( license statement/permission ). Text taken from Gender-responsive digitalization: A critical component of the COVID-19 response in Africa, FAO, FAO.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO. Text taken from I'd blush if I could: closing gender divides in digital skills through education , UNESCO, EQUALS Skills Coalition, UNESCO. UNESCO. | https://en.wikipedia.org/wiki/Gender_digital_divide |
In gene-activated matrix technology (GAM), cytokines and growth factors are delivered not as recombinant proteins but as plasmid genes. [ 1 ] GAM is one of the tissue engineering approaches to wound healing. Following gene delivery , the recombinant cytokine can be expressed in situ by endogenous wound healing cells – in small amounts but for a prolonged period of time – leading to reproducible tissue regeneration. The matrix can be modified by incorporating a viral vector, mRNA or DNA bound to a delivery system, or a naked plasmid. [ 2 ]
This article about biological engineering is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Gene-activated_matrix |
The gene-centered view of evolution , gene's eye view , gene selection theory , or selfish gene theory holds that adaptive evolution occurs through the differential survival of competing genes , increasing the allele frequency of those alleles whose phenotypic trait effects successfully promote their own propagation. [ 1 ] [ 2 ] [ 3 ] The proponents of this viewpoint argue that, since heritable information is passed from generation to generation almost exclusively by DNA , natural selection and evolution are best considered from the perspective of genes.
Proponents of the gene-centered viewpoint argue that it permits understanding of diverse phenomena such as altruism and intragenomic conflict that are otherwise difficult to explain from an organism-centered viewpoint. [ 4 ] [ 5 ]
The gene-centered view of evolution is a synthesis of the theory of evolution by natural selection, the particulate inheritance theory , and the rejection of transmission of acquired characters . [ 6 ] [ 7 ] It states that those alleles whose phenotypic effects successfully promote their own propagation will be favorably selected relative to their competitor alleles within the population. This process produces adaptations for the benefit of alleles that promote the reproductive success of the organism , or of other organisms containing the same allele ( kin altruism and green-beard effects ), or even its own propagation relative to the other genes within the same organism ( selfish genes and intragenomic conflict).
The gene-centered view of evolution is a model for the evolution of social characteristics such as selfishness and altruism, with gene defined as "not just one single physical bit of DNA [but] all replicas of a particular bit of DNA distributed throughout the world". [ verify ]
The formulation of the central dogma of molecular biology was summarized by Maynard Smith :
If the central dogma is true, and if it is also true that nucleic acids are the only means whereby information is transmitted between generations, this has crucial implications for evolution. It would imply that all evolutionary novelty requires changes in nucleic acids, and that these changes – mutations – are essentially accidental and non-adaptive in nature. Changes elsewhere – in the egg cytoplasm, in materials transmitted through the placenta, in the mother's milk – might alter the development of the child, but, unless the changes were in nucleic acids, they would have no long-term evolutionary effects.
The rejection of the inheritance of acquired characters, together with the work of the statistician Ronald Fisher , who gave the subject a mathematical footing and showed in his 1930 book The Genetical Theory of Natural Selection how Mendelian genetics was compatible with natural selection, [ 9 ] and the contributions of J. B. S. Haldane and Sewall Wright , paved the way for the formulation of the selfish-gene theory. For cases where environment can influence heredity, see epigenetics . [ clarification needed ]
The view of the gene as the unit of selection was developed mainly in the works of Richard Dawkins , [ 10 ] [ 11 ] W. D. Hamilton , [ 12 ] [ 13 ] [ 14 ] Colin Pittendrigh [ 15 ] and George C. Williams . [ 16 ] It was popularized by Dawkins in his book The Selfish Gene (1976). [ 1 ]
According to Williams' 1966 book Adaptation and Natural Selection ,
[t]he essence of the genetical theory of natural selection is a statistical bias in the relative rates of survival of alternatives (genes, individuals, etc.). The effectiveness of such bias in producing adaptation is contingent on the maintenance of certain quantitative relationships among the operative factors. One necessary condition is that the selected entity must have a high degree of permanence and a low rate of endogenous change, relative to the degree of bias (differences in selection coefficients).
Williams argued that "[t]he natural selection of phenotypes cannot in itself produce cumulative change, because phenotypes are extremely temporary manifestations." Each phenotype is the unique product of the interaction between genome and environment. It does not matter how fit and fertile a phenotype is, it will eventually be destroyed and will never be duplicated.
Since 1954, it has been known that DNA is the main physical substrate of genetic information, and that it is capable of high-fidelity replication through many generations. So, a particular gene coded in a nucleobase sequence of a lineage of replicated DNA molecules can have a high permanence and a low rate of endogenous change. [ 17 ]
In normal sexual reproduction, an entire genome is the unique combination of father's and mother's chromosomes produced at the moment of fertilization. It is generally destroyed with its organism, because " meiosis and recombination destroy genotypes as surely as death." [ 16 ] Only half of it is transmitted to each descendant due to independent segregation .
And the high prevalence of horizontal gene transfer in bacteria and archaea means that genomic combinations of these asexually reproducing groups are also transient in evolutionary time: "The traditional view, that prokaryotic evolution can be understood primarily in terms of clonal divergence and periodic selection, must be augmented to embrace gene exchange as a creative force." [ 18 ] [ 19 ]
The gene as an informational entity persists for an evolutionarily significant span of time through a lineage of many physical copies. [ 2 ] [ 20 ]
In his book River out of Eden , Dawkins coins the phrase God's utility function to explain his view on genes as units of selection. He uses this phrase as a synonym of the " meaning of life " or the "purpose of life". By rephrasing the word purpose in terms of what economists call a utility function , meaning "that which is maximized", Dawkins attempts to reverse-engineer the purpose in the mind of the Divine Engineer of Nature, or the utility function of god . Finally, Dawkins argues that it is a mistake to assume that an ecosystem or a species as a whole exists for a purpose. [ 21 ] [ note 1 ] He writes that it is incorrect to suppose that individual organisms lead a meaningful life either; in nature, only genes have a utility function – to perpetuate their own existence with indifference to great sufferings inflicted upon the organisms they build, exploit and discard. [ note 1 ]
Genes are usually packed together inside a genome, which is itself contained inside an organism. Genes group together into genomes because "genetic replication makes use of energy and substrates that are supplied by the metabolic economy in much greater quantities than would be possible without a genetic division of labour." [ 23 ] They build vehicles to promote their mutual interests of jumping into the next generation of vehicles. As Dawkins puts it, organisms are the " survival machines " of genes. [ 1 ]
The phenotypic effect of a particular gene is contingent on its environment, including the fellow genes constituting with it the total genome. A gene never has a fixed effect, so how is it possible to speak of a gene for long legs? It is because of the phenotypic differences between alleles. One may say that one allele, all other things being equal or varying within certain limits, causes longer legs than its alternative. This difference enables the scrutiny of natural selection.
"A gene can have multiple phenotypic effects, each of which may be of positive, negative or neutral value. It is the net selective value of a gene's phenotypic effect that determines the fate of the gene." [ 24 ] For instance, a gene can cause its bearer to have greater reproductive success at a young age, but also cause a greater likelihood of death at a later age. If the benefit outweighs the harm, averaged out over the individuals and environments in which the gene happens to occur, then phenotypes containing the gene will generally be positively selected and thus the abundance of that gene in the population will increase.
Even so, it becomes necessary to model the genes in combination with their vehicle as well as in combination with the vehicle's environment.
The selfish-gene theory of natural selection can be restated as follows: [ 24 ]
Genes do not present themselves naked to the scrutiny of natural selection, instead they present their phenotypic effects. [...] Differences in genes give rise to differences in these phenotypic effects. Natural selection acts on the phenotypic differences and thereby on genes. Thus genes come to be represented in successive generations in proportion to the selective value of their phenotypic effects.
The result is that "the prevalent genes in a sexual population must be those that, as a mean condition, through a large number of genotypes in a large number of situations, have had the most favourable phenotypic effects for their own replication." [ 25 ] In other words, we expect selfish genes ("selfish" meaning that it promotes its own survival without necessarily promoting the survival of the organism, group or even species). This theory implies that adaptations are the phenotypic effects of genes to maximize their representation in future generations. [ note 1 ] An adaptation is maintained by selection if it promotes genetic survival directly, or else some subordinate goal that ultimately contributes to successful reproduction.
The gene is a unit of hereditary information that exists in many physical copies in the world, and which particular physical copy will be replicated and originate new copies does not matter from the gene's point of view. [ 20 ] A selfish gene could be favored by selection by producing altruism among organisms containing it. The idea is summarized as follows:
If a gene copy confers a benefit B on another vehicle at cost C to its own vehicle, its costly action is strategically beneficial if pB > C , where p is the probability that a copy of the gene is present in the vehicle that benefits. Actions with substantial costs therefore require significant values of p . Two kinds of factors ensure high values of p : relatedness (kinship) and recognition (green beards).
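The criterion above can be checked with simple arithmetic. The following sketch is illustrative only (the probabilities and payoffs are invented); it evaluates whether a costly act is strategically beneficial for a gene under the pB > C rule.

```python
def act_is_favored(p: float, benefit: float, cost: float) -> bool:
    """Return True if a costly act satisfies the gene-level criterion p*B > C.

    p       -- probability that the beneficiary carries a copy of the gene
    benefit -- fitness benefit B conferred on the other vehicle
    cost    -- fitness cost C paid by the actor's own vehicle
    """
    return p * benefit > cost

# Illustrative values only: helping a full sibling (p = 1/2) versus a cousin (p = 1/8)
print(act_is_favored(p=0.5,   benefit=3.0, cost=1.0))   # True:  1.5 > 1.0
print(act_is_favored(p=0.125, benefit=3.0, cost=1.0))   # False: 0.375 < 1.0
```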
A gene in a somatic cell of an individual may forgo replication to promote the transmission of its copies in the germ line cells. [ note 1 ] This ensures a value of p = 1, due to the cells' constant contact and their common origin from the zygote .
The kin selection theory predicts that a gene may promote the recognition of kinship by historical continuity: a mammalian mother learns to identify her own offspring in the act of giving birth; a male preferentially directs resources to the offspring of mothers with whom he has copulated; the other chicks in a nest are siblings; and so on. The expected altruism between kin is calibrated by the value of p , also known as the coefficient of relatedness . For instance, an individual has a p = 1/2 in relation to his brother, and p = 1/8 to his cousin, so we would expect, ceteris paribus , greater altruism among brothers than among cousins. In this vein, geneticist J. B. S. Haldane famously joked, "Would I lay down my life to save my brother? No, but I would to save two brothers or eight cousins." [ 26 ] However, examining the human propensity for altruism, kin selection theory seems incapable of explaining cross-familiar, cross-racial and even cross-species acts of kindness, to which Richard Dawkins wrote:
Lay critics frequently bring up some apparently maladaptive feature of modern human behaviour—adoption, say, or contraception [...] The question, about the adaptive significance of behaviour in an artificial world, should never have been put [...] A useful analogy here is one that I heard from R. D. Alexander. Moths fly into candle flames, and this does nothing to help their inclusive fitness [...] We asked ‘Why do moths fly into candle flames?’ and were puzzled. If we had characterized the behaviour differently and asked ‘Why do moths maintain a fixed angle to light rays (a habit which incidentally causes them to spiral into the light source if the rays happen not to be parallel)?’, we should not have been so puzzled.
Green-beard effects gained their name from a thought-experiment first presented by Bill Hamilton [ 27 ] and then popularized and given its current name by Richard Dawkins who considered the possibility of a gene that caused its possessors to develop a green beard and to be nice to other green-bearded individuals. Since then, "green-beard effect" has come to refer to forms of genetic self-recognition in which a gene in one individual might direct benefits to other individuals that possess the gene. [ note 1 ] Such genes would be especially selfish , benefiting themselves regardless of the fates of their vehicles. Since then, green-beard genes have been discovered in nature, such as Gp-9 in fire ants ( Solenopsis invicta ), [ 28 ] [ 29 ] csA in social amoeba ( Dictyostelium discoideum ), [ 30 ] and FLO1 in budding yeast ( Saccharomyces cerevisiae ). [ 31 ]
As genes are capable of producing individual altruism, they are capable of producing conflict among genes inside the genome of one individual. This phenomenon is called intragenomic conflict and arises when one gene promotes its own replication to the detriment of other genes in the genome. The classic example is segregation distorter genes that cheat during meiosis or gametogenesis and end up in more than half of the functional gametes . These genes can persist in a population even when their transmission results in reduced fertility . Egbert Leigh compared the genome to "a parliament of genes: each acts in its own self-interest, but if its acts hurt the others, they will combine together to suppress it" to explain the relatively low occurrence of intragenomic conflict. [ 32 ] [ note 1 ]
The Price equation is a covariance equation that is a mathematical description of evolution and natural selection. The Price equation was derived by George R. Price , working to rederive W. D. Hamilton's work on kin selection.
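For reference, the standard form of the Price equation as it is usually written in the population-genetics literature (a general statement, not a quotation from the sources cited here) expresses the change in the population mean of a trait in terms of a covariance term and an expectation term:

```latex
% Standard form of the Price equation.
% w_i = fitness of entity i, z_i = its trait value,
% \bar{w}, \bar{z} = population means, \Delta z_i = change in z within lineage i.
\Delta \bar{z} \;=\; \frac{\operatorname{cov}(w_i, z_i)}{\bar{w}}
               \;+\; \frac{\operatorname{E}\!\left(w_i \, \Delta z_i\right)}{\bar{w}}
```

The covariance term captures the effect of selection on the trait, while the expectation term captures transmission bias within lineages.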
Besides Richard Dawkins and George C. Williams, other biologists and philosophers have expanded and refined the selfish-gene theory, such as John Maynard Smith , George R. Price, Robert Trivers , David Haig , Helena Cronin , David Hull , Philip Kitcher , and Daniel C. Dennett .
The gene-centric view has been opposed by Ernst Mayr , Stephen Jay Gould , David Sloan Wilson , and philosopher Elliott Sober . An alternative, multilevel selection (MLS), has been advocated by E. O. Wilson , David Sloan Wilson, Sober, Richard E. Michod, [ 33 ] and Samir Okasha . [ 33 ]
Writing in the New York Review of Books , Gould has characterized the gene-centered perspective as confusing book-keeping with causality . Gould views selection as working on many levels, and has called attention to a hierarchical perspective of selection. Gould also called the claims of Selfish Gene "strict adaptationism ", "ultra-Darwinism", and "Darwinian fundamentalism ", describing them as excessively " reductionist ". He saw the theory as leading to a simplistic "algorithmic" theory of evolution, or even to the re-introduction of a teleological principle . [ 34 ] Mayr went so far as to say "Dawkins' basic theory of the gene being the object of evolution is totally non-Darwinian." [ 35 ]
Gould also addressed the issue of selfish genes in his essay "Caring groups and selfish genes". [ 36 ] Gould acknowledged that Dawkins was not imputing conscious action to genes, but simply using a shorthand metaphor commonly found in evolutionary writings. To Gould, the fatal flaw was that "no matter how much power Dawkins wishes to assign to genes, there is one thing that he cannot give them – direct visibility to natural selection." [ 36 ] Rather, the unit of selection is the phenotype, not the genotype, because it is phenotypes that interact with the environment at the natural-selection interface. So, in Kim Sterelny 's summation of Gould's view, "gene differences do not cause evolutionary changes in populations, they register those changes." [ 37 ] Richard Dawkins replied to this criticism in a later book, The Extended Phenotype , that Gould confused particulate genetics with particulate embryology, stating that genes do "blend", as far as their effects on developing phenotypes are concerned, but that they do not blend as they replicate and recombine down the generations. [ 11 ]
Since Gould's death in 2002, Niles Eldredge has continued with counter-arguments to gene-centered natural selection. [ 38 ] Eldredge notes that in Dawkins' book A Devil's Chaplain , which was published just before Eldredge's book, "Richard Dawkins comments on what he sees as the main difference between his position and that of the late Stephen Jay Gould. He concludes that it is his own vision that genes play a causal role in evolution," while Gould (and Eldredge) "sees genes as passive recorders of what worked better than what". [ 39 ] | https://en.wikipedia.org/wiki/Gene-centered_view_of_evolution |
Gene-environment interplay describes how genes and environments work together to produce a phenotype , or observable trait. Many human traits are influenced by gene-environment interplay. It is a key component in understanding how genes and the environment come together to impact human development . Examples of gene-environment interplay include gene-environment interaction and gene-environment correlation . [ 1 ] Another type of gene-environment interplay is epigenetics , which is the study of how environmental factors can affect gene expression without altering DNA sequences. [ 2 ]
To study the effect of the environment on the expression of the human genome, family-based behavioral genetic research methods such as twin , family and adoption studies are used. [ 1 ] Moreover, the identification of genes under environmental influence can be completed through genome-wide association studies. [ 3 ] Research on cases of gene-environment interplay allow for a deeper understanding of the nuances surrounding nature versus nurture debates. Environmental factors can cause deviations from expected gene expression, which ultimately impact cellular processes, such as cell signaling . They can also affect the likelihood of disease . By identifying environmental effects on cellular processes, scientists can gain a better understanding of the mechanisms behind diseases and gain insights into treating them. [ 4 ]
Gene–environment interaction occurs when genetic factors and environmental factors interact to produce an outcome that cannot be explained by either factor alone. [ 6 ] For example, a study found that individuals carrying the genetic variant 5-HTT (the short copy) that encodes the serotonin transporter were at a higher risk of developing depression when exposed to adverse childhood experiences, whereas those with other genotypes (long copy) were less affected by childhood maltreatment . However, there is a caveat as these stressful events may also be caused by an individual's predisposition for getting into these situations. [ 7 ]
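Statistically, such an interaction is often modelled by adding a gene-by-environment product term to a regression; the outcome "cannot be explained by either factor alone" exactly when that term is non-zero. The sketch below is a minimal illustration with simulated data (not an analysis from the cited study): it fits y = b0 + b1·G + b2·E + b3·G×E by ordinary least squares with NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data (illustrative only): G = number of short 5-HTT alleles (0-2),
# E = count of adverse childhood events, y = depression symptom score.
G = rng.integers(0, 3, size=n)
E = rng.poisson(1.5, size=n)
y = 1.0 + 0.2 * G + 0.5 * E + 0.4 * G * E + rng.normal(0, 1, size=n)

# Design matrix with an explicit gene-by-environment interaction column.
X = np.column_stack([np.ones(n), G, E, G * E])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

b0, b1, b2, b3 = coef
print(f"intercept={b0:.2f}  G={b1:.2f}  E={b2:.2f}  GxE={b3:.2f}")
# A GxE coefficient near 0.4 recovers the simulated interaction: the effect of
# adversity on the outcome grows with the number of risk alleles carried.
```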
Gene–environment correlations describe how different environmental exposures are statistically linked to genes. [ 8 ] These correlations can emerge through multiple different mechanisms, both causal and non-causal. [ 9 ] In regard to causal mechanisms , there are three common types of gene-environment correlations: [ 9 ]
The childhood environment of an individual may be correlated with their inherited genes, since an individual's parents may have selected for their childhood environment. [ 5 ] This type of correlation is considered "passive" since the child's environment is being determined by parental decisions rather than by the child's own decisions. For example, parents who have high openness-to-experience , which is a moderately heritable personality trait, are more likely to provide their children with musical training . [ 10 ] Consequently, a correlation has also been documented between children with more openness-to-experience and their likelihood of receiving musical training as young children.
This type of gene-environment correlation can emerge when an individual's genetics causes others to alter their environment. [ 5 ] For instance, one study on children in middle childhood found that a child's innate desire for autonomy partially determined the degree of maternal control evoked. [ 11 ]
This occurs when individuals seek out environments that are compatible with their genetic predispositions . [ 5 ] For example, a person with a genetic predisposition for athleticism may be more inclined to choose sports-related activities and environments. [ citation needed ]
Epigenetics focuses on cellular changes in gene expression that do not involve changes in genetic code . [ 12 ] Epigenetic changes can be a result of cellular mechanisms or environmental factors. One instance of an environment impacting gene expression is DNA methylation as a result of smoking during pregnancy . [ 13 ] Another environmental exposure that can trigger epigenetic changes is heavy metals like arsenic . This occurs through the disturbance of histone acetylation and DNA methylation, which is correlated with increased rates of cancer , autoimmune diseases , and neurological disorders . [ 14 ]
Epigenetic modifications can affect gene activity independently of DNA sequence modifications. [ 15 ] Air pollution exposure has been associated with decreased levels of DNA methylation, a process crucial for gene regulation. The effects of air pollution can be seen in the prenatal environment, where methylation changed in response to the presence of NO 2 and NO x , two forms of air pollution, and exposure was associated with a decline in intrauterine growth . While the mechanism is not fully understood, it could involve the formation of reactive oxygen species , leading to oxidative stress and cellular signaling cascades, or increased fetal cortisol levels. [ 16 ] A consequence of altered DNA methylation is hydroxymethylation , which replaces the methyl group with a hydroxyl group . Hydroxymethylation could potentially disrupt gene expression patterns and contribute to disease development, such as lung cancer . [ 17 ] Additionally, exposure to pollutants can exacerbate inflammatory conditions like asthma by inducing inflammation in the airways, leading to increased cytokine expression and immune cell recruitment. [ 16 ] Certain pollutants, such as endocrine-disrupting chemicals (EDCs) , interfere with hormone signaling pathways and with gene expression related to hormone regulation. One such EDC, bisphenol A , has been linked to changes in gene expression in reproductive tissues and developmental pathways. [ 18 ]
Nutrition plays a crucial role in shaping gene expression, which can ultimately impact an individual's phenotype. Fetal malnutrition , for example, has been associated with decreased levels of DNA methylation, particularly on genes like IGF2 , which is involved in insulin metabolism. [ 19 ] The alteration in DNA methylation patterns can elevate the risk of developing metabolic disorders and type II diabetes mellitus. [ 20 ] Furthermore, prenatal malnutrition can lead to differential DNA methylation of genes related to growth, development, and metabolism. These epigenetic changes increase the likelihood of adverse phenotypes such as obesity and high cholesterol later in life. [ 21 ] Malnutrition can also significantly impact gene expression in the small intestine , leading to alterations in nutrient transporters, digestive enzymes , barrier function, immune responses, and metabolic adaptation. [ 22 ] Socioeconomic factors such as poverty and minority status may exacerbate the effects of malnutrition. Research indicates that individuals who reside in impoverished communities or who belong to marginalized racial and ethnic groups may encounter limited access to nutritious food options. [ 23 ]
Physical activity induces epigenetic modifications of specific genes, altering their expression profiles. For example, exercise has been linked to increased methylation of the ASC gene, whose methylation typically decreases with age. Methylation can compact a gene, decreasing the amount of protein produced from it; because the ASC gene stimulates cytokine production, the expression of inflammatory cytokines decreases. This suppression can help prevent the development of chronic inflammation and associated age-related diseases caused by excess inflammatory cytokines. [ 24 ] However, these epigenetic modifications depend on the intensity and type of exercise and are reversible with the cessation of physical activity. [ 25 ] Research shows that exercising for more than six months can affect telomere length; elongation at the ends of chromosomes helps to maintain chromosomal stability and induces epigenetic modifications of specific genes. [ 26 ]
The maternal environment can have epigenetic effects on the developing fetus. For instance, alcohol consumed during pregnancy can cross from maternal blood to the placenta and into the fetal environment of the amniotic cavity , where it can induce epigenetic modifications on fetal DNA. [ 27 ] Mouse embryo cultures show that alcohol exposure during fetal development can contribute to changes in DNA methylation of genes involved in development, metabolism, and organization of DNA during brain development. [ 28 ] These alcohol-induced changes in DNA methylation during pregnancy contribute to the distinct set of traits seen in Fetal Alcohol Spectrum Disorder (FASD) . [ 28 ] Other instances of prenatal environment impact on fetal epigenetic state include maternal folic acid , stress , and tobacco smoking during pregnancy. [ 29 ] [ 30 ] [ 31 ]
Early life stress encompasses parental absence, abuse, and lack of bonding. These stressors during early childhood are associated with epigenetic modifications of the Hypothalamic-Pituitary-Adrenal (HPA) axis , which mediates the stress response. Using a rat model of maternal care, research has shown that reduced care between mother and offspring is associated with down regulation of glucocorticoid receptors (GR) in the hypothalamus . [ 32 ] GRs play a critical role in the HPA axis by aiding in the restoration of normal physiological state after stress exposure. Down regulation of GRs expression occurs through histone modifications and DNA methylation of the GR gene, resulting in dysregulation of the stress response, including prolonged inflammation and cellular damage. [ 33 ] Additionally, numerous studies have linked early life stress with later-life psychiatric disorders , including anxiety and depression , through epigenetic modulation of genes involved in the HPA axis. [ 34 ] Socioeconomic disparities, discrimination, and cultural factors prevalent within minority communities can contribute to heightened levels of stress and adversity, impacting gene expression and health outcomes. [ 35 ]
Adoption and twin studies are used to investigate the complex interplay between genes and the environment. These studies typically involve the comparison of identical (monozygotic) and fraternal (dizygotic) twins to determine the extent to which genetic factors and environmental influences contribute to variations in traits or behaviors. These studies have contributed to studies of behavior, personality, and psychiatric illnesses. [ 36 ] For example, a Finnish adoption study on schizophrenia revealed that a healthy environment can mitigate the effects of genetics in adopted individuals born to schizophrenic mothers. [ 37 ] Criminal and antisocial behavior have also been found to be influenced by both genetic and environmental factors through these types of studies. [ 38 ] [ 39 ]
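One classical way such twin studies partition trait variance is Falconer's formula, which compares trait correlations in identical (MZ) and fraternal (DZ) twin pairs. The sketch below is illustrative only (the correlations are invented) and uses that classical approximation rather than any specific study's model.

```python
def falconer_decomposition(r_mz: float, r_dz: float) -> dict:
    """Classical Falconer approximation from twin-pair correlations.

    r_mz -- trait correlation among monozygotic (identical) twin pairs
    r_dz -- trait correlation among dizygotic (fraternal) twin pairs
    """
    h2 = 2 * (r_mz - r_dz)      # heritability estimate (Falconer's approximation)
    c2 = 2 * r_dz - r_mz        # shared (common) environment
    e2 = 1 - r_mz               # non-shared environment + measurement error
    return {
        "heritability": round(h2, 3),
        "shared_env": round(c2, 3),
        "unique_env": round(e2, 3),
    }

# Invented example correlations for a behavioural trait.
print(falconer_decomposition(r_mz=0.70, r_dz=0.45))
# {'heritability': 0.5, 'shared_env': 0.2, 'unique_env': 0.3}
```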
Animal models provide a controlled and manipulable environment in which researchers can investigate the complex interactions between genes and environmental factors, shedding light on various biological and behavioral outcomes. For example, one study has demonstrated the utility of mouse models in understanding gene-environment interactions in schizophrenia due to the genetic similarities. [ 40 ]
Research on moths and butterflies has shown that environmental factors like bright sunlight influence their color vision. In environments with more light, they develop more of the different opsins that allow them to detect light and discern colors. Butterflies depend on color vision to find the correct flowers for their diet and their preferred habitat . [ 41 ]
Gene-environment interplay has been found to play a part in the majority of diseases. For instance, gene-environment interactions have a prevalent role in mental health disorders; specifically, evidence has found a link to alcohol dependence , [ 39 ] schizophrenia , [ 42 ] and psychosis . [ 43 ] The link to alcohol dependence is potentially influenced by a dopamine receptor gene ( DRD2 ), as individuals with the Taq I allele may show interactions between this allele and alcohol dependence. [ 39 ] This interaction is more prevalent when the individual is experiencing higher stress levels. The impact on psychosis originates from a single nucleotide polymorphism (SNP) in the AKT1 gene, which makes carriers who regularly use cannabis more susceptible to developing psychosis. Additionally, individuals who are homozygous for this particular AKT1 mutation and use cannabis daily are at an increased risk of developing psychotic disorders. [ 43 ] For schizophrenia, genome-wide by environment interaction studies (GWEIS) and genome-wide association studies (GWAS) are used to identify the loci at which environmental factors interact with genotype (GxE). [ 43 ] Evidence also supports a connection between gene-environment interplay and cardiovascular and metabolic conditions , [ 4 ] including roles in obesity , [ 3 ] pulmonary disease , [ 44 ] and diabetes . [ 45 ] The rise in the incidence of type II diabetes is suggested to be linked to interactions between diet and the FTO and KCNQ1 genes. Mutations within the KCNQ1 gene affect a pathway that decreases insulin secretion through a decline in pancreatic β cells , and in mice fed a high-fat diet this dysfunction of the pancreatic β cells was enhanced. [ 45 ] | https://en.wikipedia.org/wiki/Gene-environment_interplay
In the field of genomics , GeneCalling is an open-platform mRNA transcriptional profiling technique. [ 1 ] The GeneCalling protocol measures levels of cDNA , which are correlated with gene expression levels of specific transcripts . Differences between gene expression in healthy tissues and disease or drug responsive tissues are examined and compared in this technology. [ 2 ] The technique has been applied to the study of human tissues [ 3 ] and plant tissues. [ 4 ]
In the GeneCalling protocol, mRNAs are first isolated from a given sample and processed into fragments for analysis. This usually involves the synthesis and subdivision of double-stranded cDNAs from polyA RNA . Distinct sets of restriction enzymes can then be used to digest sets of the divided cDNAs and resulting fragments ligated to labelled adapters to be amplified by PCR . PCR products are then purified and subjected to gel electrophoresis on a mounted platform employing stationary laser excitation and a multi-colour charge-coupled device imaging system. [ 5 ] A fluorescent label at the 5' end of one of the PCR primers allows for visualization of the PCR fragments, and the cDNAs are subjected to several isolated and identical restriction digests to generate a merged profile based on peak height and variance. [ 6 ] The merged digestion profiles from the cDNA preparations are then compared to locate differentially expressed fragments (such as between normal tissue and diseased or drug responsive tissue); these profiles are compared by means of various internet-ready databases such as GeneScape. [ 7 ] | https://en.wikipedia.org/wiki/GeneCalling |
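The final comparison step is essentially numerical: merged fragment profiles (peak intensity indexed by fragment length) from two samples are aligned and screened for fragments whose intensity differs. The sketch below is a schematic illustration of that idea only; the profile values and fold-change threshold are invented, and it does not reproduce the GeneScape software or any published GeneCalling code.

```python
# Schematic comparison of merged GeneCalling-style fragment profiles.
# Each profile maps fragment length (bp) -> normalized peak intensity.
# All values and the fold-change cutoff are hypothetical.

normal_tissue  = {112: 1.0, 187: 0.4, 231: 2.1, 305: 0.9}
disease_tissue = {112: 1.1, 187: 1.6, 231: 0.5, 305: 0.8}

FOLD_CHANGE_CUTOFF = 2.0

def differential_fragments(profile_a, profile_b, cutoff=FOLD_CHANGE_CUTOFF):
    """Yield fragment lengths whose intensity differs by at least `cutoff`-fold."""
    for length in sorted(set(profile_a) & set(profile_b)):
        a, b = profile_a[length], profile_b[length]
        ratio = max(a, b) / min(a, b)
        if ratio >= cutoff:
            yield length, a, b, ratio

for length, a, b, ratio in differential_fragments(normal_tissue, disease_tissue):
    print(f"{length} bp fragment: {a:.1f} vs {b:.1f} ({ratio:.1f}-fold difference)")
```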
GeneNetwork is a combined database and open-source bioinformatics data analysis software resource for systems genetics . [ 1 ] This resource is used to study gene regulatory networks that link DNA sequence differences to corresponding differences in gene and protein expression and to variation in traits such as health and disease risk. Data sets in GeneNetwork are typically made up of large collections of genotypes (e.g., SNPs ) and phenotypes from groups of individuals, including humans, strains of mice and rats, and organisms as diverse as Drosophila melanogaster , Arabidopsis thaliana , and barley . [ 2 ] The inclusion of genotypes makes it practical to carry out web-based gene mapping to discover those regions of genomes that contribute to differences among individuals in mRNA, protein, and metabolite levels, as well as differences in cell function, anatomy, physiology, and behavior.
Development of GeneNetwork started at the University of Tennessee Health Science Center in 1994 as a web-based version of the Portable Dictionary of the Mouse Genome (1994) . [ 3 ] GeneNetwork is both the first and the longest continuously operating web service in biomedical research [see https://en.wikipedia.org/wiki/List_of_websites_founded_before_1995 ]. In 1999 the Portable Gene Dictionary was combined with Kenneth F. Manly's Map Manager QT mapping program to produce an online system for real-time genetic analysis. [ 4 ] In early 2003, the first large Affymetrix gene expression data sets (whole mouse brain mRNA and hematopoietic stem cells) were incorporated and the system was renamed WebQTL. [ 5 ] [ 6 ] GeneNetwork is now developed by an international group of developers and has mirror and development sites in Europe, Asia, and Australia. Production services are hosted on systems at University of Tennessee Health Science Center with a backup instance in Europe.
The current production version of GeneNetwork (also known as GN2) was released in 2016. [ 7 ] The current version of GeneNetwork uses the same database as its predecessor, GN1, but has much more modular and maintainable open source code (available on GitHub ). GeneNetwork now also has significant new features including support for:
GeneNetwork consists of two major components:
Four levels of data are usually obtained for each family or population:
The combined data types are housed together in a relational database and IPFS fileserver, and are conceptually organized and grouped by species, cohort, and family. The system is implemented as a LAMP (software bundle) stack. Code and a simplified version of the MariaDB database are available on GitHub .
GeneNetwork is primarily used by researchers, but has also been adopted successfully for undergraduate and graduate courses in genetics, bioinformatics, physiology, and psychology (see YouTube example ). [ 11 ] Researchers and students typically retrieve sets of genotypes and phenotypes from one or more families and use built-in statistical and mapping functions to explore relations among variables and to assemble networks of associations. Key steps include the analysis of these factors:
Traits and molecular expression data sets are submitted by researchers directly or are extracted from repositories such as National Center for Biotechnology Information Gene Expression Omnibus. Data cover a variety of cells and tissues—from single cell populations of the immune system, specific tissues (retina, prefrontal cortex), to entire systems (whole brain, lung, muscle, heart, fat, kidney, flower, whole plant embryos). A typical data set covers hundreds of fully genotyped individuals and may also include technical and biological replicates. Genotypes and phenotypes are usually taken from peer-reviewed papers. GeneNetwork includes annotation files for several RNA profiling platforms (Affymetrix, Illumina, and Agilent). RNA-seq and quantitative proteomic, metabolomic, epigenetics, and metagenomic data are also available for several species, including mouse and human.
There are tools on the site for a wide range of functions that range from simple graphical displays of variation in gene expression or other phenotypes, scatter plots of pairs of traits (Pearson or rank order), construction of both simple and complex network graphs, analysis of principal components and synthetic traits, QTL mapping using marker regression, interval mapping, and pair scans for epistatic interactions. Most functions work with up to 100 traits and several functions work with an entire transcriptome .
The database can be browsed and searched at the main search page. An on-line tutorial is available. Users can also download the primary data sets as text files, Excel, or in the case of network graphs, as SBML . As of 2017, GN2 is available as a beta release.
GeneNetwork is an open source project released under the Affero General Public License (AGPLv3). The majority of code is written in Python, but includes modules and other code written in C, R, and JavaScript. The code is mainly Python 2.4. GN2 is mainly written in Python 2.7 in a Flask framework (with Jinja 2 HTML templates), but with conversion to Python 3.X planned over the next few years. GN2 calls many statistical procedures written in the R programming language . The original source code from 2010 along with a compact database are available on SourceForge . While GN1 was actively maintained on GitHub through 2019, as of 2020 all work is focused on GN2 .
Other systems genetics and network databases | https://en.wikipedia.org/wiki/GeneNetwork |
GenePattern is a freely available computational biology open-source software package originally created and developed at the Broad Institute for the analysis of genomic data. Designed to enable researchers to develop, capture, and reproduce genomic analysis methodologies, GenePattern was first released in 2004. GenePattern is currently developed at the University of California, San Diego .
GenePattern is a scientific workflow system that provides access to hundreds of genomic analysis tools. These tools can be used as building blocks to design sophisticated analysis pipelines that capture the methods, parameters, and data used to produce analysis results. Pipelines can be used to create, edit, and share reproducible in silico results.
GenePattern is available:
Related software: | https://en.wikipedia.org/wiki/GenePattern |
A GeneRIF or Gene Reference Into Function is a short (255 characters or fewer) statement about the function of a gene. GeneRIFs provide a simple mechanism for allowing scientists to add to the functional annotation of genes described in the Entrez Gene database. In practice, function is construed quite broadly. For example, there are GeneRIFs that discuss the role of a gene in a disease, GeneRIFs that point the viewer towards a review article about the gene, and GeneRIFs that discuss the structure of a gene. However, the stated intent is for GeneRIFs to be about gene function. Currently over half a million GeneRIFs have been created for genes from almost 1000 different species. [ 1 ]
GeneRIFs are always associated with specific entries in the Entrez Gene database. Each GeneRIF has a pointer to the PubMed ID (a type of document identifier) of a scientific publication that provides evidence for the statement made by the GeneRIF. GeneRIFs are often extracted directly from the document that is identified by the PubMed ID, very frequently from its title or from its final sentence.
GeneRIFs are usually produced by NCBI indexers, but anyone may submit a GeneRIF.
To be processed, a valid Gene ID must exist for the specific gene, or the Gene staff must have assigned an overall Gene ID to the species . The latter case is implemented via records in Gene with the symbol NEWENTRY. Once the Gene ID is identified, only three types of information are required to complete a submission :
Here are some GeneRIFs taken from Entrez Gene for GeneID 7157, the human gene TP53 .
The PubMed document identifiers have been omitted from the examples. Note the wide variability with respect to the presence or absence of punctuation and of sentence-initial capital letters.
GeneRIFs are an unusual type of textual genre, and they have recently been the subject of a number of articles from the natural language processing community. | https://en.wikipedia.org/wiki/GeneRIF |
GeneRec is a generalization of the recirculation algorithm , and approximates Almeida-Pineda recurrent backpropagation . [ 1 ] [ 2 ] It is used as part of the Leabra algorithm for error-driven learning . [ 3 ]
The symmetric, midpoint version of GeneRec is equivalent to the contrastive Hebbian learning algorithm (CHL). [ 1 ]
This neuroscience article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GeneRec |
GeneTalk is a web-based platform, tool, and database for filtering, reduction and prioritization of human sequence variants from next-generation sequencing (NGS) data. [ 1 ] [ 2 ] GeneTalk allows users to edit annotations about sequence variants and to build up a crowd-sourced database with clinically relevant information for diagnostics of genetic disorders . GeneTalk allows searching for information about specific sequence variants and connects to experts on variants that are potentially disease-relevant.
Users can upload NGS data in Variant Call Format (VCF) onto the GeneTalk server into their accounts. All entries of the file are preprocessed and shown in the integrated VCF viewer. Filtering tools are set by the user to reduce the number of clinically non-relevant variants. After filtering and prioritization users can interpret relevant variants by retrieving information (annotations) about variants from the GeneTalk database. The communication platform allows users to contact experts about specific variants, genes, or genetic disorders, to exchange knowledge and expertise.
Steps required to analyze VCF files
The following filtering options may be used to reduce the non-relevant sequence variants in VCF files.
Users can share VCF files with colleagues and coworkers. The integrated mailing system allows users to contact experts easily. Users can create annotations and comments and rate annotations regarding medical relevance and scientific evidence, which is helpful for the community of users for diagnosis of genetic disorders. Registered users provide information about their field of knowledge in their profile and can be contacted by other users. | https://en.wikipedia.org/wiki/GeneTalk
The GeneXpert Infinity is an automated cartridge-based nucleic acid amplification test (NAAT) which is able to tell whether the subject fluid contains fragments of the SARS-CoV-2 virus's genetic material, [ 1 ] amongst others. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] It is manufactured by Cepheid Inc.
This article about the COVID-19 pandemic is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/GeneXpert_Infinity |
Gene Designer is a computer software package for bioinformatics . [ 1 ] [ 2 ] It is used by molecular biologists from academia, government, and the pharmaceutical, chemical, agricultural, and biotechnology industries to design, [ 3 ] clone, and validate genetic sequences. It is proprietary software , released as freeware needing registration.
Gene Designer enables molecular biologists to manage the full gene design process in one application, using a range of design tools.
This free software has been incorporated into classroom and lab curricula for synthetic biology, systems biology, bioengineering, and bioinformatics. Students create and complete projects which manage the full gene design process in one application, using a range of design tools.
Examples of use in curricula: | https://en.wikipedia.org/wiki/Gene_Designer |
Gene Ontology ( GO ) term enrichment is a technique for interpreting sets of genes making use of the Gene Ontology system of classification, in which genes are assigned to a set of predefined bins depending on their functional characteristics. For example, the gene FasR is categorized as being a receptor , involved in apoptosis and located on the plasma membrane .
Researchers performing high-throughput experiments that yield sets of genes (for example, genes that are differentially expressed under different conditions) often want to retrieve a functional profile of that gene set, in order to better understand the underlying biological processes . This can be done by comparing the input gene set with each of the bins (terms) in the GO – a statistical test can be performed for each bin to see if it is enriched for the input genes.
The output of the analysis is typically a ranked list of GO terms, each associated with a p-value . [ 1 ]
The Gene Ontology (GO) provides a system for hierarchically classifying genes or gene products into terms organized in a graph structure (or an ontology ). The terms are grouped into three categories: molecular function (describing the molecular activity of a gene), biological process (describing the larger cellular or physiological role carried out by the gene, coordinated with other genes), and cellular component (describing the location in the cell where the gene product executes its function). Each gene can be described (annotated) with multiple terms. The GO is actively used to classify genes from humans, model organisms and a variety of other species.
Using the GO, it is possible to retrieve the set of terms used to describe any gene, or conversely, given a term, return the set of genes annotated to that term. For the latter query, the hierarchical system of the GO is employed to give complete results. For example, a query for genes annotated to the GO term for nucleus should also return genes annotated to the more specific term "nuclear membrane".
Certain types of high-throughput experiments (e.g., RNA seq ) return sets of genes that are over- or under-expressed. GO can be used to functionally profile this set of genes and to determine which GO terms appear more frequently than would be expected by chance when examining the set of terms annotated to the input genes. For example, an experiment may compare gene expression in healthy cells versus cancerous cells. Functional profiling can be used to elucidate the underlying cellular mechanisms associated with the cancerous condition. This is also called term enrichment or term overrepresentation, as we are testing whether a GO term is statistically enriched for the given set of genes.
There are a variety of methods for performing a term enrichment using GO. Methods may vary according to the type of statistical test applied, the most common being a Fisher's exact test / hypergeometric test . Some methods make use of Bayesian statistics. [ 2 ] There is also variability in the type of correction applied for Multiple comparisons , the most common being the Bonferroni correction .
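To make the statistical test described above concrete, the following is a minimal Python sketch (not drawn from any particular enrichment tool; all counts are hypothetical) of scoring the over-representation of a single GO term with a hypergeometric test:

```python
from scipy.stats import hypergeom

# Hypothetical counts for one GO term ("bin"):
population = 20000   # total annotated genes in the background (the "universe")
annotated = 150      # background genes annotated to the term of interest
selected = 300       # genes in the input set (e.g. differentially expressed genes)
overlap = 12         # input genes that are annotated to the term

# One-sided over-representation p-value: probability of observing at least
# `overlap` annotated genes in the input set by chance (hypergeometric test,
# equivalent to a one-sided Fisher's exact test).
p_value = hypergeom.sf(overlap - 1, population, annotated, selected)
print(f"enrichment p-value: {p_value:.3e}")
```

In a full analysis this test would be repeated for every GO term and the resulting p-values corrected for multiple comparisons, as noted above.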
Methods also vary in their input – some take unranked gene sets, others take ranked gene sets, with more sophisticated methods allowing each gene to be associated with a magnitude (e.g., expression level), avoiding arbitrary cutoffs. | https://en.wikipedia.org/wiki/Gene_Ontology_Term_Enrichment |
Gene amplification refers to a number of natural and artificial processes by which the number of copies of a gene is increased "without a proportional increase in other genes". [ 1 ]
In research or diagnosis DNA amplification can be conducted through methods such as:
DNA replication is a natural form of copying DNA with the amount of genes remaining constant. However, the amount of DNA or the number of genes can also increase within an organism through gene duplication , a major mechanism through which new genetic material is generated during molecular evolution . Common sources of gene duplications include ectopic recombination , retrotransposition event, aneuploidy , polyploidy , and replication slippage . [ 4 ]
A piece of DNA or RNA that is the source and/or product of either natural or artificial amplification or replication events is called an amplicon . [ 5 ] | https://en.wikipedia.org/wiki/Gene_amplification |
Gene amplification in Paramecium tetraurelia is an example of gene amplification that has occurred in the unicellular organism Paramecium tetraurelia .
Gene duplication occurs in a large number of organisms as part of evolution or as the cause or result of disease (as in the case of the amylase genes in humans, and genes in cancer cells respectively). Gene duplication often leads to amplification of the corresponding gene products due to transcription and translation of all gene duplicates. [ 1 ] Evidence of gene duplication has been observed in the inheritance patterns of Paramecium tetraurelia , a common model organism. [ 2 ] In one strain of P. tetraurelia , d4-95, a recessive mutant allele of a gene known as pawn-B found in this strain is inherited through gene duplication and amplification between generations, and even self-fertilizations. [ 3 ] The inheritance of this allele is the first description of gene duplication and amplification in the micronucleus of ciliates . Additionally, it appears that the duplication of the mutant allele occurred after mutagenesis due to the similarity in nucleotide sequences of different copies of the mutant allele, especially in the coding region. When the d4-95 strain was crossed with a wild-type P. tetraurelia , F2 and later progeny often expressed the phenotype of the pawn-B mutant, despite carrying a wild-type gene at the pawn-B locus. This phenotype was maintained in progeny even after the self-fertilization of theoretical wild-type homozygotes that had been recovered from the cross.
As is the case with other Paramecium , P. tetraurelia exhibits a number of non-Mendelian modes of inheritance, partially due to the existence of both macro- and micronuclei. [ 2 ] Both the macro- and micronucleus of the d4-95 strain of P. tetraurelia contained many more copies of the mutant gene than those of the wild type strain. [ 3 ] This occurs due to the ability of most of the extra pawn-B gene copies to be heritable independently from the original pawn-B locus. Additionally, there is evidence that in the development of the macronucleus between generations, there is differential gene duplication of copies of pawn-B which causes variable amplification of the allele – between four and twelve times – and heterogeneity between the copies. This duplication leads to amplification of the gene that suppresses the expression of any non-mutant pawn-B loci. Duplication also occurs in the micronucleus, where considerably higher numbers of copies of the pawn-B mutant have been found than the number of copies of the wild-type non-mutant. Additionally, the number of copies can be decreased in progeny by “diluting” the copies of the mutant allele through backcrossing with the wild-type parent, over multiple generations. As the number of copies of the pawn-B mutant decreases, the progeny eventually return to Mendelian inheritance of the wild-type pawn-B alleles.
While the exact structure of the amplified copies of the allele in the micronucleus is not currently known, it appears to be consistent with the original micronuclear locus, as opposed to the locus of the macronucleus, which includes regions not found in the micronuclear locus. [ 3 ] Additionally, in some of the backcrosses , extra copies of the gene were still present, even after multiple generations. It is not clear whether this is due to tight linkage of the gene copies to the original locus, or because of continued gene duplication of the remaining copies, but it appears that it is more likely due to continued duplication, as the number of copies of the mutant pawn-B vary between the independent clones; it is also possible that there is an extrachromosomal element that plays a role, which has been observed in other protozoa . [ 4 ] | https://en.wikipedia.org/wiki/Gene_amplification_in_Paramecium_tetraurelia |
In biology, a gene cassette is a type of mobile genetic element that contains a gene and a recombination site. Each cassette usually contains a single gene and tends to be very small; on the order of 500–1,000 base pairs . They may exist incorporated into an integron or freely as circular DNA. [ 1 ] Gene cassettes can move around within an organism's genome or be transferred to another organism in the environment via horizontal gene transfer . These cassettes often carry antibiotic resistance genes . An example would be the kanMX cassette which confers kanamycin (an antibiotic ) resistance upon bacteria .
Integrons are genetic structures in bacteria which express and are capable of acquiring and exchanging gene cassettes. The integron consists of a promoter , an attachment site, and an integrase gene that encodes a site-specific recombinase. [ 2 ] There are three classes of integrons described. [ 1 ] The mobile units that insert into integrons are gene cassettes. [ 2 ] For cassettes that carry a single gene without a promoter, the entire series of cassettes is transcribed from an adjacent promoter within the integron. [ 3 ] The gene cassettes are speculated to be inserted and excised via a circular intermediate. [ 4 ] This would involve recombination between short sequences found at their termini and known as 59 base elements (59-be)—which may not be 59 bases long. The 59-be are a diverse family of sequences that function as recognition sites for the site-specific integrase (enzyme responsible for integrating the gene cassette into an integron) that occur downstream from the gene coding sequence. [ 5 ]
The ability of genetic elements like gene cassettes to excise and insert into genomes results in highly similar gene regions appearing in distantly related organisms. The three classes of integrons are similar in structure and are identified by where the insertions occur and what systems they coincide with. Class 1 integrons are seen in a diverse group of bacterial genomes and likely are all descendant from one common ancestor. The prevalence of the integron has shaped bacterial evolution by allowing rapid transfer of genes that are novel to an organism, such as antibiotic resistance genes. [ 6 ]
In genetic engineering , a gene cassette is a manipulable fragment of DNA carrying, and capable of expressing, one or more genes of interest between one or more sets of restriction sites . It can be transferred from one DNA sequence (usually on a vector ) to another by 'cutting' the fragment out using restriction enzymes and 'pasting' it back into the new context. The vectors containing the gene of interest typically also carry an antibiotic resistance gene called a selectable marker to easily identify cells that have successfully integrated the vector into their genome.
To introduce a vector into a target cell, a state of competence must be conferred on the cell. This state is induced in the lab by incubating cells with calcium chloride before a brief heat shock, or by electroporation . This makes the cells more susceptible to the plasmid that is being inserted. Once the plasmid has been added, the cells are grown in the presence of an antibiotic to confirm the uptake and expression of the new genetic elements.
The usage of CRISPR/Cas9 systems has shown success in inserting genes into eukaryotic genomes. [ 7 ] While CRISPR modification is still in its infancy, there is significant evidence for usage in combination with other techniques to produce high throughput (HTP) genome editing systems. [ 8 ] Genetic engineering of bacteria for production of a variety of industrial products, including biofuels and specialty chemicals/nutraceuticals is a major area of research. [ 9 ]
Horizontal gene transfer (HGT) is the transfer of genetic elements between cells by means other than parental inheritance. HGT is responsible for much of the spread of antibiotic resistance among bacteria. [ 10 ] Gene cassettes containing antibiotic resistance genes, or other virulence factors such as exotoxins, can be transferred from cell to cell via phage ( transduction ), taken up from the environment ( transformation ), [ 11 ] or by bacterial conjugation. [ 12 ] The ability to transfer gene cassettes between organisms has played a large role in the evolution of prokaryotes. Many commensal organisms, such as E. coli , regularly harbor one or more gene cassettes that convey antibiotic resistance. [ 13 ] Horizontal transfer of genetic elements from non-pathogenic commensals to unrelated species results in highly virulent pathogens that can carry multiple antibiotic resistance genes. The increasing prevalence of resistance creates challenging questions for researchers and physicians. | https://en.wikipedia.org/wiki/Gene_cassette
Microarray analysis techniques are used in interpreting the data generated from experiments on DNA ( Gene chip analysis ), RNA, and protein microarrays , which allow researchers to investigate the expression state of a large number of genes – in many cases, an organism's entire genome – in a single experiment. [ 1 ] Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult – if not impossible – to analyze without the help of computer programs.
Microarray data analysis is the final step in reading and processing data produced by a microarray chip. Samples undergo various processes including purification and scanning using the microchip, which then produces a large amount of data that requires processing via computer software. It involves several distinct steps. Changing any one of the steps will change the outcome of the analysis, so the MAQC Project [ 2 ] was created to identify a set of standard strategies. Companies exist that use the MAQC protocols to perform a complete analysis. [ 3 ]
Most microarray manufacturers, such as Affymetrix and Agilent , [ 4 ] provide commercial data analysis software alongside their microarray products. There are also open source options that utilize a variety of methods for analyzing microarray data.
Comparing two different arrays or two different samples hybridized to the same array generally involves making adjustments for systematic errors introduced by differences in procedures and dye intensity effects. Dye normalization for two color arrays is often achieved by local regression . LIMMA provides a set of tools for background correction and scaling, as well as an option to average on-slide duplicate spots. [ 5 ] A common method for evaluating how well normalized an array is, is to plot an MA plot of the data. MA plots can be produced using programs and languages such as R and MATLAB. [ 6 ] [ 7 ]
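As an illustration of the MA-plot diagnostic mentioned above, the short Python sketch below computes M (log-ratio) and A (average log-intensity) values from hypothetical two-channel intensities; the median-centring step is a deliberately simplified stand-in for the local regression (loess) normalization described earlier:

```python
import numpy as np

# Hypothetical two-channel intensities for the same spots (e.g. Cy5 "R" and Cy3 "G").
rng = np.random.default_rng(0)
R = rng.lognormal(mean=8, sigma=1, size=1000)
G = rng.lognormal(mean=8, sigma=1, size=1000)

# M is the log-ratio, A the average log-intensity; a well-normalized array
# shows an M-vs-A cloud centred on M = 0 with no trend against intensity.
M = np.log2(R) - np.log2(G)
A = 0.5 * (np.log2(R) + np.log2(G))

# A crude normalization step: subtract the global median of M; local regression
# of M on A (loess), as mentioned above, is the more common choice.
M_normalized = M - np.median(M)
```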
Raw Affy data contains about twenty probes for the same RNA target. Half of these are "mismatch spots", which do not precisely match the target sequence. These can theoretically measure the amount of nonspecific binding for a given target. Robust Multi-array Average (RMA) [ 8 ] is a normalization approach that does not take advantage of these mismatch spots but still must summarize the perfect matches through median polish . [ 9 ] The median polish algorithm, although robust, behaves differently depending on the number of samples analyzed. [ 10 ] Quantile normalization , also part of RMA, is one sensible approach to normalize a batch of arrays in order to make further comparisons meaningful.
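The following is a minimal sketch of the quantile normalization step mentioned above, written in Python with a hypothetical probe-by-array matrix; production implementations (e.g., those used in RMA pipelines) handle ties and missing values more carefully:

```python
import numpy as np

def quantile_normalize(X):
    """Force every column (array) of X to share the same empirical distribution.

    Minimal sketch of quantile normalization; X has one row per probe
    and one column per array, and ties are broken arbitrarily.
    """
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)    # rank of each value within its array
    mean_of_sorted = np.sort(X, axis=0).mean(axis=1)      # mean across arrays at each rank
    return mean_of_sorted[ranks]

# Hypothetical expression matrix: 4 probes x 3 arrays.
X = np.array([[5.0, 4.0, 3.0],
              [2.0, 1.0, 4.0],
              [3.0, 4.0, 6.0],
              [4.0, 2.0, 8.0]])
print(quantile_normalize(X))
```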
The current Affymetrix MAS5 algorithm, which uses both perfect match and mismatch probes, continues to enjoy popularity and do well in head to head tests. [ 11 ]
Factor analysis for Robust Microarray Summarization (FARMS) [ 12 ] is a model-based technique for summarizing array data at perfect match probe level. It is based on a factor analysis model for which a Bayesian maximum a posteriori method optimizes the model parameters under the assumption of Gaussian measurement noise. According to the Affycomp benchmark [ 13 ] FARMS outperformed all other summarizations methods with respect to sensitivity and specificity.
Many strategies exist to identify array probes that show an unusual level of over-expression or under-expression. The simplest one is to call "significant" any probe that differs by an average of at least twofold between treatment groups. More sophisticated approaches are often related to t-tests or other mechanisms that take both effect size and variability into account. Curiously, the p-values associated with particular genes do not reproduce well between replicate experiments, and lists generated by straight fold change perform much better. [ 14 ] [ 15 ] This represents an extremely important observation, since the point of performing experiments has to do with predicting general behavior. The MAQC group recommends using a fold change assessment plus a non-stringent p-value cutoff, further pointing out that changes in the background correction and scaling process have only a minimal impact on the rank order of fold change differences, but a substantial impact on p-values. [ 14 ]
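A hedged sketch of the "fold change plus non-stringent p-value cutoff" strategy recommended above is shown below; the expression values and the cutoffs (two-fold, p < 0.05) are illustrative assumptions only:

```python
import numpy as np
from scipy import stats

# Hypothetical log2 expression values for one gene in two groups of samples.
treated = np.array([8.1, 8.4, 7.9, 8.6])
control = np.array([7.0, 7.2, 6.8, 7.1])

log2_fold_change = treated.mean() - control.mean()   # difference of log2 means
t_stat, p_value = stats.ttest_ind(treated, control)  # two-sample t-test

# Combine an effect-size cutoff (at least two-fold) with a lenient p-value cutoff.
is_significant = (abs(log2_fold_change) >= 1.0) and (p_value < 0.05)
print(log2_fold_change, p_value, is_significant)
```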
Clustering is a data mining technique used to group genes having similar expression patterns. Hierarchical clustering , and k-means clustering are widely used techniques in microarray analysis.
Hierarchical clustering is a statistical method for finding relatively homogeneous clusters. Hierarchical clustering consists of two separate phases. Initially, a distance matrix containing all the pairwise distances between the genes is calculated. Pearson's correlation and Spearman's correlation are often used as dissimilarity estimates, but other methods, like Manhattan distance or Euclidean distance , can also be applied. Given the number of distance measures available and their influence on the clustering algorithm results, several studies have compared and evaluated different distance measures for the clustering of microarray data, considering their intrinsic properties and robustness to noise. [ 16 ] [ 17 ] [ 18 ] After calculation of the initial distance matrix, the hierarchical clustering algorithm either (A) joins iteratively the two closest clusters starting from single data points (agglomerative, bottom-up approach, which is the more commonly used), or (B) partitions clusters iteratively starting from the complete set (divisive, top-down approach). After each step, a new distance matrix between the newly formed clusters and the other clusters is recalculated. Hierarchical cluster analysis methods include:
Different studies have already shown empirically that the Single linkage clustering algorithm produces poor results when employed to gene expression microarray data and thus should be avoided. [ 18 ] [ 19 ]
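The sketch below illustrates agglomerative hierarchical clustering of genes using 1 − Pearson correlation as the dissimilarity and average linkage, as discussed above; the expression matrix is randomly generated for illustration and the choice of five clusters is arbitrary:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical expression matrix: rows are genes, columns are samples.
rng = np.random.default_rng(1)
expression = rng.normal(size=(50, 12))

# 1 - Pearson correlation as the dissimilarity, then agglomerative clustering
# with average linkage (single linkage is avoided, as noted above).
distances = pdist(expression, metric="correlation")
tree = linkage(distances, method="average")

# Cut the dendrogram into, e.g., 5 clusters of co-expressed genes.
labels = fcluster(tree, t=5, criterion="maxclust")
```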
K-means clustering is an algorithm for grouping genes or samples based on pattern into K groups. Grouping is done by minimizing the sum of the squares of distances between the data and the corresponding cluster centroid . Thus the purpose of K-means clustering is to classify data based on similar expression. [ 20 ] The k-means clustering algorithm and some of its variants (including k-medoids ) have been shown to produce good results for gene expression data (at least better than hierarchical clustering methods). Empirical comparisons of k-means , k-medoids , hierarchical methods, and different distance measures can be found in the literature. [ 18 ] [ 19 ]
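A minimal Python sketch of k-means clustering of genes by expression pattern is given below; the data and the choice of K are hypothetical, and real analyses would typically standardize the profiles and evaluate several values of K:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical expression matrix: rows are genes, columns are samples/conditions.
rng = np.random.default_rng(2)
expression = rng.normal(size=(200, 8))

# Partition genes into K groups by minimizing within-cluster sums of squared
# distances to the cluster centroids, as described above.
K = 6
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(expression)
cluster_of_gene = kmeans.labels_       # cluster index for each gene
centroids = kmeans.cluster_centers_    # mean expression profile of each cluster
```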
Commercial systems for gene network analysis such as Ingenuity [ 21 ] and Pathway studio [ 22 ] create visual representations of differentially expressed genes based on current scientific literature. Non-commercial tools such as FunRich, [ 23 ] GenMAPP and Moksiskaan also aid in organizing and visualizing gene network data procured from one or several microarray experiments. A wide variety of microarray analysis tools are available through Bioconductor written in the R programming language . The frequently cited SAM module and other microarray tools [ 24 ] are available through Stanford University. Another set is available from Harvard and MIT. [ 25 ]
Specialized software tools for statistical analysis to determine the extent of over- or under-expression of a gene in a microarray experiment relative to a reference state have also been developed to aid in identifying genes or gene sets associated with particular phenotypes . One such method of analysis, known as Gene Set Enrichment Analysis (GSEA), uses a Kolmogorov-Smirnov -style statistic to identify groups of genes that are regulated together. [ 1 ] This third-party statistics package offers the user information on the genes or gene sets of interest, including links to entries in databases such as NCBI's GenBank and curated databases such as Biocarta [ 26 ] and Gene Ontology . Protein complex enrichment analysis tool (COMPLEAT) provides similar enrichment analysis at the level of protein complexes. [ 27 ] The tool can identify the dynamic protein complex regulation under different conditions or time points. Related systems, PAINT [ 28 ] and SCOPE, [ 29 ] perform a statistical analysis on gene promoter regions, identifying over- and under-representation of previously identified transcription factor response elements. Another statistical analysis tool is Rank Sum Statistics for Gene Set Collections (RssGsc), which uses rank sum probability distribution functions to find gene sets that explain experimental data. [ 30 ] A further approach is contextual meta-analysis, i.e. finding out how a gene cluster responds to a variety of experimental contexts. Genevestigator is a public tool to perform contextual meta-analysis across contexts such as anatomical parts, stages of development, and response to diseases, chemicals, stresses, and neoplasms .
Significance analysis of microarrays (SAM) is a statistical technique , established in 2001 by Virginia Tusher, Robert Tibshirani and Gilbert Chu , for determining whether changes in gene expression are statistically significant. With the advent of DNA microarrays , it is now possible to measure the expression of thousands of genes in a single hybridization experiment. The data generated is considerable, and a method for sorting out what is significant and what isn't is essential. SAM is distributed by Stanford University in an R-package . [ 31 ]
SAM identifies statistically significant genes by carrying out gene-specific t-tests and computes a statistic d_j for each gene j , which measures the strength of the relationship between gene expression and a response variable. [ 32 ] [ 33 ] [ 34 ] This analysis uses non-parametric statistics , since the data may not follow a normal distribution . The response variable describes and groups the data based on experimental conditions. In this method, repeated permutations of the data are used to determine if the expression of any gene is significantly related to the response. The use of permutation-based analysis accounts for correlations in genes and avoids parametric assumptions about the distribution of individual genes. This is an advantage over other techniques (e.g., ANOVA and Bonferroni ), which assume equal variance and/or independence of genes. [ 35 ]
The number of permutations is set by the user when imputing correct values for the data set to run SAM.
Types: [ 32 ]
SAM calculates a test statistic for relative difference in gene expression based on permutation analysis of expression data and calculates a false discovery rate. The principal calculations of the program are illustrated below. [ 32 ] [ 33 ] [ 34 ]
The constant s_0 is chosen to minimize the coefficient of variation of d_i . r_i is equal to the expression levels (x) for gene i under y experimental conditions.
False discovery rate (FDR) = Median (or 90th percentile) of # of falsely called genes / Number of genes called significant {\displaystyle \mathrm {FDR} ={\frac {\mathrm {Median\ (or\ 90^{th}\ percentile)\ of\ \#\ of\ falsely\ called\ genes} }{\mathrm {Number\ of\ genes\ called\ significant} }}}
Fold changes (t) are specified to guarantee that genes called significant change by at least a pre-specified amount. This means that the absolute value of the average expression levels of a gene under each of two conditions must be greater than the fold change (t) to be called positive and less than the inverse of the fold change (t) to be called negative.
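The sketch below conveys the flavor of the SAM relative-difference statistic and its permutation null; it is a simplification, not the distributed SAM code; in particular, s0 is fixed here rather than chosen to minimize the coefficient of variation of d_i, and only one permutation is drawn:

```python
import numpy as np

def sam_d_statistic(group_a, group_b, s0=0.1):
    """Relative difference d_i = r_i / (s_i + s0) for each gene (rows).

    Sketch only: s0 is a fixed fudge factor here, whereas SAM chooses s0
    to minimize the coefficient of variation of d_i.
    """
    r = group_a.mean(axis=1) - group_b.mean(axis=1)               # difference of means
    n_a, n_b = group_a.shape[1], group_b.shape[1]
    pooled_var = ((group_a.var(axis=1, ddof=1) * (n_a - 1) +
                   group_b.var(axis=1, ddof=1) * (n_b - 1)) / (n_a + n_b - 2))
    s = np.sqrt(pooled_var * (1.0 / n_a + 1.0 / n_b))             # gene-wise standard error
    return r / (s + s0)

# Hypothetical data: 1000 genes, 4 samples per condition.
rng = np.random.default_rng(3)
a, b = rng.normal(size=(1000, 4)), rng.normal(size=(1000, 4))
d_observed = sam_d_statistic(a, b)

# Permutation null: shuffle sample labels and recompute d to estimate how many
# genes would be called significant by chance (the numerator of the FDR above).
data = np.hstack([a, b])
perm = rng.permutation(data.shape[1])
d_null = sam_d_statistic(data[:, perm[:4]], data[:, perm[4:]])
```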
The SAM algorithm can be stated as:
Entire arrays may have obvious flaws detectable by visual inspection, pairwise comparisons to arrays in the same experimental group, or by analysis of RNA degradation. [ 39 ] Results may improve by removing these arrays from the analysis entirely.
Depending on the type of array, signal related to nonspecific binding of the fluorophore can be subtracted to achieve better results. One approach involves subtracting the average signal intensity of the area between spots. A variety of tools for background correction and further analysis are available from TIGR, [ 40 ] Agilent ( GeneSpring ), [ 41 ] and Ocimum Bio Solutions (Genowiz). [ 42 ]
Visual identification of local artifacts, such as printing or washing defects, may likewise suggest the removal of individual spots. This can take a substantial amount of time depending on the quality of array manufacture. In addition, some procedures call for the elimination of all spots with an expression value below a certain intensity threshold. | https://en.wikipedia.org/wiki/Gene_chip_analysis |
A gene co-expression network (GCN) is an undirected graph , where each node corresponds to a gene , and a pair of nodes is connected with an edge if there is a significant co-expression relationship between them. [ 1 ] Having gene expression profiles of a number of genes for several samples or experimental conditions, a gene co-expression network can be constructed by looking for pairs of genes which show a similar expression pattern across samples, since the transcript levels of two co-expressed genes rise and fall together across samples. Gene co-expression networks are of biological interest since co-expressed genes are controlled by the same transcriptional regulatory program, functionally related, or members of the same pathway or protein complex. [ 2 ]
The direction and type of co-expression relationships are not determined in gene co-expression networks; whereas in a gene regulatory network (GRN) a directed edge connects two genes, representing a biochemical process such as a reaction, transformation, interaction, activation or inhibition. [ 3 ] Compared to a GRN, a GCN does not attempt to infer the causality relationships between genes and in a GCN the edges represent only a correlation or dependency relationship among genes. [ 4 ] Modules or the highly connected subgraphs in gene co-expression networks correspond to clusters of genes that have a similar function or are involved in a common biological process which causes many interactions among themselves. [ 3 ]
Gene co-expression networks are usually constructed using datasets generated by high-throughput gene expression profiling technologies such as Microarray or RNA-Seq . Co-expression networks are used to analyze single cell RNA-Seq data, in order to better characterize the gene to gene relations in a cohort of cells from a specific cell type. [ 5 ]
The concept of gene co-expression networks was first introduced by Butte and Kohane in 1999 as relevance networks . [ 6 ] They gathered the measurement data of medical laboratory tests (e.g. hemoglobin level ) for a number of patients and they calculated the Pearson correlation between the results for each pair of tests and the pairs of tests which showed a correlation higher than a certain level were connected in the network (e.g. insulin level with blood sugar). Butte and Kohane used this approach later with mutual information as the co-expression measure and using gene expression data for constructing the first gene co-expression network. [ 7 ]
A good number of methods have been developed for constructing gene co-expression networks. In principle, they all follow a two step approach: calculating co-expression measure, and selecting significance threshold. In the first step, a co-expression measure is selected and a similarity score is calculated for each pair of genes using this measure. Then, a threshold is determined and gene pairs which have a similarity score higher than the selected threshold are considered to have a significant co-expression relationship and are connected by an edge in the network.
The input data for constructing a gene co-expression network is often represented as a matrix. If we have the gene expression values of m genes for n samples (conditions), the input data would be an m×n matrix, called expression matrix. For instance, in a microarray experiment the expression values of thousands of genes are measured for several samples. In first step, a similarity score (co-expression measure) is calculated between each pair of rows in expression matrix. The resulting matrix is an m×m matrix called the similarity matrix. Each element in this matrix shows how similarly the expression levels of two genes change together. In the second step, the elements in the similarity matrix which are above a certain threshold (i.e. indicate significant co-expression) are replaced by 1 and the remaining elements are replaced by 0. The resulting matrix, called the adjacency matrix , represents the graph of the constructed gene co-expression network. In this matrix, each element shows whether two genes are connected in the network (the 1 elements) or not (the 0 elements).
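The two-step construction described above can be sketched in a few lines of Python; the expression matrix is hypothetical, and absolute Pearson correlation with a hard cutoff of 0.7 is just one possible choice of measure and threshold:

```python
import numpy as np

# Hypothetical expression matrix: m genes x n samples.
rng = np.random.default_rng(4)
expression = rng.normal(size=(100, 20))

# Step 1: similarity matrix, the absolute Pearson correlation between every pair of genes.
similarity = np.abs(np.corrcoef(expression))        # m x m

# Step 2: adjacency matrix, keeping edges whose similarity exceeds the chosen threshold.
threshold = 0.7
adjacency = (similarity >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)                       # no self-edges
```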
The expression values of a gene for different samples can be represented as a vector, thus calculating the co-expression measure between a pair of genes is the same as calculating the selected measure for two vectors of numbers.
Pearson's correlation coefficient , Mutual Information , Spearman's rank correlation coefficient and Euclidean distance are the four mostly used co-expression measures for constructing gene co-expression networks. Euclidean distance measures the geometric distance between two vectors, and so considers both the direction and the magnitude of the vectors of gene expression values. Mutual information measures how much knowing the expression levels of one gene reduces the uncertainty about the expression levels of another. Pearson’s correlation coefficient measures the tendency of two vectors to increase or decrease together, giving a measure of their overall correspondence. Spearman's rank correlation is the Pearson’s correlation calculated for the ranks of gene expression values in a gene expression vector. [ 2 ] Several other measures such as partial correlation , [ 8 ] regression , [ 9 ] and combination of partial correlation and mutual information [ 10 ] have also been used.
Each of these measures has its own advantages and disadvantages. The Euclidean distance is not appropriate when the absolute levels of functionally related genes are highly different. Furthermore, if two genes have consistently low expression levels but are otherwise randomly correlated, they might still appear close in Euclidean space. [ 2 ] One advantage of mutual information is that it can detect non-linear relationships; however this can turn into a disadvantage because of detecting sophisticated non-linear relationships which do not look biologically meaningful. In addition, for calculating mutual information one should estimate the distribution of the data, which needs a large number of samples for a good estimate. Spearman’s rank correlation coefficient is more robust to outliers, but on the other hand it is less sensitive to expression values and in datasets with a small number of samples it may detect many false positives.
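The limitation of Euclidean distance noted above can be illustrated with a small hypothetical example: two genes whose expression rises and falls in perfect synchrony but at very different absolute levels are far apart in Euclidean space yet perfectly correlated:

```python
import numpy as np
from scipy.spatial.distance import euclidean
from scipy.stats import pearsonr

# Two hypothetical genes whose expression rises and falls together across 6 samples,
# but at very different absolute levels.
gene_a = np.array([1.0, 2.0, 1.5, 3.0, 2.5, 4.0])
gene_b = gene_a * 10 + 50

r, p = pearsonr(gene_a, gene_b)
print(euclidean(gene_a, gene_b))   # large: the vectors are far apart geometrically
print(r)                           # 1.0: the expression patterns are perfectly correlated
```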
Pearson’s correlation coefficient is the most popular co-expression measure used in constructing gene co-expression networks. The Pearson's correlation coefficient takes a value between -1 and 1 where absolute values close to 1 show strong correlation. The positive values correspond to an activation mechanism where the expression of one gene increases with the increase in the expression of its co-expressed gene and vice versa. When the expression value of one gene decreases with the increase in the expression of its co-expressed gene, it corresponds to an underlying suppression mechanism and would have a negative correlation.
There are two disadvantages for Pearson correlation measure: it can only detect linear relationships and it is sensitive to outliers. Moreover, Pearson correlation assumes that the gene expression data follow a normal distribution. Song et al. [ 11 ] have suggested biweight midcorrelation (bicor) as a good alternative for Pearson’s correlation. "Bicor is a median based correlation measure, and is more robust than the Pearson correlation but often more powerful than the Spearman's correlation". Furthermore, it has been shown that "most gene pairs satisfy linear or monotonic relationships" which indicates that "mutual information networks can safely be replaced by correlation networks when it comes to measuring co-expression relationships in stationary data [ 11 ] ".
Several methods have been used for selecting a threshold in constructing gene co-expression networks. A simple thresholding method is to choose a co-expression cutoff and select relationships whose co-expression exceeds this cutoff. Another approach is to use Fisher’s Z-transformation, which calculates a z-score for each correlation based on the number of samples. This z-score is then converted into a p-value for each correlation and a cutoff is set on the p-value. Some methods permute the data and calculate a z-score using the distribution of correlations found between genes in the permuted dataset. [ 2 ] Some other approaches have also been used such as threshold selection based on clustering coefficient [ 12 ] or random matrix theory. [ 13 ]
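A minimal sketch of the Fisher Z-transformation approach described above follows; the correlation value and sample size are hypothetical, and the normal approximation assumes roughly bivariate-normal expression values:

```python
import numpy as np
from scipy.stats import norm

def correlation_p_value(r, n):
    """Two-sided p-value for a Pearson correlation r computed from n samples,
    via Fisher's Z-transformation (a sketch of the thresholding approach above)."""
    z = np.arctanh(r)              # Fisher's Z
    se = 1.0 / np.sqrt(n - 3)      # approximate standard error of Z
    return 2 * norm.sf(abs(z) / se)

# Hypothetical edge: a correlation of 0.6 between two genes measured in 40 samples.
print(correlation_p_value(0.6, 40))
```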
The problem with p-value based methods is that the final cutoff on the p-value is chosen based on statistical routines (e.g. a p-value of 0.01 or 0.05 is considered significant), not based on biological insight.
WGCNA is a framework for constructing and analyzing weighted gene co-expression networks . [ 14 ] The WGCNA method selects the threshold for constructing the network based on the scale-free topology of gene co-expression networks. This method constructs the network for several thresholds and selects the threshold which leads to a network with scale-free topology. Moreover, the WGCNA method constructs a weighted network, which means all possible edges appear in the network, but each edge has a weight which shows how significant the co-expression relationship corresponding to that edge is. Of note, threshold selection is intended to coerce networks into a scale-free topology. However, the underlying premise that biological networks are scale-free is contentious. [ 15 ] [ 16 ] [ 17 ]
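The sketch below conveys the idea behind WGCNA's weighted network and soft-threshold selection: raise the absolute correlation to a power β and keep the smallest β that gives an approximately scale-free degree distribution. It is a simplified illustration with random data and a crude goodness-of-fit measure, not the WGCNA package's actual model-fitting code:

```python
import numpy as np

# Hypothetical expression matrix: genes x samples.
rng = np.random.default_rng(5)
expression = rng.normal(size=(200, 30))
cor = np.abs(np.corrcoef(expression))
np.fill_diagonal(cor, 0)

def scale_free_fit(adjacency, n_bins=10):
    """R^2 of log10(p(k)) vs log10(k); higher means closer to scale-free topology."""
    k = adjacency.sum(axis=1)                          # weighted connectivity of each gene
    counts, edges = np.histogram(k, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0
    x, y = np.log10(centers[keep]), np.log10(counts[keep] / counts.sum())
    return np.corrcoef(x, y)[0, 1] ** 2

# Try several soft-threshold powers; the weighted network keeps all edges,
# each with weight |correlation| ** beta.
for beta in (1, 2, 4, 6, 8, 10):
    adjacency = cor ** beta
    print(beta, round(scale_free_fit(adjacency), 2))
```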
lmQCM is an alternative to WGCNA that achieves the same goal of gene co-expression network analysis. lmQCM [ 18 ] stands for local maximal Quasi-Clique Merger, and aims to exploit the locally dense structures in the network; it can therefore mine smaller, densely co-expressed modules by allowing module overlap. The lmQCM algorithm has an R package and a Python module (bundled in Biolearns). The generally smaller size of mined modules can also generate more meaningful gene ontology (GO) enrichment results.
Co-expression networks try to estimate the direct, and sometimes the indirect, correlations between pairs of genes. However, they have several limitations. First, an individual gene may be controlled by multiple regulators. [ 19 ] Second, as discussed in the previous sections, each co-expression computational measure is designed to capture a specific feature and is not necessarily optimal for depicting all types of gene-to-gene transcriptional inter-relation; for example, Pearson correlation captures linear relations, Spearman the ranking of the genes, and so on. Third and last, calculating gene-to-gene co-expression networks for a whole genome results in very large matrices which contain a considerable amount of noise, which makes it difficult to explore their differentiation across cohorts. These challenges should be considered when applying advanced co-expression methods to gene expression data. | https://en.wikipedia.org/wiki/Gene_co-expression_network
Gene delivery is the process of introducing foreign genetic material, such as DNA or RNA , into host cells . [ 1 ] Gene delivery must reach the genome of the host cell to induce gene expression . [ 2 ] Successful gene delivery requires the foreign gene delivery to remain stable within the host cell and can either integrate into the genome or replicate independently of it. [ 3 ] This requires foreign DNA to be synthesized as part of a vector , which is designed to enter the desired host cell and deliver the transgene to that cell's genome. [ 4 ] Vectors utilized as the method for gene delivery can be divided into two categories, recombinant viruses and synthetic vectors (viral and non-viral). [ 2 ] [ 5 ]
In complex multicellular eukaryotes (more specifically Weissmanists ), if the transgene is incorporated into the host's germline cells, the resulting host cell can pass the transgene to its progeny . If the transgene is incorporated into somatic cells, the transgene will stay with the somatic cell line, and thus its host organism. [ 6 ]
Gene delivery is a necessary step in gene therapy for the introduction or silencing of a gene to promote a therapeutic outcome in patients and also has applications in the genetic modification of crops. There are many different methods of gene delivery for various types of cells and tissues. [ 6 ]
Viral based vectors emerged in the 1980s as a tool for transgene expression. In 1983, Albert Siegel described the use of viral vectors in plant transgene expression although viral manipulation via cDNA cloning was not yet available. [ 7 ] The first virus to be used as a vaccine vector was the vaccinia virus in 1984 as a way to protect chimpanzees against hepatitis B . [ 8 ] Non-viral gene delivery was first reported in 1943 by Avery et al. who showed cellular phenotype change via exogenous DNA exposure. [ 9 ]
There are a variety of methods available to deliver genes to host cells. When genes are delivered to bacteria or plants the process is called transformation and when it is used to deliver genes to animals it is called transfection . This is because transformation has a different meaning in relation to animals, indicating progression to a cancerous state. [ 10 ] For some bacteria no external methods are needed to introduce genes as they are naturally able to take up foreign DNA . [ 11 ] Most cells require some sort of intervention to make the cell membrane permeable to DNA and allow the DNA to be stably inserted into the host's genome .
Chemical based methods of gene delivery can use natural or synthetic compounds to form particles that facilitate the transfer of genes into cells. [ 2 ] These synthetic vectors have the ability to electrostatically bind DNA or RNA and compact the genetic information to accommodate larger genetic transfers. [ 5 ] Chemical vectors usually enter cells by endocytosis and can protect genetic material from degradation. [ 6 ]
One of the simplest methods involves altering the environment of the cell and then stressing it by giving it a heat shock . Typically the cells are incubated in a solution containing divalent cations (often calcium chloride ) under cold conditions, before being exposed to a heat pulse. Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. It is suggested that exposing the cells to divalent cations in cold conditions may change or weaken the cell surface structure, making it more permeable to DNA. The heat-pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall.
Another simple method involves using calcium phosphate to bind the DNA and then exposing it to cultured cells. The solution, along with the DNA, is encapsulated by the cells and a small amount of DNA can be integrated into the genome. [ 12 ]
Liposomes and polymers can be used as vectors to deliver DNA into cells. Positively charged liposomes bind with the negatively charged DNA, while polymers can be designed that interact with DNA. [ 2 ] They form lipoplexes and polyplexes respectively, which are then up-taken by the cells. [ 13 ] The two systems can also be combined. [ 6 ] Polymer-based non-viral vectors uses polymers to interact with DNA and form polyplexes. [ 6 ]
The use of engineered inorganic and organic nanoparticles is another non-viral approach for gene delivery. [ 14 ] [ 15 ]
Artificial gene delivery can be mediated by physical methods which uses force to introduce genetic material through the cell membrane. [ 2 ]
Electroporation is a method of promoting competence . Cells are briefly shocked with an electric field of 10-20 kV /cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms.
Another method used to transform plant cells is biolistics , where particles of gold or tungsten are coated with DNA and then shot into young plant cells or plant embryos. [ 16 ] Some genetic material enters the cells and transforms them. This method can be used on plants that are not susceptible to Agrobacterium infection and also allows transformation of plant plastids . Plants cells can also be transformed using electroporation, which uses an electric shock to make the cell membrane permeable to plasmid DNA. Due to the damage caused to the cells and DNA the transformation efficiency of biolistics and electroporation is lower than agrobacterial transformation. [ 17 ]
Microinjection is where DNA is injected through the cell's nuclear envelope directly into the nucleus . [ 11 ]
Sonoporation is the transient permeation of cell membranes assisted by ultrasound , typically in the presence of gas microbubbles . [ 18 ] Sonoporation allows for the entry of genetic material into cells. [ 19 ] [ 20 ]
Photoporation is when laser pulses are used to create pores in a cell membrane to allow entry of genetic material.
Magnetofection uses magnetic particles complexed with DNA and an external magnetic field to concentrate nucleic acid particles into target cells.
A hydrodynamic capillary effect can be used to manipulate cell permeability.
In plants the DNA is often inserted using Agrobacterium -mediated recombination , [ 21 ] taking advantage of the Agrobacterium T-DNA sequence that allows natural insertion of genetic material into plant cells. [ 22 ] Plant tissue is cut into small pieces and soaked in a fluid containing suspended Agrobacterium . The bacteria will attach to many of the plant cells exposed by the cuts. The bacterium uses conjugation to transfer a DNA segment called T-DNA from its plasmid into the plant. The transferred DNA is piloted to the plant cell nucleus and integrated into the host plant's genomic DNA. The plasmid T-DNA is integrated semi-randomly into the genome of the host cell. [ 23 ]
By modifying the plasmid to express the gene of interest, researchers can insert their chosen gene stably into the plant's genome. The only essential parts of the T-DNA are its two small (25 base pair) border repeats, at least one of which is needed for plant transformation. [ 24 ] [ 25 ] The genes to be introduced into the plant are cloned into a plant transformation vector that contains the T-DNA region of the plasmid . An alternative method is agroinfiltration . [ 26 ] [ 27 ]
Virus mediated gene delivery utilizes the ability of a virus to inject its DNA inside a host cell and takes advantage of the virus' own ability to replicate and implement its own genetic material. Viral methods of gene delivery are more likely to induce an immune response, but they have high efficiency. [ 6 ] Transduction is the process that describes virus-mediated insertion of DNA into the host cell. Viruses are a particularly effective form of gene delivery because the structure of the virus prevents degradation via lysosomes of the DNA it is delivering to the nucleus of the host cell. [ 28 ] In gene therapy a gene that is intended for delivery is packaged into a replication-deficient viral particle to form a viral vector . [ 29 ] Viruses used for gene therapy to date include retrovirus, adenovirus, adeno-associated virus and herpes simplex virus. However, there are drawbacks to using viruses to deliver genes into cells. Viruses can only deliver very small pieces of DNA into the cells, the process is labor-intensive, and there are risks of random insertion sites, cytopathic effects and mutagenesis. [ 30 ]
Viral vector based gene delivery uses a viral vector to deliver genetic material to the host cell. This is done by using a virus that contains the desired gene and removing the part of the viruses genome that is infectious. [ 2 ] Viruses are efficient at delivering genetic material to the host cell's nucleus, which is vital for replication. [ 2 ]
RNA-based viral vectors were developed because of the ability to transcribe directly from infectious RNA transcripts. RNA vectors are quickly expressed, and are expressed in the targeted form, since no processing is required [source needed]. Retroviral vectors, including oncoretroviral, lentiviral and human foamy virus vectors, are RNA-based viral vectors that are reverse transcribed and integrated into the host genome, permitting long-term transgene expression . [ 2 ]
DNA-based viral vectors include Adenoviridae , adeno-associated virus and herpes simplex virus . [ 2 ]
Several of the methods used to facilitate gene delivery have applications for therapeutic purposes. Gene therapy utilizes gene delivery to deliver genetic material with the goal of treating a disease or condition in the cell. Gene delivery in therapeutic settings utilizes non- immunogenic vectors capable of cell specificity that can deliver an adequate amount of transgene expression to cause the desired effect. [ 3 ]
Advances in genomics have enabled a variety of new methods and gene targets to be identified for possible applications. DNA microarrays used alongside a variety of next-generation sequencing studies can identify thousands of genes simultaneously, with analytical software examining gene expression patterns and orthologous genes in model species to identify function. [ 31 ] This has allowed a variety of possible vectors to be identified for use in gene therapy. As a method for creating a new class of vaccine, gene delivery has been utilized to generate a hybrid biosynthetic vector to deliver a possible vaccine. This vector overcomes traditional barriers to gene delivery by combining E. coli with a synthetic polymer to create a vector that maintains plasmid DNA while having an increased ability to avoid degradation by target cell lysosomes. [ 32 ] | https://en.wikipedia.org/wiki/Gene_delivery
Gene deserts are regions of the genome that are devoid of protein-coding genes. Gene deserts constitute an estimated 25% of the entire genome, leading to recent interest in their true functions. [ 1 ] Originally believed to contain inessential " junk DNA " because they do not encode proteins, gene deserts have since been linked to several vital regulatory functions, including distal enhancing and conservatory inheritance. Thus, an increasing number of risk factors for several major diseases, including a handful of cancers, have been attributed to irregularities found in gene deserts.
One of the most notable examples is the 8q24 gene region, which, when affected by certain single nucleotide polymorphisms , leads to a myriad of diseases. The major identifying factors of gene deserts lie in their low GpC content and their relatively high levels of repeats, which are not observed in coding regions. Recent studies have further categorized gene deserts into variable and stable forms; regions are categorized based on their behavior through recombination and their genetic contents. Although current knowledge of gene deserts is rather limited, ongoing research and improved techniques are beginning to open the doors for exploration of the various important effects of these noncoding regions.
Although the possibility of function in gene deserts was predicted as early as the 1960s, genetic identification tools were unable to uncover any specific characteristics of the long noncoding regions, other than that no coding occurred in those regions. [ 2 ]
Before the completion of the human genome in 2001 through the Human Genome Project , most of the early associative gene comparisons relied on the belief that essential housekeeping genes were clustered in the same areas of the genome for ease of access and tight regulation. This belief later gave rise to a hypothesis that gene deserts are former regulatory sequences that are highly linked (and hence do not undergo recombination), but have accumulated substitutions between them over time. [ 2 ] [ 3 ] These substitutions could cause tightly conserved genes to separate over time, thus forming regions of noncoding sequence with a few essential genes. However, uncertainty due to differential gene conservation rates in different portions of chromosomes prevented accurate identification.
Later associations were remodeled when regulatory sequences were associated with transcription factors, leading to the birth of large-scale genome-wide mapping. Thus began the hunt for the contents and functions of gene deserts.
Recent advancements in the screening of chromatin signatures on chromosomes (for instance, chromosome conformation capture , also known as 3C) have allowed the confirmation of the long-range gene activation model, which postulates that there are indeed physical links between regulatory enhancers and their target promoters . [ 2 ] Research on gene deserts, although centralized on human genetics, has also been applied to mice, various birds, and Drosophila melanogaster . [ 4 ] [ 5 ] Although conservation is variable among the selected species' genomes, orthologous gene deserts function similarly. Thus, the prevailing contention is that these noncoding sequences harbor active and important regulatory elements.
One study focused on a regulatory archipelago, a region with “islands” of coding sequences surrounded by vast noncoding regions. The study, which explored the effects of regulation on the hox genes , initially focused on two enhancer sequences, GCR and Prox, which are located 200 basepairs and 50 basepairs respectively upstream of the Hox D locus. [ 5 ] To manipulate the region, the study inverted the two enhancer sequences and discovered no major effects on the transcription of the Hox D gene, even though the two sequences were the closest sequences to the gene. Thus, the researchers turned to the gene desert that flanked the GCR sequence upstream and found five regulatory islands within it that could regulate the gene. To select the most likely candidate, the study then applied several individual and multiple deletions to the five islands to observe the effects. These varied deletions resulted in only minor effects, including physical abnormalities or a few missing digits.
When the experiment was taken a step further and the entire 830 kilobase gene desert was deleted, the entire Hox D locus was rendered inactive. [ 5 ] This indicates that the neighboring gene desert, as an entire 830 kilobase unit (including the five island sequences within it), serves as an important regulator of a single gene locus that spans merely 50 kilobases. These results therefore hinted at the regulatory effects of flanking gene deserts. This study was supported by a later observation, made through a comparison between fluorescence in situ hybridization and chromosome conformation capture , that the Hox D locus was the most decondensed portion in the region. This meant that it had relatively higher activity in comparison to the flanking gene deserts. [ 6 ] Hence, the Hox D locus could be regulated by specific nearby enhancer sequences that were not expressed in unison. However, this also cautions that proximity is an unreliable guide when either analytical method is used. [ 6 ] Thus, associations between regulatory gene deserts and their target promoters seem to span variable distances, and the deserts are not required to act as borders.
This variability in distance suggests that distance itself may be another important factor determined by gene deserts. For instance, distal enhancers may interact with their target promoters through looping interactions, which must act over a certain distance. [ 7 ] Thus, proximity is not an accurate predictor of enhancers: enhancers do not need to border their target sequences to regulate them. While this leads to a variation in distances, the enhancer-mediated interaction complex lies on average 120 kilobases upstream of the transcription start site. [ 7 ]
Gene deserts may play a role in establishing this distance to allow maximal looping to occur. Given that enhancer complex formation is a fairly simple regulated mechanism (the structures recruited into the enhancing complex are subject to various regulatory controls that govern its construction), more than 50% of promoters have several long-range interactions. Certain core genes even have up to 20 possible enhancing interactions. There is a curious bias for complexes to form only upstream of the promoters. [ 7 ] Thus, given that many regulatory gene deserts appear upstream of their target promoters, it is possible that the more immediate role of gene deserts is in the long-range regulation of key sequences.
As the ideal formation of enhancer interactions requires specific constructs, a possible side-product of the regulatory roles of gene deserts may be the conservation of genes: to retain the specific loop lengths and the order of regulating genes hidden in gene deserts, certain portions of gene deserts are more highly conserved than others through inheritance events. These conserved noncoding sequences (CNS) are directly associated with syntenic inheritance in all vertebrates. [ 8 ] Thus, the presence of these CNSs could serve to conserve large regions of genes.
Although distance may vary in regulatory gene deserts, distance appears to have an upper limit in conservative gene deserts. CNSs were initially thought to occur close to their conserved genes: earlier estimates placed most CNSs in proximity to gene sequences. [ 8 ] However, the expansion of genetic data has revealed that several CNSs reside up to 2.5 megabases from their target genes, with the majority of CNSs falling between 1 and 2 megabases. This range, which was measured for the human genome, varies among species. For instance, in comparison to humans, the Fugu fish has a smaller range, with an estimated maximum distance of a few hundred kilobases. Regardless of the difference in lengths, CNSs work in similar ways in both species. [ 8 ] Thus, as functions differ between gene deserts, so do their contents.
Certain gene deserts are heavy regulators, while others may be deleted without any effect. As a possible classification, gene deserts can be broken down into two subtypes: stable and variable. [ 1 ] Stable gene deserts have fewer repeats and have relatively higher Guanine to Cytosine (GpC) content than observed in variable gene deserts.
Guanine and cytosine content is indicative of protein-coding functionality. For example, in a study of chromosomes 2 and 4, which have been linked to several genetic diseases, there was elevated GpC content in certain regions. [ 9 ] Mutations in these GC-rich regions caused a variety of diseases, revealing the necessary integrity of these genes. High-density CpG regions serve as regulatory regions for DNA methylation . [ 10 ] Therefore, essential coding genes should be represented by high-CpG regions. In particular, regions with high GC content should tend to have high densities of genes devoted mainly to essential housekeeping and tissue-specific processes. [ 11 ] These processes would require the most protein production to express functionality. Stable gene deserts, which have higher levels of GC content, should therefore contain the essential enhancer sequences. This could determine the conservatory functions of stable gene deserts.
On the other hand, approximately 80% of gene deserts have low GpC content, indicating that they have very few essential genes. [ 9 ] Thus, the majority of gene deserts are variable gene deserts, which may have alternate functions. One prevalent theory regarding the origins of gene deserts postulates that gene deserts are accumulations of essential genes that act at a distance. [ 1 ] [ 10 ] This may hold true, as given the low numbers of essential genes within them, these regions would have been less conserved. As a result, cytosine-to-thymine conversions, the most common SNP , would cause a gradual separation between the few essential genes within variable gene deserts. These essential sequences would have been maintained and conserved, leading to small regions of high density that regulate at a distance. [ 10 ] GC content is therefore an indication of the presence of coding or regulatory processes in DNA.
While stable gene deserts have higher GC content, this relative value is only an average. Within stable gene deserts, although the ends contain very high levels of GC content, the main bulk of the DNA contains even less GC content than observed in variable gene deserts. This indicates that there are very few highly conserved regions in stable gene deserts that do not recombine, or do so at very low rates. [ 9 ] Given that the ends of stable gene deserts have particularly high levels of GC content, these sequences must be extremely conserved. This conservation may in turn cause the flanking genes to also have higher conservation rates. Thus, stable gene deserts should be directly linked to at least one of their flanking genes and cannot be separated from coding sequences by recombination events. [ 1 ] Most gene deserts appear to cluster in pairs around a small number of genes. This clustering creates long loci that have very low gene density; small regions with high numbers of genes are surrounded by long stretches of gene deserts, creating a low gene average. Therefore, the minimized probability of recombination events in these long loci creates syntenic blocks that are inherited together over time. [ 1 ] These syntenic blocks can be conserved for very long periods of time, preventing loss of essential material, even while the distance between essential genes may grow over time.
Although this effect should theoretically be amplified by the even lower GC content in variable gene deserts (thereby truly minimizing gene density), the gene conservation rates in variable gene deserts are even lower than those observed in stable gene deserts; in fact, the rate is far lower than in the rest of the genome. A possible explanation for this phenomenon is that variable gene deserts may be recently evolved regions that have not yet been fixed into stable gene deserts. [ 1 ] Therefore, shuffling may still occur before stabilizing regions within the variable gene deserts begin to cluster as whole units. There are a few exceptions to this minimal rate of conservation, as a few GC-rich gene deserts are subjected to hypermethylation, which greatly reduces the accessibility of the DNA, thus effectively protecting the region from recombination. [ 11 ] However, such cases are rarely observed.
Although stable and variable gene deserts differ in content and function, both wield conservatory abilities. It is possible that, since most variable gene deserts have regulatory elements that can act at a distance, conservation of the entire gene desert into a syntenic locus would not have been necessary, so long as these regulatory elements themselves were conserved as units. Given the particularly low levels of GC content, the regulatory elements would therefore be in a minimal gene density situation similar to that observed in flanking stable gene deserts, with the same effect. Thus, both types of gene deserts serve to retain essential genes within the genome.
The conservative nature of gene deserts confirms that these stretches of noncoding bases are essential to proper functioning. Indeed, a wide range of studies on irregularities in noncoding regions has discovered several associations with genetic diseases. One of the most studied gene deserts is the 8q24 region. Early genome-wide association studies focused on the 8q24 region (residing on chromosome 8 ) due to the abnormally high rates of SNPs that occur in the region. These studies found that the region was linked to increased risks for a variety of cancers, notably prostate, breast, ovarian, colon, and pancreatic cancers. [ 12 ] [ 13 ] Using inserts of the gene desert into bacterial artificial chromosomes, one study was able to produce enhancer activity in certain regions, which were isolated via cloning systems. [ 14 ] This study successfully identified an enhancer sequence hidden in the region. Within this enhancer sequence, an SNP that conferred risk for prostate cancer, labeled rs6983267, was discovered in diseased mice. However, the 8q24 region is not solely limited to conferred risks of prostate cancer. A study in 2008 screened human subjects (and controls) with variations in the gene desert region, discovering five different regions that conferred different risks when affected by different SNPs. [ 12 ] This study used identified SNP markers in the gene desert to link the risk conferred by each of the regions to expression in a specific tissue. Although these risks were successfully linked to various forms of cancer, Ghoussaini, M., et al. note their uncertainty in whether the SNPs functioned merely as markers or were the direct causes of the cancers.
These varied effects occur due to the different interactions between the SNPs in this region and MYC promoters in different organs. The MYC promoter, which is located a short distance downstream of the 8q24 region, is perhaps the most studied oncogene because of its association with a myriad of diseases. [ 13 ] Normal functioning of the MYC promoter ensures that cells divide regularly. The study postulates that the 8q region, which underwent a chromosomal translocation in humans, could have moved an essential enhancer for the MYC promoter. [ 13 ] The areas around this region could have been subjected to recombination that hid the essential MYC enhancer within the gene desert over time, although its enhancing effects are still very much retained. This analysis stems from disease associations observed in several mouse species in which this region is retained in proximity to the MYC promoter. [ 13 ] Thus, the 8q24 gene desert should have been somewhat linked to the MYC promoter. The desert resembles a stable gene desert that has undergone very little recombination after the translocation event. Thus, a potential hypothesis is that SNPs affecting this region disrupt important tissue-specific genes within the stable gene desert, which could explain the risks of cancer in various tissues. This effect of hidden enhancer elements can also be observed at other locations in the genome. For instance, SNPs in the 5p13.1 region deregulate the PTGER4 coding region, leading to Crohn's disease. [ 15 ] Another affected region in the 9p21 gene desert causes several coronary artery diseases. [ 16 ] However, none of these risk-conferring gene deserts seem to be affected as much as the 8q24 region. Current studies are still unsure about the SNP-affected processes in the 8q24 region that result in particularly amplified responses from the MYC promoter. With the aid of a more accessible population and more specific markers for genome-wide association mapping, an increasing number of risk alleles are now being identified in gene deserts, where small, isolated, and seemingly unimportant regions of genes may moderate important genes.
The majority of the contents in gene deserts are still likely to be disposable. [ citation needed ] Naturally, this is not to say that the roles that gene deserts play are inessential or unimportant, but rather that their functions may include buffering effects. An example of essential gene deserts with inessential DNA content is the telomeres that protect the ends of genomes. Telomeres can be categorized as true gene deserts, given that they solely contain repeats of TTAGGG (in humans) and have no apparent protein-coding functions. Without these telomeres, human genomes would be severely mutated within a fixed number of cell cycles. On the other hand, since telomeres do not code for proteins, their loss ensures that there is no direct effect on important coding processes. Therefore, the term “junk” DNA should no longer be applied to any region of the genome; every portion of the genome should play a role in protecting, regulating, or repairing the protein-coding regions that determine the functions of life. Although there is still much to learn about the immense (yet finite) human genome, new technologies and the availability of complete human genome sequences may reveal much more about these noncoding regions in the coming years. | https://en.wikipedia.org/wiki/Gene_desert
A gene drive is a natural process [ 1 ] and technology of genetic engineering that propagates a particular suite of genes throughout a population [ 2 ] by altering the probability that a specific allele will be transmitted to offspring (instead of the Mendelian 50% probability). Gene drives can arise through a variety of mechanisms. [ 3 ] [ 4 ] They have been proposed to provide an effective means of genetically modifying specific populations and entire species.
The technique can employ adding, deleting, disrupting, or modifying genes. [ 5 ] [ 6 ]
Proposed applications include exterminating insects that carry pathogens (notably mosquitoes that transmit malaria , dengue , and zika pathogens), controlling invasive species , or eliminating herbicide or pesticide resistance . [ 7 ] [ 5 ] [ 8 ] [ 9 ]
As with any potentially powerful technique, gene drives can be misused in a variety of ways or induce unintended consequences . For example, a gene drive intended to affect only a local population might spread across an entire species. Gene drives that eradicate populations of invasive species in their non-native habitats may have consequences for the population of the species as a whole, even in its native habitat. Any accidental return of individuals of the species to its original habitats, through natural migration, environmental disruption (storms, floods, etc.), accidental human transportation, or purposeful relocation, could unintentionally drive the species to extinction if the relocated individuals carried harmful gene drives. [ 10 ]
Gene drives can be built from many naturally occurring selfish genetic elements that use a variety of molecular mechanisms. [ 3 ] These naturally occurring mechanisms induce similar segregation distortion in the wild, arising when alleles evolve molecular mechanisms that give them a transmission chance greater than the normal 50%.
Most gene drives have been developed in insects, notably mosquitoes, as a way to control insect-borne pathogens. Recent developments designed gene drives directly in viruses, notably herpesviruses . These viral gene drives can propagate a modification into the population of viruses, and aim to reduce the infectivity of the virus. [ 11 ] [ 12 ]
In sexually-reproducing species, most genes are present in two copies (which can be the same or different alleles ), either one of which has a 50% chance of passing to a descendant. By biasing the inheritance of particular altered genes, synthetic gene drives could more effectively spread alterations through a population. [ 5 ] [ 6 ]
Typically, scientists insert the gene drive into an organism's DNA along with the CRISPR-Cas9 machinery. When the modified organism mates and its DNA mixes with that of its mate, the CRISPR-Cas9 tool cuts the partner's DNA at the same spot where the gene drive is located in the first organism. The cell repairs the cut DNA by copying the gene drive from the first organism into the corresponding spot in the DNA of the offspring. This means both copies of the gene (one from each parent) now contain the gene drive.
At the molecular level, an endonuclease gene drive works by cutting a chromosome at a specific site that does not encode the drive, inducing the cell to repair the damage by copying the drive sequence onto the damaged chromosome. The cell then has two copies of the drive sequence. The method derives from genome editing techniques and relies on homologous recombination . To achieve this behavior, endonuclease gene drives consist of two nested elements: an endonuclease that cuts the target sequence on the wild-type chromosome (a homing endonuclease, or an RNA-guided endonuclease such as Cas9 together with its guide RNA), and the drive cassette itself, flanked by homology arms matching the sequences on either side of the cut site so that it can serve as the template for homology-directed repair.
As a result, the gene drive insertion in the genome will re-occur in each organism that inherits one copy of the modification and one copy of the wild-type gene. If the gene drive is already present in the egg cell (e.g. when received from one parent), all the gametes of the individual will carry the gene drive (instead of 50% in the case of a normal gene). [ 5 ]
Since it can never more than double in frequency with each generation, a gene drive introduced in a single individual typically requires dozens of generations to affect a substantial fraction of a population. Alternatively, releasing drive-containing organisms in sufficient numbers can affect the rest within a few generations; for instance, by introducing it in every thousandth individual, it takes only 12–15 generations to be present in all individuals. [ 16 ] Whether a gene drive will ultimately become fixed in a population and at which speed depends on its effect on individual fitness, on the rate of allele conversion, and on the population structure. In a well mixed population and with realistic allele conversion frequencies (≈90%), population genetics predicts that gene drives get fixed for a selection coefficient smaller than 0.3; [ 16 ] in other words, gene drives can be used to spread modifications as long as reproductive success is not reduced by more than 30%. This is in contrast with normal genes, which can only spread across large populations if they increase fitness.
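The dynamics described above can be illustrated with a minimal deterministic population-genetics model. The sketch below is not taken from the cited studies; the conversion efficiency c, fitness cost s, release frequency, and the assumption of a well-mixed, randomly mating population without resistance alleles are all simplifications chosen for illustration.

```python
# Minimal sketch (illustrative assumptions, not the cited models): deterministic
# spread of a gene-drive allele D in a well-mixed, randomly mating population.
# c = allele-conversion efficiency in heterozygotes, s = fitness cost of carrying
# the drive, p0 = initial drive-allele frequency at release.

def drive_frequency_trajectory(p0, c=0.9, s=0.1, generations=20):
    """Return the drive-allele frequency over successive generations.

    In a D/+ heterozygote, a fraction c of wild-type alleles is converted to D
    before gamete formation, so heterozygotes transmit D with probability
    (1 + c) / 2 instead of the Mendelian 1/2. Drive carriers have fitness 1 - s.
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        q = 1.0 - p
        # Hardy-Weinberg genotype frequencies weighted by relative fitness.
        f_dd = p * p * (1.0 - s)
        f_dq = 2.0 * p * q * (1.0 - s)
        f_qq = q * q
        mean_w = f_dd + f_dq + f_qq
        # DD parents always transmit D; D/+ parents transmit D with prob (1+c)/2.
        p = (f_dd + f_dq * (1.0 + c) / 2.0) / mean_w
        trajectory.append(p)
    return trajectory

if __name__ == "__main__":
    # Releasing heterozygous carriers at roughly 1 in 1,000 individuals
    # corresponds to an initial allele frequency of about 0.0005.
    for gen, freq in enumerate(drive_frequency_trajectory(p0=0.0005)):
        print(f"generation {gen:2d}: drive allele frequency = {freq:.4f}")
```

With these illustrative parameters the drive allele grows by a factor of roughly (1 − s)(1 + c) per generation while rare, reaching high frequency within about 12–15 generations, consistent with the figures quoted above; lowering c or raising s slows or prevents fixation.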
Because the strategy usually relies on the simultaneous presence of an unmodified and a gene drive allele in the same cell nucleus , it had generally been assumed that a gene drive could only be engineered in sexually reproducing organisms, excluding bacteria and viruses . However, during a viral infection , viruses can accumulate hundreds or thousands of genome copies in infected cells. Cells are frequently co-infected by multiple virions and recombination between viral genomes is a well-known and widespread source of diversity for many viruses. In particular, herpesviruses are nuclear-replicating DNA viruses with large double-stranded DNA genomes and frequently undergo homologous recombination during their replication cycle.
These properties have enabled the design of a gene drive strategy that doesn't involve sexual reproduction, instead relying on co-infection of a given cell by a naturally occurring and an engineered virus. Upon co-infection, the unmodified genome is cut and repaired by homologous recombination, producing new gene drive viruses that can progressively replace the naturally occurring population. In cell culture experiments, it was shown that a viral gene drive can spread into the viral population and strongly reduce the infectivity of the virus, which opens novel therapeutic strategies against herpesviruses. [ 11 ]
Because gene drives propagate by replacing other alleles that contain a cutting site and the corresponding homologies, their application has been mostly limited to sexually reproducing species (because they are diploid or polyploid and alleles are mixed at each generation). As a side effect, inbreeding could in principle be an escape mechanism, but the extent to which this can happen in practice is difficult to evaluate. [ 17 ]
Due to the number of generations required for a gene drive to affect an entire population, the time to universality varies according to the reproductive cycle of each species: it may require under a year for some invertebrates, but centuries for organisms with years-long intervals between birth and sexual maturity , such as humans. [ 18 ] Hence this technology is of most use in fast-reproducing species.
Effectiveness in practice varies between techniques, especially with the choice of germline promoter. Lin and Potter (2016a) describe the promoter technology homology-assisted CRISPR knock-in (HACK), and Lin and Potter (2016b) demonstrate its use, achieving a high proportion of altered progeny from each altered Drosophila mother. [ 19 ]
Researchers have highlighted a number of issues with the technology. [ 20 ]
The Broad Institute of MIT and Harvard added gene drives to a list of uses of gene-editing technology it doesn't think companies should pursue. [ 21 ] [ better source needed ]
Gene drives affect all future generations and represent the possibility of a larger change in a living species than has been possible before. [ 22 ]
In December 2015, scientists of major world academies called for a moratorium on inheritable human genome edits that would affect the germline, including those related to CRISPR-Cas9 technologies, [ 23 ] but supported continued basic research and gene editing that would not affect future generations. [ 24 ] In February 2016, British scientists were given permission by regulators to genetically modify human embryos by using CRISPR-Cas9 and related techniques on condition that the embryos were destroyed in seven days. [ 25 ] [ 26 ] In June 2016, the US National Academies of Sciences, Engineering, and Medicine released a report on their "Recommendations for Responsible Conduct" of gene drives. [ 27 ]
A 2018 mathematical modelling study suggests that despite preexisting and evolving gene drive resistance (caused by mutations at the cutting site), even an inefficient CRISPR "alteration-type" gene drive can achieve fixation in small populations. With a small but non-zero amount of gene flow among many local populations, the gene drive can escape and convert outside populations as well. [ 28 ]
Kevin M. Esvelt stated that an open conversation was needed around the safety of gene drives: "In our view, it is wise to assume that invasive and self-propagating gene drive systems are likely to spread to every population of the target species throughout the world. Accordingly, they should only be built to combat true plagues such as malaria, for which we have few adequate countermeasures and that offer a realistic path towards an international agreement to deploy among all affected nations.". [ 29 ] He moved to an open model for his own research on using gene drives to eradicate Lyme disease in Nantucket and Martha's Vineyard . [ 30 ] Esvelt and colleagues suggested that CRISPR could be used to save endangered wildlife from extinction. Esvelt later retracted his support for the idea, except for extremely hazardous populations such as malaria-carrying mosquitoes, and isolated islands that would prevent the drive from spreading beyond the target area. [ 31 ]
Austin Burt, an evolutionary geneticist at Imperial College London , introduced the possibility of conducting gene drives based on natural homing endonuclease selfish genetic elements in 2003. [ 6 ]
Researchers had already shown that such genes could act selfishly to spread rapidly over successive generations. Burt suggested that gene drives might be used to prevent a mosquito population from transmitting the malaria parasite or to crash a mosquito population. Gene drives based on homing endonucleases have been demonstrated in the laboratory in transgenic populations of mosquitoes [ 32 ] and fruit flies. [ 33 ] [ 34 ] However, homing endonucleases are sequence-specific. Altering their specificity to target other sequences of interest remains a major challenge. [ 3 ] The possible applications of gene drive remained limited until the discovery of CRISPR and associated RNA-guided endonucleases such as Cas9 and Cas12a .
In June 2014, the World Health Organization (WHO) Special Programme for Research and Training in Tropical Diseases [ 35 ] issued guidelines [ 36 ] for evaluating genetically modified mosquitoes. In 2013 the European Food Safety Authority issued a protocol [ 37 ] for environmental assessments of all genetically modified organisms .
Target Malaria , a project funded by the Bill and Melinda Gates Foundation , invested $75 million in gene drive technology. The foundation originally estimated the technology to be ready for field use by 2029 somewhere in Africa. However, in 2016 Gates changed this estimate to some time within the following two years. [ 38 ] In December 2017, documents released under the Freedom of Information Act showed that DARPA had invested $100 million in gene drive research. [ 39 ]
Scientists have designed multiple strategies to maintain control over gene drives. [ citation needed ]
In 2020, researchers reported the development of two active guide RNA -only elements that, according to their study, may enable halting or deleting gene drives introduced into populations in the wild with CRISPR-Cas9 gene editing . The paper's senior author cautions that the two neutralizing systems they demonstrated in cage trials "should not be used with a false sense of security for field-implemented gene drives". [ 40 ] [ 41 ]
If elimination is not necessary, it may be desirable to intentionally preserve the target population at a lower level by using a less severe gene drive technology. This works by maintaining the semi-defective population indefinitely in the target area, thereby crowding out potential nearby, wild populations that would otherwise move back in to fill a void. [ 42 ]
CRISPR [ 43 ] is the leading genetic engineering method. [ 44 ] In 2014, Esvelt and coworkers first suggested that CRISPR/Cas9 might be used to build gene drives. [ 5 ] In 2015, researchers reported successful engineering of CRISPR-based gene drives in Saccharomyces [ 45 ] , Drosophila , [ 46 ] and mosquitoes . [ 47 ] [ 48 ] They reported efficient inheritance distortion over successive generations, with one study demonstrating the spread of a gene into laboratory populations. [ 48 ] Drive-resistant alleles were expected to arise for each of the described gene drives; however, this could be delayed or prevented by targeting highly conserved sites at which resistance was expected to have a severe fitness cost.
Because of CRISPR's targeting flexibility, gene drives could theoretically be used to engineer almost any trait. Unlike previous approaches, they could be tailored to block the evolution of drive resistance by targeting multiple sequences. CRISPR could also enable gene drive architectures that control rather than eliminate populations. [ citation needed ]
In 2022, t-CRISPR was used to pass the “t haplotype” gene to about 95% of offspring. The approach spreads faulty copies of a female fertility gene to offspring, rendering them infertile. The researchers reported that their models suggested that adding 256 altered animals to an island with a population of 200,000 mice would eliminate the population in about 25 years. The traditional approaches of poison and traps were not needed. [ 49 ]
Gene drives have two main classes of application, with implications of different significance.
Because of their unprecedented potential risk, safeguard mechanisms have been proposed and tested. [ 45 ] [ 50 ]
One possible application is to genetically modify mosquitoes , mice , and other disease vectors so that they cannot transmit diseases, such as malaria and dengue fever in the case of mosquitoes, and tick-borne disease in the case of mice. [ 51 ] Researchers have claimed that by applying the technique to 1% of the wild population of mosquitoes, they could eradicate malaria within a year. [ 52 ]
A gene drive could be used to eliminate invasive species and has, for example, been proposed as a way to eliminate invasive species in New Zealand . [ 53 ] Gene drives for biodiversity conservation purposes are being explored as part of The Genetic Biocontrol of Invasive Rodents (GBIRd) program because they offer the potential for reduced risk to non-target species and reduced costs when compared to traditional invasive species removal techniques. Given the risks of such an approach described below, the GBIRd partnership is committed to a deliberate, step-wise process that will only proceed with public alignment, as recommended by the world's leading gene drive researchers from the Australian and US National Academies of Sciences and many others. [ 54 ] A wider outreach network for gene drive research exists to raise awareness of the value of gene drive research for the public good. [ 55 ]
Some scientists are concerned about the technique, fearing it could spread and wipe out species in native habitats. [ 56 ] The gene could mutate, potentially causing unforeseen problems (as could any gene). [ 57 ] Many non-native species can hybridize with native species, such that a gene drive afflicting a non-native plant or animal that hybridizes with a native species could doom the native species. Many non-native species have naturalized into their new environment so well that crops and/or native species have adapted to depend on them. [ 58 ]
The Predator Free 2050 project is a New Zealand government program to eliminate eight invasive mammalian predator species (including rats, short-tailed weasels, and possums) from the country by 2050. [ 59 ] [ 60 ] The project was first announced in 2016 by New Zealand's prime minister John Key and in January 2017 it was announced that gene drives would be considered in the effort, but this has not yet been actualised. [ 60 ] In 2017, one group in Australia and another in Texas released preliminary research into creating 'daughterless mice' using gene drives in mammals. [ 61 ]
In 2017, scientists at the University of California, Riverside developed a gene drive to attack the invasive spotted-wing drosophila , a type of fruit fly native to Asia that costs California's cherry farms $700 million per year because its razor-edged ovipositor destroys unblemished fruit. The primary alternative control strategy involves the use of insecticides called pyrethroids, which kill almost all insects they contact. [ 21 ]
The transhumanist philosopher David Pearce has advocated for using CRISPR-based gene drives to reduce the suffering of wild animals . [ 62 ] Kevin M. Esvelt , an American biologist who has helped develop gene drive technology, has argued that there is a moral case for the elimination of the New World screwworm through such technologies because of the immense suffering that infested wild animals experience when they are eaten alive. [ 63 ] | https://en.wikipedia.org/wiki/Gene_drive |
Gene duplication (or chromosomal duplication or gene amplification ) is a major mechanism through which new genetic material is generated during molecular evolution . It can be defined as any duplication of a region of DNA that contains a gene . Gene duplications can arise as products of several types of errors in DNA replication and repair machinery as well as through fortuitous capture by selfish genetic elements. Common sources of gene duplications include ectopic recombination , retrotransposition events, aneuploidy , polyploidy , and replication slippage . [ 1 ]
Duplications arise from an event termed unequal crossing-over that occurs during meiosis between misaligned homologous chromosomes. The chance of it happening is a function of the degree of sharing of repetitive elements between two chromosomes. The products of this recombination are a duplication at the site of the exchange and a reciprocal deletion. Ectopic recombination is typically mediated by sequence similarity at the duplicate breakpoints, which form direct repeats. Repetitive genetic elements such as transposable elements offer one source of repetitive DNA that can facilitate recombination, and they are often found at duplication breakpoints in plants and mammals. [ 2 ]
Replication slippage is an error in DNA replication that can produce duplications of short genetic sequences. During replication DNA polymerase begins to copy the DNA. At some point during the replication process, the polymerase dissociates from the DNA and replication stalls. When the polymerase reattaches to the DNA strand, it aligns the replicating strand to an incorrect position and incidentally copies the same section more than once. Replication slippage is also often facilitated by repetitive sequences, but requires only a few bases of similarity. [ citation needed ]
Retrotransposons , mainly L1 , can occasionally act on cellular mRNA. Transcripts are reverse transcribed to DNA and inserted at a random place in the genome, creating retrogenes. The resulting sequences usually lack introns and often contain poly(A) sequences that are also integrated into the genome. Many retrogenes display changes in gene regulation in comparison to their parental gene sequences, which sometimes results in novel functions. Retrogenes can move between different chromosomes to shape chromosomal evolution. [ 3 ]
Aneuploidy occurs when nondisjunction at a single chromosome results in an abnormal number of chromosomes. Aneuploidy is often harmful and in mammals regularly leads to spontaneous abortions (miscarriages). Some aneuploid individuals are viable, for example trisomy 21 in humans, which leads to Down syndrome . Aneuploidy often alters gene dosage in ways that are detrimental to the organism; therefore, it is unlikely to spread through populations.
Polyploidy , or whole genome duplication is a product of nondisjunction during meiosis which results in additional copies of the entire genome. Polyploidy is common in plants, but it has also occurred in animals, with two rounds of whole genome duplication ( 2R event ) in the vertebrate lineage leading to humans. [ 4 ] It has also occurred in the hemiascomycete yeasts ~100 mya. [ 5 ] [ 6 ]
After a whole genome duplication, there is a relatively short period of genome instability, extensive gene loss, elevated levels of nucleotide substitution and regulatory network rewiring. [ 7 ] [ 8 ] In addition, gene dosage effects play a significant role. [ 9 ] Thus, most duplicates are lost within a short period, however, a considerable fraction of duplicates survive. [ 10 ] Interestingly, genes involved in regulation are preferentially retained. [ 11 ] [ 12 ] Furthermore, retention of regulatory genes, most notably the Hox genes , has led to adaptive innovation.
Rapid evolution and functional divergence have been observed at the level of the transcription of duplicated genes, usually by point mutations in short transcription factor binding motifs. [ 13 ] [ 14 ] Furthermore, rapid evolution of protein phosphorylation motifs, usually embedded within rapidly evolving intrinsically disordered regions is another contributing factor for survival and rapid adaptation/neofunctionalization of duplicate genes. [ 15 ] Thus, a link seems to exist between gene regulation (at least at the post-translational level) and genome evolution. [ 15 ]
Polyploidy is also a well known source of speciation, as offspring, which have different numbers of chromosomes compared to parent species, are often unable to interbreed with non-polyploid organisms. Whole genome duplications are thought to be less detrimental than aneuploidy as the relative dosage of individual genes should be the same.
Comparisons of genomes demonstrate that gene duplications are common in most species investigated. This is indicated by variable copy numbers ( copy number variation ) in the genome of humans [ 16 ] [ 17 ] or fruit flies. [ 18 ] However, it has been difficult to measure the rate at which such duplications occur. Recent studies yielded a first direct estimate of the genome-wide rate of gene duplication in C. elegans , the first multicellular eukaryote for which such an estimate became available. The gene duplication rate in C. elegans is on the order of 10 −7 duplications/gene/generation, that is, in a population of 10 million worms, one will have a gene duplication per generation. This rate is two orders of magnitude greater than the spontaneous rate of point mutation per nucleotide site in this species. [ 19 ] Older (indirect) studies reported locus-specific duplication rates in bacteria, Drosophila , and humans ranging from 10 −3 to 10 −7 /gene/generation. [ 20 ] [ 21 ] [ 22 ]
Gene duplications are an essential source of genetic novelty that can lead to evolutionary innovation. Duplication creates genetic redundancy, where the second copy of the gene is often free from selective pressure —that is, mutations of it have no deleterious effects on its host organism. If one copy of a gene experiences a mutation that affects its original function, the second copy can serve as a 'spare part' and continue to function correctly. Thus, duplicate genes accumulate mutations faster than a functional single-copy gene, over generations of organisms, and it is possible for one of the two copies to develop a new and different function. Examples of such neofunctionalization include the apparent mutation of a duplicated digestive gene in a family of ice fish into an antifreeze gene, a duplication leading to a novel snake venom gene, [ 23 ] and the synthesis of 1 beta-hydroxytestosterone in pigs. [ 24 ]
Gene duplication is believed to play a major role in evolution ; this stance has been held by members of the scientific community for over 100 years. [ 25 ] Susumu Ohno was one of the most famous developers of this theory in his classic book Evolution by gene duplication (1970). [ 26 ] Ohno argued that gene duplication is the most important evolutionary force since the emergence of the universal common ancestor . [ 27 ] Major genome duplication events can be quite common. It is believed that the entire yeast genome underwent duplication about 100 million years ago. [ 28 ] Plants are the most prolific genome duplicators. For example, wheat is hexaploid (a kind of polyploid ), meaning that it has six copies of its genome.
Another possible fate for duplicate genes is that both copies are equally free to accumulate degenerative mutations, so long as any defects are complemented by the other copy. This leads to a neutral " subfunctionalization " (a process of constructive neutral evolution ) or DDC (duplication-degeneration-complementation) model, [ 29 ] [ 30 ] in which the functionality of the original gene is distributed among the two copies. Neither gene can be lost, as both now perform important non-redundant functions, but ultimately neither is able to achieve novel functionality.
Subfunctionalization can occur through neutral processes in which mutations accumulate with no detrimental or beneficial effects. However, in some cases subfunctionalization can occur with clear adaptive benefits. If an ancestral gene is pleiotropic and performs two functions, often neither one of these two functions can be changed without affecting the other function. In this way, partitioning the ancestral functions into two separate genes can allow for adaptive specialization of subfunctions, thereby providing an adaptive benefit. [ 31 ]
Often the resulting genomic variation leads to gene dosage dependent neurological disorders such as Rett-like syndrome and Pelizaeus–Merzbacher disease . [ 32 ] Such detrimental mutations are likely to be lost from the population and will not be preserved or develop novel functions. However, many duplications are, in fact, not detrimental or beneficial, and these neutral sequences may be lost or may spread through the population through random fluctuations via genetic drift .
The two genes that exist after a gene duplication event are called paralogs and usually code for proteins with a similar function and/or structure. By contrast, orthologous genes are present in different species and are each originally derived from the same ancestral sequence. (See Homology of sequences in genetics ).
It is important (but often difficult) to differentiate between paralogs and orthologs in biological research. Experiments on human gene function can often be carried out on other species if a homolog to a human gene can be found in the genome of that species, but only if the homolog is orthologous. If they are paralogs and resulted from a gene duplication event, their functions are likely to be too different. One or more copies of duplicated genes that constitute a gene family may be affected by insertion of transposable elements , which causes significant variation between them in their sequence and may finally become responsible for divergent evolution . This may also reduce the chances and the rate of gene conversion between the homologs of gene duplicates, owing to little or no similarity in their sequences.
Paralogs can be identified in single genomes through a sequence comparison of all annotated gene models to one another. Such a comparison can be performed on translated amino acid sequences (e.g. BLASTp, tBLASTx) to identify ancient duplications or on DNA nucleotide sequences (e.g. BLASTn, megablast) to identify more recent duplications. Most studies to identify gene duplications require reciprocal-best-hits or fuzzy reciprocal-best-hits, where each paralog must be the other's single best match in a sequence comparison. [ 33 ]
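As an informal illustration of the reciprocal-best-hit criterion described above, the sketch below parses an all-vs-all BLASTp search of a genome's annotated proteins against themselves and reports gene pairs that are each other's single best non-self match. The tabular output format (-outfmt 6), the input file name, and the use of bit score as the ranking criterion are assumptions made for the example rather than requirements of any particular study.

```python
# Minimal sketch of reciprocal-best-hit (RBH) detection from an all-vs-all BLASTp
# search run with tabular output, e.g.:
#   blastp -query proteins.faa -subject proteins.faa -outfmt 6 -out all_vs_all.tsv
# Column 12 of -outfmt 6 is the bit score, used here to rank hits (an assumption).

def best_non_self_hits(blast_tabular_path):
    """Map each query to its single best non-self subject by bit score."""
    best = {}  # query -> (subject, bitscore)
    with open(blast_tabular_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            query, subject, bitscore = fields[0], fields[1], float(fields[11])
            if query == subject:
                continue  # ignore each protein's trivial hit to itself
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return best

def reciprocal_best_hits(blast_tabular_path):
    """Return candidate paralog pairs where each gene is the other's best hit."""
    best = best_non_self_hits(blast_tabular_path)
    pairs = set()
    for query, (subject, _) in best.items():
        partner = best.get(subject)
        if partner is not None and partner[0] == query:
            pairs.add(tuple(sorted((query, subject))))
    return sorted(pairs)

if __name__ == "__main__":
    for gene_a, gene_b in reciprocal_best_hits("all_vs_all.tsv"):
        print(f"candidate paralog pair: {gene_a}\t{gene_b}")
```

A "fuzzy" variant of the criterion additionally accepts hits whose scores fall within a small margin of the best hit, rather than requiring a strict single best match.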
Most gene duplications exist as low copy repeats (LCRs), rather than highly repetitive sequences like transposable elements. They are mostly found in pericentromeric , subtelomeric and interstitial regions of a chromosome. Many LCRs, due to their size (>1 kb), similarity, and orientation, are highly susceptible to duplications and deletions.
Technologies such as genomic microarrays , also called array comparative genomic hybridization (array CGH), are used to detect chromosomal abnormalities, such as microduplications, in a high throughput fashion from genomic DNA samples. In particular, DNA microarray technology can simultaneously monitor the expression levels of thousands of genes across many treatments or experimental conditions, greatly facilitating the evolutionary studies of gene regulation after gene duplication or speciation . [ 34 ] [ 35 ]
Gene duplications can also be identified through the use of next-generation sequencing platforms. The simplest means to identify duplications in genomic resequencing data is through the use of paired-end sequencing reads. Tandem duplications are indicated by sequencing read pairs which map in abnormal orientations. Through a combination of increased sequence coverage and abnormal mapping orientation, it is possible to identify duplications in genomic sequencing data.
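The following is a hypothetical sketch of the read-pair signal described above, using the third-party pysam library to scan a coordinate-sorted BAM file for pairs in the "everted" reverse-forward orientation that tandem duplications produce in a standard forward-reverse Illumina paired-end library. The file name, mapping-quality cutoff, window size, and support threshold are illustrative assumptions; production structural-variant callers combine this orientation signal with insert-size and read-depth evidence.

```python
# Illustrative sketch (assumed parameters, not a published tool): flag read pairs
# whose leftmost read maps to the reverse strand and whose mate maps to the
# forward strand, the orientation characteristic of tandem duplications.

import pysam
from collections import defaultdict

def everted_pairs(bam_path, min_mapq=20):
    """Yield (chrom, left_start, right_start) for reverse-forward read pairs."""
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if (read.is_unmapped or read.mate_is_unmapped or not read.is_paired
                    or read.is_secondary or read.is_supplementary):
                continue
            if read.mapping_quality < min_mapq:
                continue
            if read.reference_id != read.next_reference_id:
                continue  # inter-chromosomal pairs suggest other rearrangements
            if read.reference_start > read.next_reference_start:
                continue  # consider each pair once, from its leftmost read
            # In an FR library a proper pair has its leftmost read on the forward
            # strand; tandem duplications instead yield reverse-forward pairs.
            if read.is_reverse and not read.mate_is_reverse:
                yield (read.reference_name, read.reference_start,
                       read.next_reference_start)

def candidate_windows(bam_path, min_support=3):
    """Count everted pairs per 1 kb window and keep well-supported windows."""
    support = defaultdict(int)
    for chrom, left_start, _ in everted_pairs(bam_path):
        support[(chrom, left_start // 1000)] += 1
    return {key: n for key, n in support.items() if n >= min_support}

if __name__ == "__main__":
    for (chrom, window), n in sorted(candidate_windows("sample.bam").items()):
        print(f"{chrom}:{window * 1000}-{(window + 1) * 1000}\t{n} everted pairs")
```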
The International System for Human Cytogenomic Nomenclature (ISCN) is an international standard for human chromosome nomenclature , which includes band names, symbols and abbreviated terms used in the description of human chromosomes and chromosome abnormalities. Abbreviations include dup for duplications of parts of a chromosome. [ 36 ] For example, dup(17p12) causes Charcot–Marie–Tooth disease type 1A. [ 37 ]
Gene duplication does not necessarily constitute a lasting change in a species' genome. In fact, such changes often do not last past the initial host organism. From the perspective of molecular genetics , gene amplification is one of many ways in which a gene can be overexpressed . Genetic amplification can occur artificially, as with the use of the polymerase chain reaction technique to amplify short strands of DNA in vitro using enzymes , or it can occur naturally, as described above. If it is a natural duplication, it can still take place in a somatic cell , rather than a germline cell (which would be necessary for a lasting evolutionary change).
Duplications of oncogenes are a common cause of many types of cancer . In such cases the genetic duplication occurs in a somatic cell and affects only the genome of the cancer cells themselves, not the entire organism, much less any subsequent offspring. Recent comprehensive patient-level classification and quantification of driver events in TCGA cohorts revealed that there are on average 12 driver events per tumor, of which 1.5 are amplifications of oncogenes. [ 38 ]
Whole-genome duplications are also frequent in cancers, detected in 30% to 36% of tumors from the most common cancer types. [ 40 ] [ 41 ] Their exact role in carcinogenesis is unclear, but in some cases they lead to loss of chromatin segregation, causing chromatin conformation changes that in turn lead to oncogenic epigenetic and transcriptional modifications. [ 42 ] | https://en.wikipedia.org/wiki/Gene_duplication