\[
\begin{aligned}
\Delta j&=0,\\
\Delta \left(vj^{2}+p\right)&=0,\\
\Delta h^{t}&=0,
\end{aligned}
\]
where \(h^{t}\) is the specific total enthalpy. These are usually expressed in the convective variables:
\[
\begin{aligned}
\Delta j&=0,\\
\Delta \left({\frac {u^{2}}{v}}+p\right)&=0,\\
\Delta \left(e+{\tfrac {1}{2}}u^{2}+pv\right)&=0,
\end{aligned}
\]
where \(u\) is the flow speed and \(e\) is the specific internal energy. The energy equation is an integral form of the Bernoulli equation in the compressible case. The mass and momentum equations, by substitution, lead to the Rayleigh equation:
\[
\frac{\Delta p}{\Delta v}=-\frac{u_{0}^{2}}{v_{0}}.
\]
Since the right-hand side is a constant, the Rayleigh equation always describes a straight line in the pressure–volume plane that does not depend on any equation of state: the Rayleigh line. By substitution into the Rankine–Hugoniot equations, these can also be made explicit as:
\[
\begin{aligned}
\rho u&=\rho _{0}u_{0},\\
\rho u^{2}+p&=\rho _{0}u_{0}^{2}+p_{0},\\
e+{\tfrac {1}{2}}u^{2}+{\frac {p}{\rho }}&=e_{0}+{\tfrac {1}{2}}u_{0}^{2}+{\frac {p_{0}}{\rho _{0}}}.
\end{aligned}
\]
One can also obtain the kinetic equation and the Hugoniot equation; the intermediate analytical steps are omitted for brevity. They are, respectively:
\[
\begin{aligned}
u^{2}(v,p)&=u_{0}^{2}+(p-p_{0})(v_{0}+v),\\
e(v,p)&=e_{0}+{\tfrac {1}{2}}(p+p_{0})(v_{0}-v).
\end{aligned}
\]
The Hugoniot equation, coupled with the fundamental equation of state of the material, \(e=e(v,p)\), describes in general a curve in the pressure–volume plane passing through the conditions \((v_{0},p_{0})\), i.e. the Hugoniot curve, whose shape strongly depends on the type of material considered. It is also customary to define a Hugoniot function:
\[
{\mathfrak {h}}(v,s)\equiv e(v,s)-e_{0}+{\tfrac {1}{2}}\bigl(p(v,s)+p_{0}\bigr)(v-v_{0}),
\]
which quantifies deviations from the Hugoniot equation, similarly to the previous definition of the hydraulic head, which is useful for quantifying deviations from the Bernoulli equation.

=== Finite volume form ===
On the other hand, integrating a generic conservation equation:
\[
\frac{\partial \mathbf {y} }{\partial t}+\nabla \cdot \mathbf {F} =\mathbf {s} ,
\]
over a fixed volume \(V_{m}\) and applying the divergence theorem, it becomes:
\[
\frac{d}{dt}\int _{V_{m}}\mathbf {y} \,dV+\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}\,ds=\mathbf {S} .
\]
Integrating this equation also over a time interval:
\[
\int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n+1})\,dV-\int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n})\,dV+\int _{t_{n}}^{t_{n+1}}\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}\,ds\,dt=\mathbf {0} .
\]
Now, defining the node conserved quantity:
\[
\mathbf {y} _{m,n}\equiv {\frac {1}{V_{m}}}\int _{V_{m}}\mathbf {y} (\mathbf {r} ,t_{n})\,dV,
\]
we deduce the finite volume form:
\[
\mathbf {y} _{m,n+1}=\mathbf {y} _{m,n}-{\frac {1}{V_{m}}}\int _{t_{n}}^{t_{n+1}}\oint _{\partial V_{m}}\mathbf {F} \cdot {\hat {n}}\,ds\,dt.
\]
In particular, for the Euler equations, once the conserved quantities have been determined, the convective variables are deduced by back substitution:
\[
\begin{aligned}
\mathbf {u} _{m,n}&={\frac {\mathbf {j} _{m,n}}{\rho _{m,n}}},\\
e_{m,n}&={\frac {E_{m,n}^{t}}{\rho _{m,n}}}-{\frac {1}{2}}u_{m,n}^{2}.
\end{aligned}
\]
Then the explicit finite volume expressions of the original convective variables are:

== Constraints ==
It has been shown that the Euler equations are not a complete set of equations: they require additional constraints to admit a unique solution, namely the equation of state of the material considered. To be consistent with thermodynamics, these equations of state should satisfy the two laws of thermodynamics. On the other hand, non-equilibrium systems are by definition described by laws lying outside these laws. In the following we list some very simple equations of state and the corresponding influence on the Euler equations.

=== Ideal polytropic gas ===
For an ideal polytropic gas the fundamental equation of state is:
\[
e(v,s)=e_{0}e^{(\gamma -1)m\left(s-s_{0}\right)}\left({v_{0} \over v}\right)^{\gamma -1},
\]
where \(e\) is the specific energy, \(v\) is the specific volume, \(s\) is the specific entropy, \(m\) is the molecular mass, and \(\gamma\) is here considered a constant (polytropic process) that can be shown to correspond to the heat capacity ratio. This equation can be shown to be consistent with the usual equations of state employed by thermodynamics. From this equation one can derive the equation for pressure by its thermodynamic definition:
\[
p(v,e)\equiv -{\partial e \over \partial v}=(\gamma -1){\frac {e}{v}}.
\]
Inverting it, one arrives at the mechanical equation of state:
\[
e(v,p)={\frac {pv}{\gamma -1}}.
\]
Then for an ideal gas the compressible Euler equations can be simply expressed in the mechanical or primitive variables (specific volume, flow velocity and pressure) by taking the set of equations for a thermodynamic system and modifying the energy equation into a pressure equation through this mechanical equation of state. At last, in convective form they result; in one-dimensional quasilinear form they read:
\[
\frac{\partial \mathbf {y} }{\partial t}+\mathbf {A} {\frac {\partial \mathbf {y} }{\partial x}}=\mathbf {0} ,
\]
where the conservative vector variable is:
\[
\mathbf {y} ={\begin{pmatrix}v\\u\\p\end{pmatrix}},
\]
and the corresponding Jacobian matrix is:
\[
\mathbf {A} ={\begin{pmatrix}u&-v&0\\0&u&v\\0&\gamma p&u\end{pmatrix}}.
\]

=== Steady flow in material coordinates ===
In the case of steady flow, it is convenient to choose the Frenet–Serret frame along a streamline as the coordinate system for describing the steady momentum Euler equation:
\[
{\boldsymbol {u}}\cdot \nabla {\boldsymbol {u}}=-{\frac {1}{\rho }}\nabla p,
\]
where \({\boldsymbol {u}}\), \(p\) and \(\rho\) denote the flow velocity, the pressure and the density, respectively. Let \(\{{\boldsymbol {e}}_{s},{\boldsymbol {e}}_{n},{\boldsymbol {e}}_{b}\}\) be a Frenet–Serret orthonormal basis consisting of a tangential unit vector, a normal unit vector, and a binormal unit vector to the streamline, respectively. Since a streamline is a curve tangent to the velocity vector of the flow, the left-hand side of the above equation, the convective derivative of velocity, can be written as follows:
\[
{\boldsymbol {u}}\cdot \nabla {\boldsymbol {u}}=u{\frac {\partial }{\partial s}}(u{\boldsymbol {e}}_{s})=u{\frac {\partial u}{\partial s}}{\boldsymbol {e}}_{s}+{\frac {u^{2}}{R}}{\boldsymbol {e}}_{n},
\]
where
\[
{\boldsymbol {u}}=u{\boldsymbol {e}}_{s},\qquad {\frac {\partial }{\partial s}}\equiv {\boldsymbol {e}}_{s}\cdot \nabla ,\qquad {\frac {\partial {\boldsymbol {e}}_{s}}{\partial s}}={\frac {1}{R}}{\boldsymbol {e}}_{n},
\]
and \(R\) is the radius of curvature of the streamline. Therefore, the momentum part of the Euler equations for a steady flow has a simple form:
\[
\begin{aligned}
u{\frac {\partial u}{\partial s}}&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial s}},\\
{u^{2} \over R}&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial n}}&&\left({\partial /\partial n}\equiv {\boldsymbol {e}}_{n}\cdot \nabla \right),\\
0&=-{\frac {1}{\rho }}{\frac {\partial p}{\partial b}}&&\left({\partial /\partial b}\equiv {\boldsymbol {e}}_{b}\cdot \nabla \right).
\end{aligned}
\]
For barotropic flow \((\rho =\rho (p))\), Bernoulli's equation is derived from the first equation:
\[
{\frac {\partial }{\partial s}}\left({\frac {u^{2}}{2}}+\int {\frac {\mathrm {d} p}{\rho }}\right)=0.
\]
The second equation expresses that, when the streamline is curved, there must exist a pressure gradient normal to the streamline, because the centripetal acceleration of the fluid parcel is generated only by the normal pressure gradient. The third equation expresses that pressure is constant along the binormal axis.

==== Streamline curvature theorem ====
Let \(r\) be the distance from the center of curvature of the streamline; then the second equation can be written as:
\[
{\frac {\partial p}{\partial r}}=\rho {\frac {u^{2}}{r}}~(>0),
\]
where \({\partial /\partial r}=-{\partial /\partial n}\). This equation states that in a steady flow of an inviscid fluid without external forces, the center of curvature of the streamline lies in the direction of decreasing radial pressure. Although this relationship between the pressure field and flow curvature is very useful, it does not have a name in the English-language scientific literature. Japanese fluid dynamicists call the relationship the "streamline curvature theorem". This "theorem" explains clearly why there are such low pressures in the centre of vortices, which consist of concentric circles of streamlines. It also offers an intuitive explanation of why airfoils generate lift.

== Exact solutions ==
All potential flow solutions are also solutions of the Euler equations, and in particular of the incompressible Euler equations when the potential is harmonic. Solutions of the Euler equations with vorticity include:
Parallel shear flows, where the flow is unidirectional and the flow velocity varies only in the cross-flow directions; e.g., in a Cartesian coordinate system \((x,y,z)\) with the flow in the \(x\)-direction, the only non-zero velocity component is \(u_{x}(y,z)\), dependent only on \(y\) and \(z\) and not on \(x\).
Arnold–Beltrami–Childress flow, an exact solution of the incompressible Euler equations.
Two solutions of the three-dimensional Euler equations with cylindrical symmetry presented by Gibbon, Moore and Stuart in 2003. These two solutions have infinite energy; they blow up everywhere in space in finite time.

== See also ==
Bernoulli's theorem
Kelvin's circulation theorem
Cauchy equations
Froude number
Madelung equations
Navier–Stokes equations
Burgers equation
Jeans equations
Perfect fluid
D'Alembert's paradox

== References ==
=== Notes ===
=== Citations ===
=== Sources ===
=== Further reading ===
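The parallel shear flow solution listed above can be checked symbolically: for a velocity field with a single component depending only on the cross-flow coordinates, both the divergence and the convective acceleration vanish identically, so the steady incompressible Euler equations are satisfied with uniform pressure. A minimal sketch of this check (the particular profile for \(u_{x}(y,z)\) below is an arbitrary illustrative choice; SymPy is assumed to be available):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Hypothetical cross-flow profile: any smooth function of y and z only.
ux = sp.sin(y) * sp.exp(-z**2)
u = sp.Matrix([ux, 0, 0])

coords = (x, y, z)

# Divergence of u: only d(u_x)/dx contributes, and u_x does not depend on x.
div_u = sum(sp.diff(u[i], coords[i]) for i in range(3))

# Convective term (u . grad) u, component-wise: sum_j u_j d(u_i)/dx_j.
conv = sp.Matrix([sum(u[j] * sp.diff(u[i], coords[j]) for j in range(3))
                  for i in range(3)])

print(sp.simplify(div_u))   # 0: the flow is incompressible
print(sp.simplify(conv))    # zero vector: steady Euler holds with uniform pressure
```

Since the convective acceleration vanishes, no pressure gradient is needed to sustain the flow, which is exactly why any cross-flow profile works.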
{ "page_id": 396022, "source": null, "title": "Euler equations (fluid dynamics)" }
Quantum photoelectrochemistry is the investigation of the quantum mechanical nature of photoelectrochemistry, the subfield of physical chemistry concerned with the interaction of light with electrochemical systems, typically through the application of quantum chemical calculations. Quantum photoelectrochemistry expands quantum electrochemistry to processes that also involve the interaction with light (photons); it therefore also includes essential elements of photochemistry. Key aspects of quantum photoelectrochemistry are calculations of optical excitations, photoinduced electron and energy transfer processes, excited-state evolution, and interfacial charge separation and charge transport in nanoscale energy conversion systems. In particular, quantum photoelectrochemistry provides fundamental insight into basic light-harvesting and photoinduced electro-optical processes in emerging solar energy conversion technologies for the generation of both electricity (photovoltaics) and solar fuels. Applications where it provides insight into fundamental processes include photoelectrochemical cells, semiconductor photochemistry, light-driven electrocatalysis in general, and artificial photosynthesis in particular. Quantum photoelectrochemistry constitutes an active line of current research, with publications in recent years relating to different types of materials and processes, including light-harvesting complexes, light-harvesting polymers, and nanocrystalline semiconductor materials.

== References ==

== External links ==
Quantum Photoelectrochemistry research group at Lund University, Sweden
{ "page_id": 48630521, "source": null, "title": "Quantum photoelectrochemistry" }
Matrix metalloproteinase 15, also known as MMP15, is an enzyme that in humans is encoded by the MMP15 gene.

== Function ==
Proteins of the matrix metalloproteinase (MMP) family are involved in the breakdown of extracellular matrix in normal physiological processes, such as embryonic development, reproduction, and tissue remodeling, as well as in disease processes, such as arthritis and metastasis. Most MMPs are secreted as inactive proenzymes which are activated when cleaved by extracellular proteinases. However, the protein encoded by this gene is a member of the membrane-type MMP (MT-MMP) subfamily; members of this subfamily can be anchored to the extracellular membrane by either a transmembrane domain or a glycophosphatidylinositol linkage, suggesting that these proteins are expressed at the cell surface rather than secreted in soluble form.

== References ==

== Further reading ==

== External links ==
The MEROPS online database for peptidases and their inhibitors: M10.015
This article incorporates text from the United States National Library of Medicine, which is in the public domain.
{ "page_id": 21760762, "source": null, "title": "MMP15" }
The far-western blot, or far-western blotting, is a molecular biology method, based on the technique of the western blot, for detecting protein–protein interactions in vitro. Whereas a western blot uses an antibody probe to detect a protein of interest, a far-western blot uses a non-antibody probe which can bind the protein of interest. Thus, whereas western blotting is used for the detection of specific proteins, far-western blotting is employed to detect protein–protein interactions.

== Method ==
In a conventional western blot, gel electrophoresis is used to separate proteins from a sample; these proteins are then transferred to a membrane in a 'blotting' step, and specific proteins are identified using an antibody probe. A far-western blot instead employs a non-antibody protein to probe the proteins of interest on the blot. In this way, binding partners of the probe (or of the blotted proteins) may be identified. The probe protein is often produced in E. coli using an expression cloning vector. The probe protein can then be visualized through the usual methods: it may be radiolabelled; it may bear a specific affinity tag such as His or FLAG for which antibodies exist; or there may be an antibody specific to the probe protein. Because cell extracts are usually completely denatured by boiling in detergent before gel electrophoresis, this approach is most useful for detecting interactions that do not require the native folded structure of the protein of interest.

== References ==

== External links ==
Far-western Blotting at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
Overview at piercenet.com
Overview at utoronto.ca
{ "page_id": 5638908, "source": null, "title": "Far-western blot" }
Hilbert C*-modules are mathematical objects that generalise the notion of Hilbert spaces (which are themselves generalisations of Euclidean space), in that they endow a linear space with an "inner product" that takes values in a C*-algebra. They were first introduced in the work of Irving Kaplansky in 1953, which developed the theory for commutative, unital algebras (though Kaplansky observed that the assumption of a unit element was not "vital"). In the 1970s the theory was extended to non-commutative C*-algebras independently by William Lindall Paschke and Marc Rieffel, the latter in a paper that used Hilbert C*-modules to construct a theory of induced representations of C*-algebras. Hilbert C*-modules are crucial to Kasparov's formulation of KK-theory, and provide the right framework to extend the notion of Morita equivalence to C*-algebras. They can be viewed as the generalization of vector bundles to noncommutative C*-algebras and as such play an important role in noncommutative geometry, notably in C*-algebraic quantum group theory and groupoid C*-algebras.

== Definitions ==

=== Inner-product C*-modules ===
Let \(A\) be a C*-algebra (not assumed to be commutative or unital), with involution denoted by \({}^{*}\). An inner-product \(A\)-module (or pre-Hilbert \(A\)-module) is a complex linear space \(E\) equipped with a compatible right \(A\)-module structure, together with a map
\[
\langle \,\cdot \,,\,\cdot \,\rangle _{A}\colon E\times E\to A
\]
that satisfies the following properties:
For all \(x,y,z\in E\) and \(\alpha ,\beta \in \mathbb {C}\): \(\langle x,y\alpha +z\beta \rangle _{A}=\langle x,y\rangle _{A}\alpha +\langle x,z\rangle _{A}\beta\) (i.e. the inner product is \(\mathbb {C}\)-linear in its second argument).
For all \(x,y\in E\) and \(a\in A\): \(\langle x,ya\rangle _{A}=\langle x,y\rangle _{A}a\).
For all \(x,y\in E\): \(\langle x,y\rangle _{A}=\langle y,x\rangle _{A}^{*}\), from which it follows that the inner product is conjugate linear in its first argument (i.e. it is a sesquilinear form).
For all \(x\in E\): \(\langle x,x\rangle _{A}\geq 0\) in the sense of being a positive element of \(A\), and \(\langle x,x\rangle _{A}=0\iff x=0\). (An element of a C*-algebra \(A\) is said to be positive if it is self-adjoint with non-negative spectrum.)

=== Hilbert C*-modules ===
An analogue of the Cauchy–Schwarz inequality holds for an inner-product \(A\)-module \(E\):
\[
\langle x,y\rangle _{A}\langle y,x\rangle _{A}\leq \Vert \langle y,y\rangle _{A}\Vert \,\langle x,x\rangle _{A}
\]
for \(x,y\in E\). On the pre-Hilbert module \(E\), define a norm by
\[
\Vert x\Vert =\Vert \langle x,x\rangle _{A}\Vert ^{1/2}.
\]
The norm-completion of \(E\), still denoted by \(E\), is said to be a Hilbert \(A\)-module or a Hilbert C*-module over the C*-algebra \(A\). The Cauchy–Schwarz inequality implies that the inner product is jointly continuous in norm, and it can therefore be extended to the completion. The action of \(A\) on \(E\) is continuous: for all \(x\in E\),
\[
a_{\lambda }\to a\Rightarrow xa_{\lambda }\to xa.
\]
Similarly, if \((e_{\lambda })\) is an approximate unit for \(A\) (a net of self-adjoint elements of \(A\) for which \(ae_{\lambda }\) and \(e_{\lambda }a\) tend to \(a\) for each \(a\in A\)), then for \(x\in E\),
\[
xe_{\lambda }\to x,
\]
whence it follows that \(EA\) is dense in \(E\), and \(x1_{A}=x\) when \(A\) is unital. Let
\[
\langle E,E\rangle _{A}=\operatorname {span} \{\langle x,y\rangle _{A}\mid x,y\in E\};
\]
then the closure of \(\langle E,E\rangle _{A}\) is a two-sided ideal in \(A\). Two-sided ideals are C*-subalgebras and therefore possess approximate units. One can verify that \(E\langle E,E\rangle _{A}\) is dense in \(E\). When \(\langle E,E\rangle _{A}\) is dense in \(A\), \(E\) is said to be full; this does not hold in general.

== Examples ==

=== Hilbert spaces ===
Since the complex numbers \(\mathbb {C}\) form a C*-algebra with involution given by complex conjugation, a complex Hilbert space \({\mathcal {H}}\) is a Hilbert \(\mathbb {C}\)-module under scalar multiplication by complex numbers and its inner product.

=== Vector bundles ===
If \(X\) is a locally compact Hausdorff space and \(E\) a vector bundle over \(X\) with projection \(\pi \colon E\to X\) and a Hermitian metric \(g\), then the space of continuous sections of \(E\) is a Hilbert \(C(X)\)-module. Given sections \(\sigma ,\rho\) of \(E\) and \(f\in C(X)\), the right action is defined pointwise by
\[
(\sigma f)(x)=\sigma (x)f(x),
\]
and the inner product is given by
\[
\langle \sigma ,\rho \rangle _{C(X)}(x):=g(\sigma (x),\rho (x)).
\]
The converse holds as well: every countably generated Hilbert C*-module over a commutative unital C*-algebra \(A=C(X)\) is isomorphic to the space of sections vanishing at infinity of a continuous field of Hilbert spaces over \(X\).

=== C*-algebras ===
Any C*-algebra \(A\) is a Hilbert \(A\)-module with the action given by right multiplication in \(A\) and the inner product \(\langle a,b\rangle =a^{*}b\). By the C*-identity, the Hilbert module norm coincides with the C*-norm on \(A\). The (algebraic) direct sum of \(n\) copies of \(A\),
\[
A^{n}=\bigoplus _{i=1}^{n}A,
\]
can be made into a Hilbert \(A\)-module by defining
\[
\langle (a_{i}),(b_{i})\rangle _{A}=\sum _{i=1}^{n}a_{i}^{*}b_{i}.
\]
If \(p\) is a projection in the C*-algebra \(M_{n}(A)\), then \(pA^{n}\) is also a Hilbert \(A\)-module with the same inner product as the direct sum.

=== The standard Hilbert module ===
One may also consider the following subspace of elements in the countable direct product of \(A\):
\[
\ell _{2}(A)={\mathcal {H}}_{A}={\Bigl \{}(a_{i})\;{\Bigm |}\;\sum _{i=1}^{\infty }a_{i}^{*}a_{i}{\text{ converges in }}A{\Bigr \}}.
\]
Endowed with the obvious inner product (analogous to that of \(A^{n}\)), the resulting Hilbert \(A\)-module is called the standard Hilbert module over \(A\). The fact that there is a unique separable infinite-dimensional Hilbert space has a generalization to Hilbert modules in the form of the Kasparov stabilization theorem, which states that if \(E\) is a countably generated Hilbert \(A\)-module, there is an isometric isomorphism
\[
E\oplus \ell ^{2}(A)\cong \ell ^{2}(A).
\]

== Maps between Hilbert modules ==
Let \(E\) and \(F\) be two Hilbert modules over the same C*-algebra \(A\). These are Banach spaces, so it is possible to speak of the Banach space of bounded linear maps \({\mathcal {L}}(E,F)\), normed by the operator norm. The adjointable and compact adjointable operators are subspaces of this Banach space defined using the inner product structures on \(E\) and \(F\). In the special case where \(A\) is \(\mathbb {C}\), these reduce to the bounded and compact operators on Hilbert spaces, respectively.

=== Adjointable maps ===
A map (not necessarily linear) \(T\colon E\to F\) is defined to be adjointable if there is another map \(T^{*}\colon F\to E\), known as the adjoint of \(T\), such that for every \(e\in E\) and \(f\in F\),
\[
\langle f,Te\rangle =\langle T^{*}f,e\rangle .
\]
Both \(T\) and \(T^{*}\) are then automatically linear and are also \(A\)-module maps. The closed graph theorem can be used to show that they are also bounded. Analogously to the adjoint of operators on Hilbert spaces, \(T^{*}\) is unique (if it exists) and is itself adjointable with adjoint \(T\). If \(S\colon F\to G\) is a second adjointable map, \(ST\) is adjointable with adjoint \(T^{*}S^{*}\). The adjointable operators \(E\to F\) form a subspace \(\mathbb {B} (E,F)\) of \({\mathcal {L}}(E,F)\), which is complete in the operator norm. In the case \(F=E\), the space \(\mathbb {B} (E,E)\) of adjointable operators from \(E\) to itself is denoted \(\mathbb {B} (E)\), and is a C*-algebra.

=== Compact adjointable maps ===
Given \(e\in E\) and \(f\in F\), the map \(|f\rangle \langle e|\colon E\to F\) is defined, analogously to the rank-one operators on Hilbert spaces, by
\[
g\mapsto f\langle e,g\rangle .
\]
This is adjointable with adjoint \(|e\rangle \langle f|\). The compact adjointable operators \(\mathbb {K} (E,F)\) are defined to be the closed span of \(\{|f\rangle \langle e|\mid e\in E,\;f\in F\}\) in \(\mathbb {B} (E,F)\). As with the bounded operators, \(\mathbb {K} (E,E)\) is denoted \(\mathbb {K} (E)\); this is a (closed, two-sided) ideal of \(\mathbb {B} (E)\).

== C*-correspondences ==
If \(A\) and \(B\) are C*-algebras, an \((A,B)\) C*-correspondence is a Hilbert \(B\)-module equipped with a left action of \(A\) by adjointable maps that is faithful. (NB: some authors require the left action to be non-degenerate instead.) These objects are used in the formulation of Morita equivalence for C*-algebras, appear in the construction of Toeplitz and Cuntz–Pimsner algebras, and can be employed to put the structure of a bicategory on the collection of C*-algebras.

=== Tensor products and the bicategory of correspondences ===
If \(E\) is an \((A,B)\) correspondence and \(F\) a \((B,C)\) correspondence, the algebraic tensor product \(E\odot F\) of \(E\) and \(F\) as vector spaces inherits left \(A\)- and right \(C\)-module structures, respectively. It can also be endowed with the \(C\)-valued sesquilinear form defined on pure tensors by
\[
\langle e\odot f,e'\odot f'\rangle _{C}:=\langle f,\langle e,e'\rangle _{B}f'\rangle _{C}.
\]
This form is positive semidefinite, and the Hausdorff completion of \(E\odot F\) in the resulting seminorm is denoted \(E\otimes _{B}F\). The left and right actions of \(A\) and \(C\) extend to make this an \((A,C)\) correspondence. The collection of C*-algebras can then be endowed with the structure of a bicategory, with C*-algebras as objects, \((A,B)\) correspondences as arrows \(B\to A\), and isomorphisms of correspondences (bijective module maps that preserve inner products) as 2-arrows.
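The module Cauchy–Schwarz inequality stated earlier, \(\langle x,y\rangle _{A}\langle y,x\rangle _{A}\leq \Vert \langle y,y\rangle _{A}\Vert \langle x,x\rangle _{A}\), can be probed numerically in the simplest noncommutative example: \(A=M_{2}(\mathbb {C})\) regarded as a Hilbert module over itself with \(\langle a,b\rangle =a^{*}b\). The sketch below is an illustrative check only (it assumes NumPy is available); it verifies that the difference of the two sides is a positive element, i.e. Hermitian with non-negative spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(a, b):
    # Module inner product on A = M_n(C) over itself: <a, b> = a* b.
    return a.conj().T @ b

def is_positive(m, tol=1e-10):
    # A matrix is a positive element of M_n(C) if it is Hermitian
    # with non-negative spectrum.
    return (np.allclose(m, m.conj().T, atol=tol)
            and np.linalg.eigvalsh(m).min() >= -tol)

for _ in range(100):
    x = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    y = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    lhs = inner(x, y) @ inner(y, x)
    # ||<y, y>|| is the C*-norm of the positive element y* y,
    # i.e. its largest eigenvalue.
    rhs = np.linalg.eigvalsh(inner(y, y)).max() * inner(x, x)
    assert is_positive(rhs - lhs)  # Cauchy-Schwarz: rhs - lhs >= 0
```

Here `eigvalsh` serves both to compute the C*-norm of the positive element \(\langle y,y\rangle\) (its largest eigenvalue) and to test positivity of the difference; the inequality reduces to \(x^{*}(\Vert yy^{*}\Vert I-yy^{*})x\geq 0\), which holds because \(yy^{*}\leq \Vert yy^{*}\Vert I\).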
{ "page_id": 16059132, "source": null, "title": "Hilbert C*-module" }
=== Toeplitz algebra of a correspondence === Given a C*-algebra A {\displaystyle A} , and an ( A , A ) {\displaystyle (A,A)} correspondence E {\displaystyle E} , its Toeplitz algebra T ( E ) {\displaystyle {\mathcal {T}}(E)} is defined as the universal algebra for Toeplitz representations (defined below). The classical Toeplitz algebra can be recovered as a special case, and the Cuntz-Pimsner algebras are defined as particular quotients of Toeplitz algebras. In particular, graph algebras , crossed products by Z {\displaystyle \mathbb {Z} } , and the Cuntz algebras are all quotients of specific Toeplitz algebras. ==== Toeplitz representations ==== A Toeplitz representation of E {\displaystyle E} in a C*-algebra D {\displaystyle D} is a pair ( S , ϕ ) {\displaystyle (S,\phi )} of a linear map S : E → D {\displaystyle S\colon E\to D} and a homomorphism ϕ : A → D {\displaystyle \phi \colon A\to D} such that S {\displaystyle S} is "isometric": S ( e ) ∗ S ( f ) = ϕ ( ⟨ e , f ⟩ ) {\displaystyle S(e)^{*}S(f)=\phi (\langle e,f\rangle )} for all e , f ∈ E {\displaystyle e,f\in E} , S {\displaystyle S} resembles a bimodule map: S ( a e ) = ϕ ( a ) S ( e ) {\displaystyle S(ae)=\phi (a)S(e)} and S ( e a ) = S ( e ) ϕ ( a ) {\displaystyle S(ea)=S(e)\phi (a)} for e ∈ E {\displaystyle e\in E} and a ∈ A {\displaystyle a\in A} . ==== Toeplitz algebra ==== The Toeplitz algebra T ( E ) {\displaystyle {\mathcal {T}}(E)} is the universal Toeplitz representation. That is, there is a Toeplitz representation ( T , ι ) {\displaystyle (T,\iota )} of E {\displaystyle E} in T ( E ) {\displaystyle {\mathcal {T}}(E)} such that if ( S
, ϕ ) {\displaystyle (S,\phi )} is any Toeplitz representation of E {\displaystyle E} (in an arbitrary algebra D {\displaystyle D} ) there is a unique *-homomorphism Φ : T ( E ) → D {\displaystyle \Phi \colon {\mathcal {T}}(E)\to D} such that S = Φ ∘ T {\displaystyle S=\Phi \circ T} and ϕ = Φ ∘ ι {\displaystyle \phi =\Phi \circ \iota } . ==== Examples ==== If A {\displaystyle A} is taken to be the algebra of complex numbers, and E {\displaystyle E} the vector space C n {\displaystyle \mathbb {C} ^{n}} , endowed with the natural ( C , C ) {\displaystyle (\mathbb {C} ,\mathbb {C} )} -bimodule structure, the corresponding Toeplitz algebra is the universal algebra generated by n {\displaystyle n} isometries with mutually orthogonal range projections. In particular, T ( C ) {\displaystyle {\mathcal {T}}(\mathbb {C} )} is the universal algebra generated by a single isometry, which is the classical Toeplitz algebra. == See also == Operator algebra == Notes == == References == Lance, E. Christopher (1995). Hilbert C*-modules: A toolkit for operator algebraists. London Mathematical Society Lecture Note Series. Cambridge, England: Cambridge University Press. Wegge-Olsen, N. E. (1993). K-Theory and C*-Algebras. Oxford University Press. Brown, Nathanial P.; Ozawa, Narutaka (2008). C*-Algebras and Finite-Dimensional Approximations. American Mathematical Society. Buss, Alcides; Meyer, Ralf; Zhu, Chenchang (2013). "A higher category approach to twisted actions on c* -algebras". Proceedings of the Edinburgh Mathematical Society. 56 (2): 387–426. arXiv:0908.0455. doi:10.1017/S0013091512000259. Fowler, Neal J.; Raeburn, Iain (1999). "The Toeplitz algebra of a Hilbert bimodule". Indiana University Mathematics Journal. 48 (1): 155–181. arXiv:math/9806093. doi:10.1512/iumj.1999.48.1639. JSTOR 24900141. == External links == Weisstein, Eric W. "Hilbert C*-Module". MathWorld. Hilbert C*-Modules Home Page, a literature list
Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control, procedural content generation (PCG) and deep learning-based content generation. Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems. Information on machine learning techniques in the field of games is mostly known to the public through research projects, as most gaming companies choose not to publish specific information about their intellectual property. The most publicly known application of machine learning in games is likely the use of deep learning agents that compete with professional human players in complex strategy games. There has been significant application of machine learning to games such as Atari/ALE, Doom, Minecraft, StarCraft, and car racing. Other games that did not originally exist as video games, such as chess and Go, have also been affected by machine learning. == Overview of relevant machine learning techniques == === Deep learning === Deep learning is a subset of machine learning which focuses heavily on the use of artificial neural networks (ANN) that learn to solve complex tasks. Deep learning uses multiple layers of ANN and other techniques to progressively extract information from an input. Due to this complex layered approach, deep learning models often require powerful machines to train and run on. ==== Convolutional neural networks ==== Convolutional neural networks (CNN) are specialized ANNs that are often used to analyze image data. These types of networks are able to learn translation invariant patterns, which are patterns that are not dependent on location. CNNs are able to learn these patterns in a hierarchy, meaning that earlier convolutional layers will learn
{ "page_id": 60951296, "source": null, "title": "Machine learning in video games" }
smaller local patterns while later layers will learn larger patterns based on the previous patterns. A CNN's ability to learn visual data has made it a commonly used tool for deep learning in games. === Recurrent neural network === Recurrent neural networks (RNN) are a type of ANN designed to process sequences of data in order, one part at a time rather than all at once. An RNN runs over each part of a sequence, using the current part of the sequence along with memory of previous parts of the current sequence to produce an output. These types of ANN are highly effective at tasks such as speech recognition and other problems that depend heavily on temporal order. There are several types of RNNs with different internal configurations; the basic implementation suffers from a lack of long-term memory due to the vanishing gradient problem, thus it is rarely used over newer implementations. ==== Long short-term memory ==== A long short-term memory (LSTM) network is a specific implementation of an RNN that is designed to deal with the vanishing gradient problem seen in simple RNNs, which would lead to them gradually "forgetting" about previous parts of an inputted sequence when calculating the output of a current part. LSTMs solve this problem with a gating mechanism and an additional cell state that carries long-term information through the sequence. LSTMs have achieved very strong results across various fields, and were used by several landmark deep learning agents in games. === Reinforcement learning === Reinforcement learning is the process of training an agent using rewards and/or punishments. The way an agent is rewarded or punished depends heavily on the problem, such as giving an agent a positive reward for winning a game or a negative one for losing. Reinforcement
learning is used heavily in the field of machine learning and can be seen in methods such as Q-learning, policy search, deep Q-networks and others. It has seen strong performance in both the field of games and robotics. === Neuroevolution === Neuroevolution involves the use of both neural networks and evolutionary algorithms. Instead of using gradient descent like most neural networks, neuroevolution models make use of evolutionary algorithms to update neurons in the network. Researchers claim that this process is less likely to get stuck in a local minimum and is potentially faster than state of the art deep learning techniques. == Deep learning agents == Machine learning agents have been used to take the place of a human player rather than function as NPCs, which are deliberately added into video games as part of designed gameplay. Deep learning agents have achieved impressive results when used in competition with both humans and other artificial intelligence agents. === Chess === Chess is a turn-based strategy game that is considered a difficult AI problem due to the computational complexity of its board space. Similar strategy games are often solved with some form of minimax tree search. These types of AI agents have been known to beat professional human players, such as in the historic 1997 Deep Blue versus Garry Kasparov match. Since then, machine learning agents have shown even greater success than previous AI agents. === Go === Go is another turn-based strategy game which is considered an even more difficult AI problem than chess. The state space of Go is around 10^170 possible board states, compared to the 10^120 board states for chess. Prior to recent deep learning models, AI Go agents were only able to play at the level of a human amateur. ==== AlphaGo ==== Google's 2015 AlphaGo
was the first AI agent to beat a professional Go player. AlphaGo used a deep learning model to train the weights of a Monte Carlo tree search (MCTS). The deep learning model consisted of two ANNs: a policy network to predict the probabilities of potential moves, and a value network to predict the win chance of a given state. The deep learning model allows the agent to explore potential game states more efficiently than a vanilla MCTS. The networks were initially trained on games of human players and then further trained by games against itself. ==== AlphaGo Zero ==== AlphaGo Zero, another implementation of AlphaGo, was able to train entirely by playing against itself. It was able to quickly train up to the capabilities of the previous agent. === StarCraft series === StarCraft and its sequel StarCraft II are real-time strategy (RTS) video games that have become popular environments for AI research. Blizzard and DeepMind have worked together to release a public StarCraft 2 environment for AI research to be done on. Various deep learning methods have been tested on both games, though most agents usually have trouble outperforming the default AI with cheats enabled or skilled players of the game. ==== AlphaStar ==== AlphaStar was the first AI agent to beat professional StarCraft 2 players without any in-game advantages. The deep learning network of the agent initially received input from a simplified zoomed-out version of the gamestate, but was later updated to play using a camera like human players. The developers have not publicly released the code or architecture of their model, but have listed several state of the art machine learning techniques such as relational deep reinforcement learning, long short-term memory, auto-regressive policy heads, pointer networks, and centralized value baseline. AlphaStar was initially
trained with supervised learning: it watched replays of many human games in order to learn basic strategies. It then trained against different versions of itself and was improved through reinforcement learning. The final version was hugely successful, but only trained to play on a specific map in a Protoss mirror matchup. === Dota 2 === Dota 2 is a multiplayer online battle arena (MOBA) game. Like other complex games, traditional AI agents have not been able to compete on the same level as professional human players. The only widely published information on AI agents attempted on Dota 2 is OpenAI's deep learning Five agent. ==== OpenAI Five ==== OpenAI Five utilized separate long short-term memory networks to learn each hero. It trained using a reinforcement learning technique known as Proximal Policy Optimization running on a system containing 256 GPUs and 128,000 CPU cores. Five trained for months, accumulating 180 years of game experience each day, before facing off with professional players. It was eventually able to beat the 2018 Dota 2 esports champion team in a 2019 series of games. === Planetary Annihilation === Planetary Annihilation is a real-time strategy game which focuses on massive scale war. The developers use ANNs in their default AI agent. === Supreme Commander 2 === Supreme Commander 2 is a real-time strategy (RTS) video game. The game uses multilayer perceptrons (MLPs) to control a platoon's reaction to encountered enemy units. A total of four MLPs are used, one for each platoon type: land, naval, bomber, and fighter. === Gran Turismo === Gran Turismo is a PlayStation game franchise that simulates realistic racing and driving experiences. In 2022, Sony AI researchers presented Sophy, an agent which can play Gran Turismo with performance on par with or superior to the world's best e-sports drivers. The implemented solution
is based on model-free deep reinforcement learning. === Generalized games === There have been attempts to make machine learning agents that are able to play more than one game. These "general" gaming agents are trained to understand games based on shared properties between them. ==== AlphaZero ==== AlphaZero is a modified version of AlphaGo Zero which is able to play shogi, chess, and Go. The modified agent starts with only the basic rules of the game and, like AlphaGo Zero, is trained entirely through self-play. DeepMind was able to train this generalized agent to be competitive with previous versions of itself on Go, as well as with top agents in the other two games. === Strengths and weaknesses of deep learning agents === Machine learning agents are often not covered in many game design courses. Previous use of machine learning agents in games may not have been very practical, as even the 2015 version of AlphaGo took hundreds of CPUs and GPUs to train to a strong level. This potentially limits the creation of highly effective deep learning agents to large corporations or extremely wealthy individuals. The extensive training time of neural network based approaches can also take weeks on these powerful machines. The problem of effectively training ANN based models extends beyond powerful hardware environments; finding a good way to represent data and learn meaningful things from it is also often a difficult problem. ANN models often overfit to very specific data and perform poorly in more generalized cases. AlphaStar shows this weakness: despite being able to beat professional players, it is only able to do so on a single map when playing a mirror Protoss matchup. OpenAI Five also shows this weakness: it was only able to beat professional players when facing a very limited hero pool out of the entire game.
These examples show how difficult it can be to train a deep learning agent to perform in more generalized situations. Machine learning agents have shown great success in a variety of different games. However, agents that are too competent also risk making games too difficult for new or casual players. Research has shown that a challenge too far above a player's skill level will reduce player enjoyment. These highly trained agents are likely only desirable against very skilled human players who have many hours of experience in a given game. Given these factors, highly effective deep learning agents are likely only a desired choice in games that have a large competitive scene, where they can function as an alternative practice option to a skilled human player. == Computer vision-based players == Computer vision focuses on training computers to gain a high-level understanding of digital images or videos. Many computer vision techniques also incorporate forms of machine learning, and have been applied to various video games. This application of computer vision focuses on interpreting game events using visual data. In some cases, artificial intelligence agents have used model-free techniques to learn to play games without any direct connection to internal game logic, solely using video data as input. === Pong === Andrej Karpathy has demonstrated that a relatively trivial neural network with just one hidden layer is capable of being trained to play Pong based on screen data alone. === Atari games === In 2013, a team at DeepMind demonstrated the use of deep Q-learning to play a variety of Atari video games — Beamrider, Breakout, Enduro, Pong, Q*bert, Seaquest, and Space Invaders — from screen data. The team expanded their work to create a learning algorithm called MuZero that was able to "learn" the rules and develop winning
strategies for over 50 different Atari games based on screen data. === Doom === Doom (1993) is a first-person shooter (FPS) game. Student researchers from Carnegie Mellon University used computer vision techniques to create an agent that could play the game using only image pixel input from the game. The students used convolutional neural network (CNN) layers to interpret incoming image data and output valid information to a recurrent neural network which was responsible for outputting game moves. === Super Mario === Other uses of vision-based deep learning techniques for playing games have included playing Super Mario Bros. using only image input, with deep Q-learning for training. === Minecraft === Researchers with OpenAI created about 2,000 hours of Minecraft gameplay video annotated with the corresponding human inputs, and then trained a machine learning model to predict those inputs from the video. The researchers then used that model on 70,000 hours of Minecraft playthroughs offered on YouTube to infer the inputs matching that behavior and learn further from it, for example learning the steps and process of creating a diamond pickaxe tool. == Machine learning for procedural content generation in games == Machine learning has seen research for use in content recommendation and generation. Procedural content generation is the process of creating data algorithmically rather than manually. This type of content is used to add replayability to games without relying on constant additions by human developers. PCG has been used in various games for different types of content generation, examples of which include weapons in Borderlands 2, all world layouts in Minecraft and entire universes in No Man's Sky. Common approaches to PCG include techniques that involve grammars, search-based algorithms, and logic programming. These approaches require humans to manually
define the range of content possible, meaning that a human developer decides what features make up a valid piece of generated content. Machine learning is theoretically capable of learning these features when given examples to train off of, thus greatly reducing the complicated step of developers specifying the details of content design. Machine learning techniques used for content generation include long short-term memory (LSTM) recurrent neural networks (RNN), generative adversarial networks (GAN), and K-means clustering. Not all of these techniques make use of ANNs, but the rapid development of deep learning has greatly increased the potential of techniques that do. === Galactic Arms Race === Galactic Arms Race is a space shooter video game that uses neuroevolution-powered PCG to generate unique weapons for the player. This game was a finalist in the 2010 Indie Game Challenge and its related research paper won the Best Paper Award at the 2009 IEEE Conference on Computational Intelligence and Games. The developers use a form of neuroevolution called cgNEAT to generate new content based on each player's personal preferences. Each generated item is represented by a special ANN known as a Compositional Pattern Producing Network (CPPN). During the evolutionary phase of the game, cgNEAT calculates the fitness of current items based on player usage and other gameplay metrics; this fitness score is then used to decide which CPPNs will reproduce to create a new item. The end result is the generation of new weapon effects based on the player's preferences. === Super Mario Bros. === Super Mario Bros. has been used by several researchers to simulate PCG level creation. Various attempts have used different methods. A version in 2014 used n-grams to generate levels similar to the ones it trained on, which was later improved by making use of MCTS to guide generation. These
generations were often not optimal when taking gameplay metrics such as player movement into account; a separate research project in 2017 tried to resolve this problem by generating levels based on player movement using Markov chains. These projects were not subjected to human testing and may not meet human playability standards. === The Legend of Zelda === PCG level creation for The Legend of Zelda has been attempted by researchers at the University of California, Santa Cruz. This attempt made use of a Bayesian network to learn high-level knowledge from existing levels, while principal component analysis (PCA) was used to represent the different low-level features of these levels. The researchers used PCA to compare generated levels to human-made levels and found that they were considered very similar. This test did not include playability or human testing of the generated levels. == Deep learning for content generation in games == The introduction first of generative adversarial networks and then of diffusion models has made it possible to generate in-game content at runtime using non-procedural approaches. Examples include: The 3D printer available in InZOI (available in early access), a life simulation game developed by InZOI Studio and published by Krafton. Given a 2D image file provided by the user, the 3D printer generates the corresponding 3D object, which the user can then place in the game or use to decorate their avatar. The terrain generation solution implemented in Prologue: Go Wayback! (early access: to be announced, with play tests under way in early 2025), a survival game developed by PlayerUnknown Productions. The approach generates a new game map every time the game is launched. == Music generation == Music is often seen in video games and can be a crucial element for influencing the mood of different situations and story points.
Machine learning has seen use in the experimental field of music generation; it is uniquely suited to processing raw unstructured data and forming high-level representations that could be applied to the diverse field of music. Most attempted methods have involved the use of ANNs in some form. Methods include the use of basic feedforward neural networks, autoencoders, restricted Boltzmann machines, recurrent neural networks, convolutional neural networks, generative adversarial networks (GANs), and compound architectures that use multiple methods. === VRAE video game melody symbolic music generation system === The 2014 research paper on "Variational Recurrent Auto-Encoders" attempted to generate music based on songs from eight different video games. This project is one of the few conducted purely on video game music. The neural network in the project was able to generate data that was very similar to the data of the games it trained on. However, the generated data did not translate into good-quality music. == References == == External links ==
Flow-FISH (fluorescence in-situ hybridization) is a cytogenetic technique to quantify the copy number of RNA or specific repetitive elements in genomic DNA of whole cell populations via the combination of flow cytometry with cytogenetic fluorescent in situ hybridization staining protocols. Flow-FISH is most commonly used to quantify the length of telomeres, which are stretches of repetitious DNA (hexameric TTAGGG repeats) at the distal ends of chromosomes in human white blood cells, and a semi-automated method for doing so was published in Nature Protocols. Telomere length in white blood cells has been a subject of interest because telomere length in these cell types (and also of other somatic tissues) declines gradually over the human lifespan, resulting in cell senescence, apoptosis, or transformation. This decline has been shown to be a surrogate marker for the concomitant decline in the telomere length of the hematopoietic stem cell pool, with the granulocyte lineage giving the best indication, presumably due to the absence of a long-lived memory subtype and the comparatively rapid turnover of these cells. Flow-FISH is also suitable for the concomitant detection of RNA and protein. This allows for the identification of cells that not only express a gene, but also translate it into protein. This type of Flow-FISH has been used to study latent infection by viruses such as HIV-1 and EBV, but also to track single-cell gene expression and translation into protein. == Q-FISH to flow-FISH == Flow-FISH was first published in 1998 by Rufer et al. as a modification of another technique for analyzing telomere length, Q-FISH, which employs peptide nucleic acid (PNA) probes of a 3'-CCCTAACCCTAACCCTAA-5' sequence labeled with a fluorescein fluorophore to stain telomeric repeats on prepared metaphase spreads of cells that have been treated with colcemid, hypotonic shock, and fixation to slides via methanol/acetic acid treatment. Images
of the resultant fluorescent spots could then be analyzed via a specialized computer program to yield quantitative fluorescence values that can then be used to estimate actual telomere length. The fluorescence yielded by probe staining is considered to be quantitative because PNA binds preferentially to DNA at low ionic salt concentrations and in the presence of formamide; under these conditions the DNA duplex may not reform once it has been melted and annealed to PNA probe, allowing the probe to saturate its target repeat sequence (as it is not displaced from the target DNA by competing antisense DNA on the complementary strand), thus yielding a reliable and quantifiable readout of the frequency of the PNA probe target at a given chromosomal site after washing away of unbound probe. === Innovation === Unlike Q-FISH, flow-FISH utilizes the quantitative properties of telomere-specific PNA probe retention to quantify median fluorescence in a population of cells via the use of a flow cytometer, instead of a fluorescence microscope. The primary advantage of this technique is that it eliminates the time required in Q-FISH to prepare metaphase spreads of cells of interest, and that flow cytometric analysis is also considerably faster than the methods required to acquire and analyze Q-FISH prepared slides. Flow-FISH thus allows for a higher-throughput analysis of telomere length in blood leukocytes, which are a readily available form of human tissue sample. The most recent versions of the flow-FISH technique include an internal control population of cow thymocytes with a known telomere length, determined by telomere restriction fragment (TRF) analysis, to which the fluorescence of a given unknown sample may be compared. Because cow thymocytes take up LDS751 dye to a lesser extent than their human counterparts, they may be reliably differentiated via plotting and gating the desired populations. Other cell
types that have not in the past proven to be good candidates for flow-FISH can be analyzed via extraction of nuclei and performance of the technique on them directly. == References ==
The International Plant Nutrition Colloquium (IPNC) is an international conference held every four years for the promotion of research within the field of plant nutrition. Prior to 1981, it was known as the International Colloquium on Plant Analysis and Fertiliser Problems. The IPNC is organised by the International Plant Nutrition Council, which "seeks to advance science-based non-commercial research and education in plant nutrition in order to highlight the importance of this scientific field for crop production, food security, human health and sustainable environmental protection". The IPNC is considered the most important international meeting on plant nutrition globally, with more than 800 delegates attending each meeting. The IPNC covers research in the fields of plant mineral nutrition, plant molecular biology, plant genetics, agronomy, horticulture, ecology, environmental sciences, and fertilizer use and production. In honour of Professor Horst Marschner, who was a passionate supporter of students and young researchers, the IPNC has established the Marschner Young Scientist Award for outstanding early-career researchers and PhD students with the potential to become future research leaders. The current President of the International Plant Nutrition Council is Professor Ciro A. Rosolem of São Paulo State University. The next IPNC is to be held in Iguazu Falls, Brazil, from 22 to 27 August 2022. Past and future locations for the IPNC: == References == == External links == IPNC 2017 Official Website IPNC 2013 Official Website IPNC 2009 Official Website
{ "page_id": 53218057, "source": null, "title": "International Plant Nutrition Colloquium" }
Anabaena variabilis is a species of filamentous cyanobacterium. This species, of the genus Anabaena and the domain Eubacteria, is capable of photosynthesis. It is also facultatively heterotrophic, meaning that it may grow without light in the presence of fructose. It also can convert atmospheric dinitrogen to ammonia via nitrogen fixation. Anabaena variabilis is a phylogenetic cousin of the better-known species Nostoc spirrilum. Both of these species, along with many other cyanobacteria, are known to form symbiotic relationships with plants. Other cyanobacteria are known to form symbiotic relationships with diatoms, though no such relationship has been observed with Anabaena variabilis. Anabaena variabilis is also a model organism for studying the beginnings of multicellular life due to its filamentous characterization and cellular-differentiation capabilities. == References == Page 12 Aurora by Kim Stanley Robinson. == Further reading == Ungerer, Justin; Brenda Pratte; Teresa Thiel (December 2008). "Regulation of fructose transport and its effect on fructose toxicity in Anabaena spp". Journal of Bacteriology. 190 (24): 8115–25. doi:10.1128/JB.00886-08. PMC 2593219. PMID 18931119. Kaplan, Aaron; Badger, Murray R.; Berry, Joseph A. (1980). "Photosynthesis and the intracellular inorganic carbon pool in the bluegreen alga Anabaena variabilis: Response to external CO2 concentration". Planta. 149 (3): 219–26. doi:10.1007/BF00384557. PMID 24306290. S2CID 20135236. Islam, MS; Drasar, BS; Bradley, DJ (1990). "Long-term persistence of toxigenic Vibrio cholerae 01 in the mucilaginous sheath of a blue-green alga, Anabaena variabilis". The Journal of Tropical Medicine and Hygiene. 93 (2): 133–9. PMID 2109096. Pearce, J.; Leach, C. K.; Carr, N. G. (1969). "The Incomplete Tricarboxylic Acid Cycle in the Blue-green Alga Anabaena Variabilis". Journal of General Microbiology. 55 (3): 371–8. doi:10.1099/00221287-55-3-371. PMID 5783887. Volokita, M.; Zenvirth, D.; Kaplan, A.; Reinhold, L.
(1984). "Nature of the Inorganic Carbon Species Actively Taken Up by the Cyanobacterium Anabaena variabilis". Plant Physiology. 76 (3): 599–602. doi:10.1104/pp.76.3.599. PMC 1064339. PMID
{ "page_id": 31525643, "source": null, "title": "Anabaena variabilis" }
16663890.
In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond. == Molecular description == Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats. == Health == Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic-acid diet than with a palmitic-acid diet, and the same study associated higher monounsaturated fat intake with less anger and irritability. Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol. Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d). In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles. The Mediterranean diet is one heavily influenced by monounsaturated fats.
In the late 20th century, people in Mediterranean countries consumed more total fat than Northern European countries, but most of the fat was in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption
{ "page_id": 1051404, "source": null, "title": "Monounsaturated fat" }
of saturated fat was minimal in comparison. A 2017 review found evidence that the practice of a Mediterranean diet could lead to a decreased risk of cardiovascular diseases, overall cancer incidence, neurodegenerative diseases, diabetes, and early death. A 2018 review showed that the practice of the Mediterranean diet may improve overall health status, such as the reduced risk of non-communicable diseases. It also may reduce the social and economic costs of diet-related illnesses. === Diabetes === Increasing monounsaturated fat and decreasing saturated fat intake could improve insulin sensitivity, but only when the overall fat intake of the diet was low. However, some monounsaturated fatty acids (in the same way as saturated fats) may promote insulin resistance, whereas polyunsaturated fatty acids may be protective against insulin resistance. == Sources == Monounsaturated fats are found in animal flesh such as red meat, whole milk products, nuts, and high fat fruits such as olives and avocados. Algal oil is about 92% monounsaturated fat. Olive oil is about 75% monounsaturated fat. The high oleic variety sunflower oil contains at least 70% monounsaturated fat. Canola oil and cashews are both about 58% monounsaturated fat. Tallow (beef fat) is about 50% monounsaturated fat, and lard is about 40% monounsaturated fat. Other sources include hazelnut, avocado oil, macadamia nut oil, grapeseed oil, groundnut oil (peanut oil), sesame oil, corn oil, popcorn, whole grain wheat, cereal, oatmeal, almond oil, sunflower oil, hemp oil, and tea-oil Camellia. == See also == High density lipoprotein Fatty acid synthesis == References == == External links == Fats (Mayo Clinic) The Chemistry of Unsaturated Fats
{ "page_id": 1051404, "source": null, "title": "Monounsaturated fat" }
Biology: The Unity and Diversity of Life is an introductory textbook of biology, for students. The fifteenth edition was published in 2019, by Cengage Learning. It was compiled by Cecie Starr and Ralph Taggart with pictures and illustrations by Lisa Starr. Its contents include concepts in molecular biology and biochemistry, genetics, biotechnology, reproduction and embryonic development, anatomy and physiology of plants and animals, evolution, taxonomy, and ecology. == References ==
{ "page_id": 5049099, "source": null, "title": "Biology: The Unity and Diversity of Life" }
Adaptive type – in evolutionary biology – is any population or taxon with the potential to occupy, partially or fully, a given free or underutilized habitat or position in the general economy of nature. In an evolutionary sense, a new adaptive type usually emerges through the adaptive radiation of a group of organisms, in the course of which categories arise that can effectively exploit temporary or new environmental conditions. Such evolutionary units, with their distinctive morphological, anatomical, physiological and other characteristics, i.e. their genetic adjustments, are predisposed to occupy a particular habitat or position in the general economy of nature. Put simply, an adaptive type is a group of organisms whose general biological properties are the key that opens the entrance to a given adaptive zone within a given natural ecological complex. Adaptive types are spatially and temporally specific. Since the limits of the general biological properties of these types are essentially genetically defined, the emergence of a new adaptive type in effect requires a corresponding change in population genetic structure, within the perpetual tension between the need for optimal adaptation to current living conditions and the maintenance of genetic variation for survival under possible new circumstances. For example, the specific place in the economy of nature occupied by the human type had existed for millions of years before humans appeared. Only when the evolution of the primates (order Primates) reached a level able to occupy that position was it opened, and then (across the living world) it spread with unprecedented acceleration. Culture, in the broadest sense, is the key adaptation of the adaptive type Homo sapiens for the occupation of the existing adaptive zone through work, also in the broadest sense of the term. == References == == See also == Adaptive zone Adaptation Progressive evolution
{ "page_id": 53742353, "source": null, "title": "Adaptive type" }
Speciation
{ "page_id": 53742353, "source": null, "title": "Adaptive type" }
Eugeniusz Kwiatkowski Monument (Polish: Pomnik Eugeniusza Kwiatkowskiego) is a sculpture in Warsaw, Poland, located in the Royal Baths Park, within the neighbourhood of Ujazdów in the Downtown district. The monument has the form of a bronze bust of Eugeniusz Kwiatkowski, a 20th-century politician, chemist, and economist, who was a minister of industry and trade, minister of treasury, and the deputy prime minister of Poland. The sculpture is placed outside the Myślewice Palace. It was designed by Andrzej Renes, and unveiled on 24 June 2002. == History == The monument was dedicated to Eugeniusz Kwiatkowski, a 20th-century politician, chemist, and economist, who was a minister of industry and trade, minister of treasury, and the deputy prime minister of Poland. During his term, he initiated the creation of the Central Industrial Region and the expansion of the Port of Gdynia. The monument was designed by Andrzej Renes, and unveiled on 24 June 2002. It was part of the celebrations of the "year of Eugeniusz Kwiatkowski", designated as such by the Seym of Poland. The sculpture was unveiled by Ewa Kwiatkowska, Eugeniusz's daughter, and Tomasz Nałęcz, deputy marshal of the Seym. The monument was financed by Zbigniew Jakubas, a boss of the company Multico. == Characteristics == The monument consists of a bronze bust of Eugeniusz Kwiatkowski, in a suit and with a bowtie, placed on a stone pedestal. It is located in the Royal Baths Park, within the neighbourhood of Ujazdów in the Downtown district. The sculpture is placed outside the Myślewice Palace, which used to be Kwiatkowski's residence while he was in office. == References ==
{ "page_id": 78318353, "source": null, "title": "Eugeniusz Kwiatkowski Monument" }
Superheavy elements, also known as transactinide elements, transactinides, or super-heavy elements, or superheavies for short, are the chemical elements with atomic number greater than 104. The superheavy elements are those beyond the actinides in the periodic table; the last actinide is lawrencium (atomic number 103). By definition, superheavy elements are also transuranium elements, i.e., having atomic numbers greater than that of uranium (92). Depending on the definition of group 3 adopted by authors, lawrencium may also be included to complete the 6d series. Glenn T. Seaborg first proposed the actinide concept, which led to the acceptance of the actinide series. He also proposed a transactinide series ranging from element 104 to 121 and a superactinide series approximately spanning elements 122 to 153 (though more recent work suggests the end of the superactinide series to occur at element 157 instead). The transactinide seaborgium was named in his honor. Superheavies are radioactive and have only been obtained synthetically in laboratories. No macroscopic sample of any of these elements has ever been produced. Superheavies are all named after physicists and chemists or important locations involved in the synthesis of the elements. IUPAC defines an element to exist if its lifetime is longer than 10−14 second, which is the time it takes for the atom to form an electron cloud. The known superheavies form part of the 6d and 7p series in the periodic table. Except for rutherfordium and dubnium (and lawrencium if it is included), all known isotopes of superheavies have half-lives of minutes or less. The element naming controversy involved elements 102–109. Some of these elements thus used systematic names for many years after their discovery was confirmed. (Usually the systematic names are replaced with permanent names proposed by the discoverers relatively soon after a discovery has been confirmed.) == Introduction ==
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
=== Synthesis of superheavy nuclei === A superheavy atomic nucleus is created in a nuclear reaction that combines two other nuclei of unequal size into one; roughly, the more unequal the two nuclei in terms of mass, the greater the possibility that the two react. The material made of the heavier nuclei is made into a target, which is then bombarded by the beam of lighter nuclei. Two nuclei can only fuse into one if they approach each other closely enough; normally, nuclei (all positively charged) repel each other due to electrostatic repulsion. The strong interaction can overcome this repulsion but only within a very short distance from a nucleus; beam nuclei are thus greatly accelerated in order to make such repulsion insignificant compared to the velocity of the beam nucleus. The energy applied to the beam nuclei to accelerate them can cause them to reach speeds as high as one-tenth of the speed of light. However, if too much energy is applied, the beam nucleus can fall apart. Coming close enough alone is not enough for two nuclei to fuse: when two nuclei approach each other, they usually remain together for about 10−20 seconds and then part ways (not necessarily in the same composition as before the reaction) rather than form a single nucleus. This happens because during the attempted formation of a single nucleus, electrostatic repulsion tears apart the nucleus that is being formed. Each pair of a target and a beam is characterized by its cross section—the probability that fusion will occur if two nuclei approach one another expressed in terms of the transverse area that the incident particle must hit in order for the fusion to occur. This fusion may occur as a result of the quantum effect in which nuclei can tunnel through electrostatic repulsion.
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
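As a rough sense of scale for the electrostatic repulsion just described, the Coulomb barrier between two touching spherical nuclei can be estimated as V = Z1*Z2*e^2/(4*pi*eps0*R), with R ≈ r0*(A1^(1/3) + A2^(1/3)). A back-of-the-envelope sketch; r0 = 1.2 fm is a common textbook choice, and the numbers should be read as order-of-magnitude estimates, not measured barrier heights:

```python
def coulomb_barrier_mev(z1, a1, z2, a2, r0=1.2):
    """Rough Coulomb barrier (MeV) for two touching spherical nuclei.

    V = Z1*Z2*e^2/(4*pi*eps0*R), with R = r0*(A1^(1/3) + A2^(1/3)),
    e^2/(4*pi*eps0) = 1.44 MeV*fm, and r0 given in femtometres.
    """
    r = r0 * (a1 ** (1 / 3) + a2 ** (1 / 3))
    return 1.44 * z1 * z2 / r

# 48Ca beam on a 249Cf target, the reaction used to synthesize oganesson:
barrier = coulomb_barrier_mev(20, 48, 98, 249)
print(f"{barrier:.0f} MeV")  # ~237 MeV under these assumptions
```

The hundreds of MeV this crude estimate yields is why beam nuclei must be accelerated to an appreciable fraction of the speed of light before fusion becomes possible at all.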
If the two nuclei can stay close past that phase, multiple nuclear interactions result in redistribution of energy and an energy equilibrium. The resulting merger is an excited state—termed a compound nucleus—and thus it is very unstable. To reach a more stable state, the temporary merger may fission without formation of a more stable nucleus. Alternatively, the compound nucleus may eject a few neutrons, which would carry away the excitation energy; if the latter is not sufficient for a neutron expulsion, the merger would produce a gamma ray. This happens in about 10−16 seconds after the initial nuclear collision and results in creation of a more stable nucleus. The definition by the IUPAC/IUPAP Joint Working Party (JWP) states that a chemical element can only be recognized as discovered if a nucleus of it has not decayed within 10−14 seconds. This value was chosen as an estimate of how long it takes a nucleus to acquire electrons and thus display its chemical properties. === Decay and detection === The beam passes through the target and reaches the next chamber, the separator; if a new nucleus is produced, it is carried with this beam. In the separator, the newly produced nucleus is separated from other nuclides (that of the original beam and any other reaction products) and transferred to a surface-barrier detector, which stops the nucleus. The exact location of the upcoming impact on the detector is marked; also marked are its energy and the time of the arrival. The transfer takes about 10−6 seconds; in order to be detected, the nucleus must survive this long. The nucleus is recorded again once its decay is registered, and the location, the energy, and the time of the decay are measured. Stability of a nucleus is provided by the strong interaction. However, its range
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
is very short; as nuclei become larger, its influence on the outermost nucleons (protons and neutrons) weakens. At the same time, the nucleus is torn apart by electrostatic repulsion between protons, and its range is not limited. Total binding energy provided by the strong interaction increases linearly with the number of nucleons, whereas electrostatic repulsion increases with the square of the atomic number, i.e. the latter grows faster and becomes increasingly important for heavy and superheavy nuclei. Superheavy nuclei are thus theoretically predicted and have so far been observed to predominantly decay via decay modes that are caused by such repulsion: alpha decay and spontaneous fission. Almost all alpha emitters have over 210 nucleons, and the lightest nuclide primarily undergoing spontaneous fission has 238. In both decay modes, nuclei are inhibited from decaying by corresponding energy barriers for each mode, but they can be tunneled through. Alpha particles are commonly produced in radioactive decays because the mass of an alpha particle per nucleon is small enough to leave some energy for the alpha particle to be used as kinetic energy to leave the nucleus. Spontaneous fission is caused by electrostatic repulsion tearing the nucleus apart and produces various nuclei in different instances of identical nuclei fissioning. As the atomic number increases, spontaneous fission rapidly becomes more important: spontaneous fission partial half-lives decrease by 23 orders of magnitude from uranium (element 92) to nobelium (element 102), and by 30 orders of magnitude from thorium (element 90) to fermium (element 100). The earlier liquid drop model thus suggested that spontaneous fission would occur nearly instantly due to disappearance of the fission barrier for nuclei with about 280 nucleons. The later nuclear shell model suggested that nuclei with about 300 nucleons would form an island of stability in which nuclei will be more
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
resistant to spontaneous fission and will primarily undergo alpha decay with longer half-lives. Subsequent discoveries suggested that the predicted island might be further than originally anticipated; they also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects. Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater than previously anticipated stability against spontaneous fission, showing the importance of shell effects on nuclei. Alpha decays are registered by the emitted alpha particles, and the decay products are easy to determine before the actual decay; if such a decay or a series of consecutive decays produces a known nucleus, the original product of a reaction can be easily determined. (That all decays within a decay chain were indeed related to each other is established by the location of these decays, which must be in the same place.) The known nucleus can be recognized by the specific characteristics of decay it undergoes such as decay energy (or more specifically, the kinetic energy of the emitted particle). Spontaneous fission, however, produces various nuclei as products, so the original nuclide cannot be determined from its daughters. The information available to physicists aiming to synthesize a superheavy element is thus the information collected at the detectors: location, energy, and time of arrival of a particle to the detector, and those of its decay. The physicists analyze this data and seek to conclude that it was indeed caused by a new element and could not have been caused by a different nuclide than the one claimed. Often, provided data is insufficient for a conclusion that a new element was definitely created and there is no other explanation for the observed effects; errors in interpreting data have been made.
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
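The competition noted earlier, in which total binding grows roughly linearly with nucleon number A while Coulomb repulsion grows with the square of the atomic number Z, is condensed in the liquid-drop model into the ratio Z^2/A: the fission barrier disappears once this ratio exceeds roughly 48-50 (about 2*a_s/a_c, where a_s and a_c are the surface and Coulomb coefficients of the semi-empirical mass formula). A sketch of how the ratio climbs toward that limit for heavy nuclides; the threshold is a textbook approximation, not an exact constant:

```python
def fissility_ratio(z, a):
    """Z^2/A, the liquid-drop measure of susceptibility to fission.

    In the liquid-drop model the fission barrier vanishes once Z^2/A
    exceeds roughly 48-50, which is why that model predicted nearly
    instant fission for nuclei of about 280 nucleons and beyond.
    """
    return z ** 2 / a

# The ratio climbs steadily toward the liquid-drop limit:
for name, z, a in [("Th-232", 90, 232), ("U-238", 92, 238),
                   ("No-254", 102, 254), ("Og-294", 118, 294)]:
    print(f"{name}: Z^2/A = {fissility_ratio(z, a):.1f}")
```

Oganesson-294 already sits near Z^2/A of about 47, which illustrates why shell effects, absent from the liquid-drop picture, are what keep observed superheavy nuclei from fissioning instantly.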
== History == === Early predictions === The heaviest element known at the end of the 19th century was uranium, with an atomic mass of about 240 (now known to be 238) amu. Accordingly, it was placed in the last row of the periodic table; this fueled speculation about the possible existence of elements heavier than uranium and why A = 240 seemed to be the limit. Following the discovery of the noble gases, beginning with argon in 1895, the possibility of heavier members of the group was considered. Danish chemist Julius Thomsen proposed in 1895 the existence of a sixth noble gas with Z = 86, A = 212 and a seventh with Z = 118, A = 292, the last closing a 32-element period containing thorium and uranium. In 1913, Swedish physicist Johannes Rydberg extended Thomsen's extrapolation of the periodic table to include even heavier elements with atomic numbers up to 460, but he did not believe that these superheavy elements existed or occurred in nature. In 1914, German physicist Richard Swinne proposed that elements heavier than uranium, such as those around Z = 108, could be found in cosmic rays. He suggested that these elements may not necessarily have decreasing half-lives with increasing atomic number, leading to speculation about the possibility of some longer-lived elements at Z = 98–102 and Z = 108–110 (though separated by short-lived elements). Swinne published these predictions in 1926, believing that such elements might exist in Earth's core, iron meteorites, or the ice caps of Greenland where they had been locked up from their supposed cosmic origin. === Discoveries === Work performed from 1961 to 2013 at four labs – Lawrence Berkeley National Laboratory in the US, the Joint Institute for Nuclear Research in the USSR (later Russia), the GSI Helmholtz Centre
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
for Heavy Ion Research in Germany, and Riken in Japan – identified and confirmed the elements lawrencium to oganesson according to the criteria of the IUPAC–IUPAP Transfermium Working Groups and subsequent Joint Working Parties. These discoveries complete the seventh row of the periodic table. The next two elements, ununennium (Z = 119) and unbinilium (Z = 120), have not yet been synthesized. They would begin an eighth period. === List of elements === 103 Lawrencium, Lr, for Ernest Lawrence; sometimes but not always included 104 Rutherfordium, Rf, for Ernest Rutherford 105 Dubnium, Db, for the town of Dubna, near Moscow 106 Seaborgium, Sg, for Glenn T. Seaborg 107 Bohrium, Bh, for Niels Bohr 108 Hassium, Hs, for Hassia (Hesse), location of Darmstadt 109 Meitnerium, Mt, for Lise Meitner 110 Darmstadtium, Ds, for Darmstadt 111 Roentgenium, Rg, for Wilhelm Röntgen 112 Copernicium, Cn, for Nicolaus Copernicus 113 Nihonium, Nh, for Nihon (Japan), location of the Riken institute 114 Flerovium, Fl, for Russian physicist Georgy Flyorov 115 Moscovium, Mc, for Moscow 116 Livermorium, Lv, for Lawrence Livermore National Laboratory 117 Tennessine, Ts, for Tennessee, location of Oak Ridge National Laboratory 118 Oganesson, Og, for Russian physicist Yuri Oganessian == Characteristics == Due to their short half-lives (for example, the most stable known isotope of seaborgium has a half-life of 14 minutes, and half-lives decrease with increasing atomic number) and the low yield of the nuclear reactions that produce them, new methods have had to be created to determine their gas-phase and solution chemistry based on very small samples of a few atoms each. Relativistic effects become very important in this region of the periodic table, causing the filled 7s orbitals, empty 7p orbitals, and filling 6d orbitals to all contract inward toward the atomic nucleus. This causes a relativistic stabilization of the
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
7s electrons and makes the 7p orbitals accessible in low excitation states. Elements 103 to 112, lawrencium to copernicium, form the 6d series of transition elements. Experimental evidence shows that elements 103–108 behave as expected for their position in the periodic table, as heavier homologs of lutetium through osmium. They are expected to have ionic radii between those of their 5d transition metal homologs and their actinide pseudohomologs: for example, Rf4+ is calculated to have ionic radius 76 pm, between the values for Hf4+ (71 pm) and Th4+ (94 pm). Their ions should also be less polarizable than those of their 5d homologs. Relativistic effects are expected to reach a maximum at the end of this series, at roentgenium (element 111) and copernicium (element 112). Nevertheless, many important properties of the transactinides are still not yet known experimentally, though theoretical calculations have been performed. Elements 113 to 118, nihonium to oganesson, should form a 7p series, completing the seventh period in the periodic table. Their chemistry will be greatly influenced by the very strong relativistic stabilization of the 7s electrons and a strong spin–orbit coupling effect "tearing" the 7p subshell apart into two sections, one more stabilized (7p1/2, holding two electrons) and one more destabilized (7p3/2, holding four electrons). Lower oxidation states should be stabilized here, continuing group trends, as both the 7s and 7p1/2 electrons exhibit the inert-pair effect. These elements are expected to largely continue to follow group trends, though with relativistic effects playing an increasingly larger role. In particular, the large 7p splitting results in an effective shell closure at flerovium (element 114) and hence a much higher than expected chemical activity for oganesson (element 118). Oganesson is the last known element. The next two elements, 119 and 120, should form an 8s series and be an
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
alkali and alkaline earth metal respectively. The 8s electrons are expected to be relativistically stabilized, so that the trend toward higher reactivity down these groups will reverse and the elements will behave more like their period 5 homologs, rubidium and strontium. The 7p3/2 orbital is still relativistically destabilized, potentially giving these elements larger ionic radii and perhaps even being able to participate chemically. In this region, the 8p electrons are also relativistically stabilized, resulting in a ground-state 8s28p1 valence electron configuration for element 121. Large changes are expected to occur in the subshell structure in going from element 120 to element 121: for example, the radius of the 5g orbitals should drop drastically, from 25 Bohr units in element 120 in the excited [Og] 5g1 8s1 configuration to 0.8 Bohr units in element 121 in the excited [Og] 5g1 7d1 8s1 configuration, in a phenomenon called "radial collapse". Element 122 should add either a further 7d or a further 8p electron to element 121's electron configuration. Elements 121 and 122 should be similar to actinium and thorium respectively. At element 121, the superactinide series is expected to begin, when the 8s electrons and the filling 8p1/2, 7d3/2, 6f5/2, and 5g7/2 subshells determine the chemistry of these elements. Complete and accurate calculations are not available for elements beyond 123 because of the extreme complexity of the situation: the 5g, 6f, and 7d orbitals should have about the same energy level, and in the region of element 160 the 9s, 8p3/2, and 9p1/2 orbitals should also be about equal in energy. This will cause the electron shells to mix so that the block concept no longer applies very well, and will also result in novel chemical properties that will make positioning these elements in a periodic table very difficult. == Beyond superheavy
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
elements == It has been suggested that elements beyond Z = 126 be called beyond superheavy elements. Other sources refer to elements around Z = 164 as hyperheavy elements. == See also == Bose–Einstein condensate (also known as Superatom) Island of stability == Notes == == References == === Bibliography === Audi, G.; Kondev, F. G.; Wang, M.; et al. (2017). "The NUBASE2016 evaluation of nuclear properties". Chinese Physics C. 41 (3). 030001. Bibcode:2017ChPhC..41c0001A. doi:10.1088/1674-1137/41/3/030001. pp. 030001-1–030001-17, pp. 030001-18–030001-138, Table I. The NUBASE2016 table of nuclear and decay properties Beiser, A. (2003). Concepts of modern physics (6th ed.). McGraw-Hill. ISBN 978-0-07-244848-1. OCLC 48965418. Hoffman, D. C.; Ghiorso, A.; Seaborg, G. T. (2000). The Transuranium People: The Inside Story. World Scientific. ISBN 978-1-78-326244-1. Kragh, H. (2018). From Transuranic to Superheavy Elements: A Story of Dispute and Creation. Springer. ISBN 978-3-319-75813-8. Zagrebaev, V.; Karpov, A.; Greiner, W. (2013). "Future of superheavy element research: Which nuclei could be synthesized within the next few years?". Journal of Physics: Conference Series. 420 (1). 012001. arXiv:1207.5700. Bibcode:2013JPhCS.420a2001Z. doi:10.1088/1742-6596/420/1/012001. ISSN 1742-6588.
{ "page_id": 2231059, "source": null, "title": "Superheavy element" }
Stephan Swanson came to prominence as a marine researcher when he successfully placed a satellite transmitter on the famous great white shark Nicole, the first great white shark ever to be tracked on a 20,000 kilometer migration from South Africa to Australia and back. Due to his ability to handle large marine predators, such as the great white shark, he was contracted as an expedition biologist to travel to Guadeloupe and place satellite transmitters on the dorsal fins of great whites. His historic capture and release of a 5m long, 1800 kilogram great white shark is documented in the National Geographic marine special "Ultimate Shark". Swanson is currently co-owner of False Bay White Shark Adventures (trading as Shark Explorers), which was established in 2008. They provide shark scuba diving excursions. == Scientific Articles == Developing techniques and procedures for large shark telemetry White shark abundance not a causative factor in shark attack incidence White shark cage diving - Cause for concern? Transoceanic migrations of a white shark == Television appearances == Ultimate Shark - National Geographic Sharkville - National Geographic archived from the original at the Wayback Machine Pasella - SABC2 == References == == External links == Stephan Swanson at ResearchGate Project To Monitor The Great White Shark archived from the original at the Wayback Machine Transoceanic Migration, Spatial Dynamics, and Population Linkages of White Sharks Shark Nicole to Australia and Back
{ "page_id": 18877204, "source": null, "title": "Stephan Swanson" }
Laboratory of Solid State Microstructure (LSSMS, Chinese: 固体微结构物理国家重点实验室) is located in Nanjing University, China. It is a key laboratory in physics, associated with faculties such as the schools of physics and electronics and the department of materials of the engineering school at Nanjing University. The Laboratory has accomplished many achievements and enjoys international fame. Nature magazine listed it as one of the two best research groups in East Asia outside Japan approaching or at world-class standards. The Institute for Scientific Information listed it as the No. 1 laboratory in China, as published in Science magazine. == History == In 1984, the Nanjing University Institute of Solid State Physics was renamed the State Key Laboratory of Solid State Microstructures of Nanjing University, which at the time was mainly associated with the Department of Physics of Nanjing University. The Nanjing National Laboratory of Microstructures, based mainly upon the LSSMS and the LCC (State Key Laboratory of Coordination Chemistry) at Nanjing University, formally began to be established in 2006, with an estimated investment of RMB 300 million; before that, in 2004, NU had received an endowment of RMB 50 million from the Cyrus Tang Foundation for its establishment, and the National Microstructures Laboratory Building - the Cyrus Tang Building - was completed in 2007. == Research areas == Physics of microstructured dielectric materials Nano-structured materials and physics Aggregations and pattern formation under non-equilibrium conditions Dynamics of microstructural assembly and modulation Strong correlation effect in solids Phase transitions Other related microstructural physics in solids == Notable scientists == Feng Duan Ming Naiben == Notes ==
{ "page_id": 3017492, "source": null, "title": "Laboratory of Solid State Microstructure, Nanjing University" }
Cytochrome P450, family 26, also known as CYP26, is a mammalian cytochrome P450 monooxygenase family found in the human genome. There are three members in the human genome: CYP26A1, CYP26B1 and CYP26C1. Synteny mapping of CYP26 family members shows linkages to CYP16 family members of many invertebrates, suggesting that the tetrapod CYP26 may have evolved from the CYP16 of fish. == References ==
{ "page_id": 71174935, "source": null, "title": "CYP26 family" }
Écriture féminine, or "women's writing", is a term coined by French feminist and literary theorist Hélène Cixous in her 1975 essay "The Laugh of the Medusa". Cixous aimed to establish a genre of literary writing that deviates from traditional masculine styles of writing, one which examines the relationship between the cultural and psychological inscription of the female body and female difference in language and text. This strand of feminist literary theory originated in France in the early 1970s through the works of Cixous and other theorists including Luce Irigaray, Chantal Chawaf, Catherine Clément and Julia Kristeva, and has subsequently been expanded upon by writers such as psychoanalytic theorist Bracha Ettinger, who emerged in this field in the early 1990s. Écriture féminine as a theory foregrounds the importance of language for the psychic understanding of self. Cixous is searching for what Isidore Isou refers to as the "hidden signifier" in language, which expresses the ineffable and what cannot be expressed in structuralist language. It has been suggested by Cixous herself that more free and flowing styles of writing, such as stream of consciousness, have a more "feminine" structure and tone than more traditional modes of writing. This theory draws on foundational work in psychoanalysis about the way that humans come to understand their social roles. In doing so, it goes on to expound how women, who may be positioned as 'other' in a masculine symbolic order, can reaffirm their understanding of the world through engaging with their own otherness, both within and outside their own minds, or consciousness. == Cixous == Hélène Cixous first coined écriture féminine in her essay "The Laugh of the Medusa" (1975), where she asserts "woman must write her self: must write about women and bring women to writing, from which they have been
{ "page_id": 854811, "source": null, "title": "Écriture féminine" }
driven away as violently as from their bodies" because their sexual pleasure has been repressed and denied expression. Inspired by Cixous' essay, a recent book titled Laughing with Medusa (2006) analyzes the collective work of Julia Kristeva, Luce Irigaray, Bracha Ettinger and Hélène Cixous. These writers are as a whole referred to by Anglophones as "the French feminists," though Mary Klages, Associate Professor in the English Department at the University of Colorado at Boulder, has pointed out that "poststructuralist theoretical feminists" would be a more accurate term. Madeleine Gagnon is a more recent proponent. Since 1975, when Cixous also founded women's studies at Vincennes, she has been a spokeswoman for the group Psychanalyse et politique and a prolific writer of texts for their publishing house, des femmes. When asked about her own writing she says, "Je suis là où ça parle" ("I am there where it/id/the female unconscious speaks.") American feminist critic and writer Elaine Showalter defines this movement as "the inscription of the feminine body and female difference in language and text." Écriture féminine places experience before language, and privileges non-linear, cyclical writing that evades "the discourse that regulates the phallocentric system." Because language is not a neutral medium, it can be said to function as an instrument of patriarchal expression. As Peter Barry writes, "the female writer is seen as suffering the handicap of having to use a medium (prose writing) which is essentially a male instrument fashioned for male purposes". Écriture féminine thus exists as an antithesis of masculine writing, or as a means of escape for women. In the words of Rosemarie Tong, "Cixous challenged women to write themselves out of the world men constructed for women. She urged women to put themselves-the unthinkable/unthought-into words." Almost everything is yet to be
written by women about femininity: about their sexuality, that is, its infinite and mobile complexity; about their eroticization, sudden turn-ons of a certain minuscule-immense area of their bodies; not about destiny, but about the adventure of such and such a drive, about trips, crossings, trudges, abrupt and gradual awakenings, discoveries of a zone at once timorous and soon to be forthright. With regard to phallogocentric writing, Tong argues that "male sexuality, which centers on what Cixous called the 'big dick', is ultimately boring in its pointedness and singularity. Like male sexuality, masculine writing, which Cixous usually termed phallogocentric writing, is also ultimately boring" and furthermore, that "stamped with the official seal of social approval, masculine writing is too weighted down to move or change". Write, let no one hold you back, let nothing stop you: not man; not the imbecilic capitalist machinery, in which the publishing houses are the crafty, obsequious relayers of imperatives handed down by an economy that works against us and off our backs; not yourself. Smug-faced readers, managing editors, and big bosses don't like the true texts of women - female-sexed texts. That kind scares them. For Cixous, écriture féminine is not only a possibility for female writers; rather, she believes it can be (and has been) employed by male authors such as James Joyce or Jean Genet. Some have found this idea difficult to reconcile with Cixous' definition of écriture féminine (often termed 'white ink') because of the many references she makes to the female body ("There is always in her at least a little of that good mother's milk. She writes in white ink.") when characterizing the essence of écriture féminine and explaining its origin. This notion raises problems for some theorists: "Ecriture féminine, then, is by its nature transgressive, rule-transcending, intoxicated, but it is
clear that the notion as put forward by Cixous raises many problems. The realm of the body, for instance, is seen as somehow immune to social and gender condition and able to issue forth a pure essence of the feminine. Such essentialism is difficult to square with feminism which emphasizes femininity as a social construction..." == Irigaray and Kristeva == For Luce Irigaray, women's sexual pleasure, jouissance, cannot be expressed by the dominant, ordered, "logical," masculine language because, according to Kristeva, feminine language is derived from the pre-oedipal period of fusion between mother and child which she termed the semiotic. Associated with the maternal, feminine language (which Irigaray called parler femme, womanspeak) is not only a threat to culture, which is patriarchal, but also a medium through which women may be creative in new ways. Irigaray expressed this connection between women's sexuality and women's language through the following analogy: women's jouissance is more multiple than men's unitary, phallic pleasure because "woman has sex organs just about everywhere...feminine language is more diffusive than its 'masculine counterpart'. That is undoubtedly the reason...her language...goes off in all directions and...he is unable to discern the coherence." Irigaray and Cixous also go on to emphasize that women, historically limited to being sexual objects for men (virgins or prostitutes, wives or mothers), have been prevented from expressing their sexuality in itself or for themselves. If they can do this, and if they can speak about it in the new languages it calls for, they will establish a point of view (a site of difference) from which phallogocentric concepts and controls can be seen through and taken apart, not only in theory, but also in practice. == Ettinger == Bracha L. Ettinger invented a field of notions and concepts to address and become aware of affects, feeling
and trans-subjective connectivity that originates in the subject and humanizes her and him, according to Ettinger, via the feminine sexuality, pre-maternal experiences and maternal potentiality. Ettinger's language, developed slowly from 1985 onward in poetic writing in artist's books and in academic writing, includes her original concepts like: matrixial time-space, matrixial space, metramorphosis, com-passion, coemergence, cofading, copoiesis, wit(h)nessing, fascinance, carriance, psychic pregnance, distance-in-proximity, borderlinking, borderspacing, proximity-in-distance, matrixial feminine/prenatal Encounter-event and ethical seduction-into-life. Many writers in the fields of film theory, psychoanalysis, ethics, aesthetics, literature studies, contemporary art and art history are using the Ettingerian matrixial sphere (matricial sphere) in their analysis of contemporary and historical material. == Critiques == The approach through language to feminist action has been criticised by some as over-theoretical: for such critics, the fact that the very first meeting of a handful of would-be feminist activists in 1970 managed only to launch an acrimonious theoretical debate marked the situation as typically 'French' in its apparent insistence on the primacy of theory over politics. Nonetheless, in practice the French women's movement developed in much the same way as the feminist movements elsewhere in Europe or in the United States: French women participated in consciousness-raising groups; demonstrated in the streets on 8 March; fought hard for women's right to choose whether to have children; raised the issue of violence against women; and struggled to change public opinion on issues concerning women and women's rights.
Further criticisms of écriture féminine include what some claim is an essentialist view of the body and the consequent reliance on a feminism of 'difference', which, according to Diana Holmes, for instance, tends to "demonize masculinity as the repository of all that (at least from a post-'68, broadly Left perspective) is negative." It also, says Holmes in French Women's Writings, 1848-1994
(1996), would exclude much of women's writing from the feminist canon. == Literary examples == As a result of the difficulties inherent in the notion of "écriture féminine", very few books of literary criticism have run the risk of using it as a critical tool. A. S. Byatt offers: "There is a marine and salty female wave-water to be...read as a symbol of female language, which is partly suppressed, partly self-communing, dumb before the intruding male and not able to speak out...thus mirroring those female secretions which are not inscribed in our daily use of language (langue, tongue)". == See also == Assia Djebar Gynocriticism Postmodern feminism == Notes == == External links == "The Laugh of the Medusa" Resource Page Writing the Body: Toward an Understanding of l'Écriture féminine Strategies of Difference and Opposition Hélène Cixous' writing strategy of écriture féminine. 'Feminist Theory - An Overview'
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point. Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept. == Philosophical atomism == The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based on philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units. == Groundwork == Working in the late 17th century, Robert Boyle developed the concept of a chemical element as a substance different from a compound. Near the end of the 18th century,
{ "page_id": 2844, "source": null, "title": "History of atomic theory" }
a number of important developments in chemistry emerged without referring to the notion of an atomic theory. The first was Antoine Lavoisier, who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving these are elements. Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction, matter neither appears nor disappears into thin air; the total mass remains the same even if the substances involved were transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This law distinguished compounds from mixtures.
A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, but he got them right in the following examples: Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. The modern equivalents of his terms would be monoxide and dioxide. Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron",
which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron", which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide and iron(III) oxide and their formulas are FeO and Fe2O3 respectively. Iron(II) oxide's formula is normally written as FeO, but since it is a crystalline substance one could alternatively write it as Fe2O2, and when we contrast that with Fe2O3, the 2:3 ratio stands out plainly. Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 44.05% nitrogen and 55.95% oxygen, which means there is 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 29.5% nitrogen and 70.5% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO,
and NO2. Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles. Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second. Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century. Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of the myriad
atoms as multiples of the weight of the hydrogen atom, hydrogen being, as Dalton knew, the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th-century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct, relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. The formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074. == Opposition to atomic theory == Dalton's atomic
theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4. The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century. One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive". Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom", instead using the term "elementary molecule". Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought this only existed in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that cannot be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions. The modern definitions of atom and molecule—an
atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the latter half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro. In his words: "The various quantities of a particular element involved in the constitution of different molecules are integral multiples of a fundamental quantity that always manifests itself as an indivisible entity and which must properly be named atom." Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction. A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of
directly observing atoms. They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systematize what patterns they could directly observe. This generation of anti-atomists can be grouped into two camps. The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticists", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypotheses about reality altogether. In their view, only energy as part of thermodynamics should be the basis of physical models. These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties. == Isomerism == Scientists discovered that some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (both are now known to have the formula AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements. In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic
molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane. Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types. == Mendeleev's periodic table == Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because they showed that the elements could be categorized by their atomic weight. Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions. The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a
group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is based on atomic number, which is equivalent to the nuclear charge; this change had to wait for the discovery of the nucleus. In addition, an entire row of the table was not shown because the noble gases had not been discovered when Mendeleev devised his table. == Statistical mechanics == In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of particles. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. These results were largely ignored for a century. James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function. Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of thermodynamics, especially the second law relating to entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics." Boltzmann defended the atomistic hypothesis against major detractors of the time like Ernst Mach or energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality. At the beginning of the 20th century, Albert Einstein independently reinvented Gibbs' laws, because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would
"not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate. === Brownian motion === In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms. == Discovery of the electron == Atoms were thought to be the smallest possible division of matter until 1899 when J. J. Thomson discovered the electron through his work on cathode rays. A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen atom, the smallest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was. In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had
mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles. In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 coulombs). In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. By this combination he showed that the electron's mass was 0.0014 times that of hydrogen ions. These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons, following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge. In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and they transplant themselves from one atom to the next in a chain in the action of an electrical current. When electrons do not flow, their negative charge logically must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it). The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained
that ions are atoms that have a surplus or shortage of electrons. Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a liquid, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces. Thus the positive electrification in Thomson's model was a temporary concept. Thomson's model was incomplete: it could not predict any of the known properties of the atom such as emission spectra or valencies. In 1906, Robert A. Millikan and Harvey Fletcher performed the oil drop experiment in which they measured the charge of an electron to be about -1.6 × 10−19 coulombs, a value now defined as -1 e. Since the hydrogen ion and the electron were known to be indivisible and a hydrogen atom is neutral in charge, it followed that the positive charge in hydrogen was equal to this value, i.e. 1 e. == Discovery of the nucleus == Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by
certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons.: 304 Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium,
etc.). Amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's numeric sequence in the order of atomic weights. The sequence number came to be called the atomic number, and it replaced atomic weight in organizing the periodic table. == Bohr model == Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom.: 19 Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910: 197 and the 1912 John William Nicholson atomic model with quantized angular momentum as h/2π. The dynamical structure of these models was still classical, but in 1913, Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., the radius of its orbit) being proportional to its energy.: 197 Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra). In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of
He+.: 197 He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915.: 91 Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms. == Discovery of isotopes == While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties. That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons. == Discovery of the proton == Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that
all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion was equal to the negative charge of a single electron. In an April 1911 paper concerning his studies on alpha particle scattering, Ernest Rutherford estimated that the charge of an atomic nucleus, expressed as a multiplier of hydrogen's nuclear charge (qe), is roughly half the atom's atomic weight. In June 1911, Van den Broek noted that on the periodic table, each successive chemical element increased in atomic weight on average by 2, which in turn suggested that each successive element's nuclear charge increased by 1 qe. In 1913, van den Broek further proposed that the electric charge of an atom's nucleus, expressed as a multiplier of the elementary charge, is equal to the element's sequential position on the periodic table. Rutherford defined this position as being the element's atomic number. In 1913, Henry Moseley measured the X-ray emissions of all the elements on the periodic table and found that the frequency of the X-ray emissions was a mathematical function of the element's atomic number and the charge of a hydrogen nucleus (see Moseley's law). In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off. These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name
"proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920. The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element. During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons. == Discovery of the neutron == Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis would have explained why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay.
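The bookkeeping behind the discarded "nuclear electron" hypothesis can be made concrete with a short Python sketch. This is an illustrative aside, not from the original text: the function names are our own, and the sketch simply compares the pre-1932 accounting (A protons plus A − Z nuclear electrons) with the modern one (Z protons plus A − Z neutrons) for a nucleus of mass number A and charge number Z.

```python
def old_model(A, Z):
    """Pre-neutron view: A protons plus (A - Z) 'nuclear electrons'."""
    protons = A
    nuclear_electrons = A - Z
    net_charge = protons - nuclear_electrons  # each nuclear electron cancels one proton
    return protons, nuclear_electrons, net_charge

def modern_model(A, Z):
    """Post-1932 view: Z protons plus (A - Z) neutrons, no nuclear electrons."""
    protons = Z
    neutrons = A - Z
    net_charge = protons  # neutrons carry no charge
    return protons, neutrons, net_charge

# Helium-4 (A = 4, Z = 2): both accountings yield the same mass number
# and the same net nuclear charge of 2+, which is why the old hypothesis
# survived until the neutron was actually detected.
print(old_model(4, 2))     # (4, 2, 2)
print(modern_model(4, 2))  # (2, 2, 2)
```

Both models reproduce the observed charge and (approximate) mass, since an electron's mass is negligible next to a proton's; only Chadwick's 1932 experiments distinguished them.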
Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and electron fused together, because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron. == Modern quantum mechanical models == In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation
that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment. A consequence of describing particles as waveforms rather than points is that it is mathematically impossible to calculate with precision both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, a concept first introduced by Werner Heisenberg in 1927. Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation. Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which
contains just two electrons—numerical methods are used to solve the Schrödinger equation. Qualitatively the shape of the atomic orbitals of multi-electron atoms resembles the states of the hydrogen atom. The Pauli principle requires the distribution of these electrons within the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules.: 182 == See also == == Footnotes == == Bibliography == Feynman, R.P.; Leighton, R.B.; Sands, M. (1963). The Feynman Lectures on Physics. Vol. 1. ISBN 978-0-201-02116-5. Andrew G. van Melsen (1960) [First published 1952]. From Atomos to Atom: The History of the Concept Atom. Translated by Henry J. Koren. Dover Publications. ISBN 0-486-49584-1. J. P. Millington (1906). John Dalton. J. M. Dent & Co. (London); E. P. Dutton & Co. (New York). Jaume Navarro (2012). A History of the Electron: J. J. and G. P. Thomson. Cambridge University Press. ISBN 978-1-107-00522-8. Trusted, Jennifer (1999). The Mystery of Matter. MacMillan. ISBN 0-333-76002-6. Bernard Pullman (1998). The Atom in the History of Human Thought. Translated by Axel Reisinger. Oxford University Press. ISBN 0-19-511447-7. Jean Perrin (1910) [1909]. Brownian Movement and Molecular Reality. Translated by F. Soddy. Taylor and Francis. Ida Freund (1904). The Study of Chemical Composition. Cambridge University Press. Thomas Thomson (1807). A System of Chemistry: In Five Volumes, Volume 3. John Brown. Thomas Thomson (1831). The History of Chemistry, Volume 2. H. Colburn, and R. Bentley. John Dalton (1808). A New System of Chemical Philosophy vol. 1. John Dalton (1817). A New System of Chemical Philosophy vol. 2. Stanislao Cannizzaro (1858). Sketch of a Course of Chemical Philosophy. The Alembic Club. == Further reading ==
Charles Adolphe Wurtz (1881) The Atomic Theory, D. Appleton and Company, New York. Alan J. Rocke (1984) Chemical Atomism in the Nineteenth Century: From Dalton to Cannizzaro, Ohio State University Press, Columbus (open access full text at http://digital.case.edu/islandora/object/ksl%3Ax633gj985). == External links == Atomism by S. Mark Cohen. Atomic Theory – detailed information on atomic theory with respect to electrons and electricity. The Feynman Lectures on Physics Vol. I Ch. 1: Atoms in Motion
{ "page_id": 2844, "source": null, "title": "History of atomic theory" }
The Aberdeen Bestiary (Aberdeen University Library, Univ Lib. MS 24) is a 12th-century English illuminated manuscript bestiary that was first listed in 1542 in the inventory of the Old Royal Library at the Palace of Westminster. Due to similarities, it is often considered to be the "sister" manuscript of the Ashmole Bestiary. The connection between the ancient Greek didactic text Physiologus and similar bestiary manuscripts is also often noted. Information about the manuscript's origins and patrons is circumstantial, although the manuscript most likely originated from the 13th century and was owned by a wealthy ecclesiastical patron from northern or southern England. Currently, the Aberdeen Bestiary resides in the Aberdeen University Library in Scotland. == History == The Aberdeen Bestiary and the Ashmole Bestiary are considered by Xenia Muratova, a professor of Art History, to be "the work of different artists belonging to the same artistic milieu." Due to their "striking similarities" they are often compared and described by scholars as being "sister manuscripts." The medievalist scholar M. R. James considered the Aberdeen Bestiary "a replica of Ashmole 1511", a view echoed by many other art historians. === Provenance === The original patron of both the Aberdeen and Ashmole Bestiary was considered to be a high-ranking member of society, such as a prince, a king, another high-ranking church official, or a monastery. However, since the section related to monastery life that was commonly depicted within the Aviarium manuscript was missing, the original patron remains uncertain, but it appears less likely to have been a church member. The Aberdeen Bestiary was kept in church and monastic settings for a majority of its history. However, at some point it entered the English royal collections library. The royal Westminster Library shelf stamp of Henry VIII of England is stamped on the side of the
{ "page_id": 2853, "source": null, "title": "Aberdeen Bestiary" }
bestiary. How King Henry acquired the manuscript remains unknown, although it was probably taken from a monastery. The manuscript appears to have been well-read by the family, based on the amount of reading wear on the edges of the pages. Around the time King James of Scotland became the King of England, the bestiary was passed along to Marischal College in Aberdeen, Scotland. The manuscript is in fragmented condition, as many illuminations on folios were removed individually as miniatures, likely not for monetary gain but possibly for personal reasons. The manuscript is currently in the Aberdeen Library in Scotland, where it has remained since 1542. == Description == === Materials === The Aberdeen bestiary is a gilded, decorated manuscript featuring large miniatures and some of the finest pigment, parchment and gold leaf from its time. Some portions of the manuscript, such as folio 8 recto, even feature tarnished silver leaf. The original patron was wealthy enough to afford such materials so that the artists and scribes could enjoy creative freedom while creating the manuscripts. The artists were professionally trained and experimented with new techniques, such as heavy washes mixed with light washes, dark thick lines, and the use of contrasting color. The aqua color that is in the Aberdeen Bestiary is not present in the Ashmole Bestiary. The Aberdeen manuscript is loaded with filigree flora design and champie style gold leaf initials. Canterbury is considered to be the original location of manufacture, as the location was well known for manufacturing high-end luxury books during the thirteenth century. Its similarities with the Canterbury Paris Psalter tree style further support this relation. === Style === The craftsmanship of both the Ashmole and Aberdeen bestiary suggests similar artists and scribes. Both the Ashmole and Aberdeen bestiary were probably made within 10
years of each other due to their stylistic and material similarities and the fact that both are crafted with the finest materials of their time. Stylistically both manuscripts are very similar, but the Aberdeen has figures that are both more voluminous and less energetic than those of the Ashmole Bestiary. The color usage has been suggested as potentially Biblical in meaning, as color usage had different interpretations in the early 13th century. The overall style of the human figures as well as the color usage is very reminiscent of Roman mosaic art, especially in the attention to detail in the drapery. Circles and ovals semi-realistically depict highlights throughout the manuscript. Animals are shaded in a Romanesque fashion with the use of bands to depict volume and form, which is similar to the earlier 12th-century Bury Bible made at Bury St Edmunds. This bestiary also shows stylistic similarities with the Paris Psalters of Canterbury. The Aviary section is similar to the Aviarium, a well-known 12th-century monastic text. The deviation from traditional color usage can be seen in the tiger, satyr, and unicorn folios as well as many other folios. The satyr in the Aberdeen Bestiary is almost identical to the satyr section of the slightly older Worksop Bestiary. There are small color notes in the Aberdeen Bestiary that are often seen in similar manuscripts dating between 1175 and 1250, which help indicate that it was made near the year 1200 or 1210. These notes are similar to many other side notes written on the sides of pages throughout the manuscript and were probably made by the painter to remind himself of special circumstances; these notes occur irregularly throughout the text. === Illuminations === Folios 1 recto to 3 recto depict Genesis 1:1–25, which is represented
with a large full-page illumination of a Biblical Creation scene in the manuscript. Folio 5 recto shows Adam, a large figure surrounded by gold leaf and towering over others, with the theme of 'Adam naming the animals'; this starts the compilation of the bestiary portion within the manuscript. Folio 5 verso depicts quadrupeds, livestock, wild beasts, and the concept of the herd. Folios 7 to 18 recto depict large cats and other beasts such as wolves, foxes and dogs. Many pages from the start of the manuscript's bestiary section, such as 11 verso featuring a hyena, show small pin holes which were likely used to map out and copy artwork to a new manuscript. Folios 20 verso to 28 recto depict livestock such as sheep, horses, and goats. Small animals like cats and mice are depicted on folios 24 to 25. Pages 25 recto to 63 recto feature depictions of birds, and folios 64 recto to 80 recto depict reptiles, worms and fish. 77 recto to 91 verso depict trees and plants and other elements of nature such as the nature of man. The end folios of the manuscript, from 93 recto to 100 recto, depict the nature of stones and rocks. Seventeen of the Aberdeen manuscript pages are pricked for transfer in a process called pouncing, as clearly seen in the hyena folio as well as folios 3 recto and 3 verso depicting Genesis 1:26–28, 31; 2:1–2. The pricking must have been done shortly after the creation of the Adam and Eve folio pages, since there is no damage done to nearby pages. Other pages used for pouncing include folios 7 recto to 18 verso, which is the beginning of the beasts portion of the manuscript and likely depicted lions as well as other big cats such
as leopards, panthers and their characteristics, as well as other large wild and domesticated beasts. === Missing Folios === On folio 6 recto there was likely intended to be a depiction of a lion, as in the Ashmole bestiary, but in this instance the pages were left blank, although there are markings of margin lines. In comparison to the Ashmole bestiary, some leaves are missing at folio 9 verso which likely contained imagery of the antelope (Antalops), unicorn (Unicornis), lynx (Lynx), griffin (Gryps), and part of the elephant (Elephans). Near folio 21 verso, illuminations of the ox (Bos), camel (Camelus), dromedary (Dromedarius), ass (Asinus), onager (Onager) and part of the horse (Equus) are also assumed to be missing. Also missing from folio 15 recto on are some leaves which should have contained the crocodile (Crocodilus), manticore (Mantichora) and part of the parandrus (Parandrus). These missing folios are assumed from comparisons between the Ashmole and other related bestiaries. == Contents == Folio 1 recto : Genesis creation narrative of heaven and earth (Genesis, 1: 1–5). (Full page) Folio 1 verso : Creation of the waters and the firmament (Genesis, 1: 6–8) Folio 2 recto : Creation of the birds and fish (Genesis, 1: 20–23) Folio 2 verso : Creation of the animals (Genesis, 1: 24–25) Folio 3 recto : Creation of man (Genesis, 1: 26–28, 31; 2: 1–2) Folio 5 recto : Adam names the animals (Isidore of Seville, Etymologiae, Book XII, i, 1–2) Folio 5 verso : Animal (Animal) (Isidore of Seville, Etymologiae, Book XII, i, 3) Folio 5 verso : Quadruped (Quadrupes) (Isidore of Seville, Etymologiae, Book XII, i, 4) Folio 5 verso : Livestock (Pecus) (Isidore of Seville, Etymologiae, Book XII, i, 5–6) Folio 5 verso : Beast of burden (Iumentum) (Isidore of Seville, Etymologiae, Book XII, i, 7) Folio 5 verso
: Herd (Armentum) (Isidore of Seville, Etymologiae, Book XII, i, 8) === Beasts (Bestiae) === Folio 7 recto : Lion (Leo) (Physiologus, Chapter 1; Isidore of Seville, Etymologiae, Book XII, ii, 3–6) Folio 8 recto : Tiger (Tigris) (Isidore of Seville, Etymologiae, Book XII, ii, 7) Folio 8 verso : Pard (Pard) (Isidore of Seville, Etymologiae, Book XII, ii, 10–11) Folio 9 recto : Panther (Panther) (Physiologus, Chapter 16; Isidore of Seville, Etymologiae, Book XII, ii, 8–9) Folio 10 recto : Elephant (Elephans) (Isidore of Seville, Etymologiae, Book XII, ii, 14; Physiologus, Chapter 43; Ambrose, Hexaemeron, Book VI, 35; Solinus, Collectanea rerum memorabilium, xxv, 1–7) Folio 11 recto : Beaver (Castor) Folio 11 recto : Ibex (Ibex) (Hugh of Fouilloy, II, 15) Folio 11 verso : Hyena (Yena) (Physiologus, Chapter 24; Solinus, Collectanea rerum memorabilium, xxvii, 23–24) Folio 12 recto : Crocotta (Crocotta) (Solinus, Collectanea rerum memorabilium, xxvii, 26) Folio 12 recto : Bonnacon (Bonnacon) (Solinus, Collectanea rerum memorabilium, xl, 10–11) Folio 12 verso : Ape (Simia) Folio 13 recto : Satyr (Satyrs) Folio 13 recto : Deer (Cervus) Folio 14 recto : Goat (Caper) Folio 14 verso : Wild goat (Caprea) Folio 15 recto : Monoceros (Monoceros) (Solinus, Collectanea rerum memorabilium, lii, 39–40) Folio 15 recto : Bear (Ursus) Folio 15 verso : Leucrota (Leucrota) (Solinus, Collectanea rerum memorabilium, lii, 34) Folio 16 recto : Parandrus (Parandrus) (Solinus, Collectanea rerum memorabilium, xxx, 25) Folio 16 recto : Fox (Vulpes) Folio 16 verso : Yale (Eale) (Solinus, Collectanea rerum memorabilium, lii, 35) Folio 16 verso : Wolf (Lupus) Folio 18 recto : Dog (Canis) === Livestock (Pecora) === Folio 20 verso : Sheep (Ovis) (Isidore of Seville, Etymologiae, Book XII, i, 9; Ambrose, Hexaemeron, Book VI, 20) Folio 21 recto : Wether (Vervex) (Isidore of Seville, Etymologiae, Book XII, i,
10) Folio 21 recto : Ram (Aries) (Isidore of Seville, Etymologiae, Book XII, i, 11) Folio 21 recto : Lamb (Agnus) (Isidore of Seville, Etymologiae, Book XII, i, 12; Ambrose, Hexaemeron, Book VI, 28) Folio 21 recto : He-goat (Hircus) (Isidore of Seville, Etymologiae, Book XII, i, 14) Folio 21 verso : Kid (Hedus) (Isidore of Seville, Etymologiae, Book XII, i, 13) Folio 21 verso : Boar (Aper) (Isidore of Seville, Etymologiae, Book XII, i, 27) Folio 21 verso : Bullock (Iuvencus) (Isidore of Seville, Etymologiae, Book XII, i, 28) Folio 21 verso : Bull (Taurus) (Isidore of Seville, Etymologiae, Book XII, i, 29) Folio 22 recto : Horse (Equus) (Isidore of Seville, Etymologiae, Book XII, i, 41–56; Hugh of Fouilloy, III, xxiii) Folio 23 recto : Mule (Mulus) (Isidore of Seville, Etymologiae, Book XII, i, 57–60) === Small animals (Minuta animala) === Folio 23 verso : Cat (Musio) (Isidore of Seville, Etymologiae, Book XII, ii, 38) Folio 23 verso : Mouse (Mus) (Isidore of Seville, Etymologiae, Book XII, iii, 1) Folio 23 verso : Weasel (Mustela) (Isidore of Seville, Etymologiae, Book XII, iii, 2; Physiologus, Chapter 21) Folio 24 recto : Mole (Talpa) (Isidore of Seville, Etymologiae, Book XII, iii, 5) Folio 24 recto : Hedgehog (Ericius) (Isidore of Seville, Etymologiae, Book XII, iii, 7; Ambrose, Hexaemeron, VI, 20) Folio 24 verso : Ant (Formica) (Physiologus, 12; Ambrose, Hexaemeron, Book VI, 16, 20) === Birds (Aves) === Folio 25 recto : Bird (Avis) Folio 25 verso : Dove (Columba) Folio 26 recto : Dove and hawk (Columba et Accipiter) Folio 26 verso : Dove (Columba) Folio 29 verso : North wind and South wind (Aquilo et Auster ventus) Folio 30 recto : Hawk (Accipiter) Folio 31 recto : Turtle dove (Turtur) Folio 32 verso : Palm tree (Palma) Folio
33 verso : Cedar (Cedrus) Folio 34 verso : Pelican (Pellicanus) - Orange and blue Folio 35 verso : Night heron (Nicticorax) Folio 36 recto : Hoopoe (Epops) Folio 36 verso : Magpie (Pica) Folio 37 recto : Raven (Corvus) Folio 38 verso : Cock (Gallus) Folio 41 recto : Ostrich (Strutio) Folio 44 recto : Vulture (Vultur) Folio 45 verso : Crane (Grus) Folio 46 verso : Kite (Milvus) Folio 46 verso : Parrot (Psitacus) Folio 47 recto : Ibis (Ibis) Folio 47 verso : Swallow (Yrundo) Folio 48 verso : Stork (Ciconia) Folio 49 verso : Blackbird (Merula) Folio 50 recto : Eagle-owl (Bubo) Folio 50 verso : Hoopoe (Hupupa) Folio 51 recto : Little owl (Noctua) Folio 51 recto : Bat (Vespertilio) Folio 51 verso : Jay (Gragulus) Folio 52 verso : Nightingale (Lucinia) Folio 53 recto : Goose (Anser) Folio 53 verso : Heron (Ardea) Folio 54 recto : Partridge (Perdix) Folio 54 verso : Halcyon (Alcyon) Folio 55 recto : Coot (Fulica) Folio 55 recto : Phoenix (Fenix) Folio 56 verso : Caladrius (Caladrius) Folio 57 verso : Quail (Coturnix) Folio 58 recto : Crow (Cornix) Folio 58 verso : Swan (Cignus) Folio 59 recto : Duck (Anas) Folio 59 verso : Peacock (Pavo) Folio 61 recto : Eagle (Aquila) Folio 63 recto : Bee (Apis) === Snakes and Reptiles (Serpentes) === Folio 64 verso : Peridexion tree (Perindens) Folio 65 verso : Snake (Serpens) Folio 65 verso : Dragon (Draco) Folio 66 recto : Basilisk (Basiliscus) Folio 66 verso : Regulus (Regulus) Folio 66 verso : Viper (Vipera) Folio 67 verso : Asp (Aspis) Folio 68 verso : Scitalis (Scitalis) Folio 68 verso : Amphisbaena (Anphivena) Folio 68 verso : Hydrus (Ydrus) Folio 69 recto : Boa (Boa) Folio 69 recto : Iaculus (Iaculus)
Folio 69 verso : Siren (Siren) Folio 69 verso : Seps (Seps) Folio 69 verso : Dipsa (Dipsa) Folio 69 verso : Lizard (Lacertus) Folio 69 verso : Salamander (Salamandra) Folio 70 recto : Saura (Saura) Folio 70 verso : Newt (Stellio) Folio 71 recto : Of the nature of Snakes (De natura serpentium) === Worms (Vermes) === Folio 72 recto : Worms (Vermis) === Fish (Pisces) === Folio 72 verso : Fish (Piscis) Folio 73 recto : Whale (Balena) Folio 73 recto : Serra (Serra) Folio 73 recto : Dolphin (Delphinus) Folio 73 verso : Sea-pig (Porcus marinus) Folio 73 verso : Crocodile (Crocodrillus) Folio 73 verso : Mullet (Mullus) Folio 74 recto : Fish (Piscis) === Trees and Plants (Arbores) === Folio 77 verso : Tree (Arbor) Folio 78 verso : Fig (Ficus) Folio 79 recto : Again of trees (Item de arboribus) Folio 79 recto : Mulberry Folio 79 recto : Sycamore Folio 79 recto : Hazel Folio 79 recto : Nuts Folio 79 recto : Almond Folio 79 recto : Chestnut Folio 79 recto : Oak Folio 79 verso : Beech Folio 79 verso : Carob Folio 79 verso : Pistachio Folio 79 verso : Pitch pine Folio 79 verso : Pine Folio 79 verso : Fir Folio 79 verso : Cedar Folio 80 recto : Cypress Folio 80 recto : Juniper Folio 80 recto : Plane Folio 80 recto : Oak Folio 80 recto : Ash Folio 80 recto : Alder Folio 80 verso : Elm Folio 80 verso : Poplar Folio 80 verso : Willow Folio 80 verso : Osier Folio 80 verso : Box === Nature of Man (Natura hominis) === Folio 80 verso : Isidorus on the nature of man (Ysidorus de natura hominis) Folio 89 recto : Isidorus on the parts
{ "page_id": 2853, "source": null, "title": "Aberdeen Bestiary" }
of man's body (Ysidorus de membris hominis) Folio 91 recto : Of the age of man (De etate hominis) === Stones (Lapides) === Folio 93 verso : Fire-bearing stone (Lapis ignifer) Folio 94 verso : Adamas stone (Lapis adamas) Folio 96 recto : Myrmecoleon (Mermecoleon) Folio 96 verso : Verse (Versus) Folio 97 recto : Stone in the foundation of the wall (Lapis in fundamento muri) Folio 97 recto : The first stone, Jasper Folio 97 recto : The second stone, Sapphire Folio 97 recto : The third stone, Chalcedony Folio 97 verso : The fourth stone, Smaragdus Folio 98 recto : The fifth stone, Sardonyx Folio 98 recto : The sixth stone, Sard Folio 98 verso : The seventh stone, Chrysolite Folio 98 verso : The eighth stone, Beryl Folio 99 recto : The ninth stone, Topaz Folio 99 verso : The tenth stone, Chrysoprase Folio 99 verso : The eleventh stone, Hyacinth Folio 100 recto : The twelfth stone, Amethyst Folio 100 recto : Of stones and what they can do (De effectu lapidum) == Gallery == == See also == Bestiary List of medieval bestiaries Physiologus Ashmole Bestiary Paris Psalter Aviarium == References == == External links == The Aberdeen Bestiary Project - University of Aberdeen, Online version of the bestiary. David Badke, The Medieval Bestiary : Manuscript: Univ. Lib. MS 24 (Aberdeen Bestiary)
Well-known types of reactions that involve inorganic compounds include:

Alkylation
Alkyne trimerisation
Alkyne metathesis
Aminolysis
Amination
Arylation
Barbier reaction
Beta-hydride elimination
Birch reduction
Bönnemann cyclization
Bromination
Buchwald–Hartwig coupling
Cadiot–Chodkiewicz coupling
Calcination
Carbometalation
Carbothermal reduction
Carbonation
Carbonylation
Castro–Stephens coupling
Clemmensen reduction
Chain walking
Chan–Lam coupling
Chlorination
Comproportionation
C–C coupling
C–H activation
Cyanation
Cyclometalation
Decarbonylation
Decarboxylation
Dehydration
Dehalogenation
Dehydrogenation
Dehydrohalogenation
Deprotonation
Desilylation
Dimerisation
Disproportionation
Dötz reaction
Eder reaction
Electromerism
Electron transfer (inner sphere and outer sphere)
Étard reaction
Fenton oxidation
Fischer–Tropsch process
Fluorination
Formylation
Fowler process
Fukuyama coupling
Glaser coupling
Gomberg–Bachmann reaction
Haber–Weiss reaction
Halcon process
Halogenation
Hay coupling
Heck reaction
Heck–Matsuda reaction
Hiyama coupling
Hofmann-Sand reaction
Homolysis
Huisgen cycloaddition
Hydride reduction
Hydroamination
Hydration
Hydroboration
Hydrocarboxylation
Hydrocyanation
Hydrodesulfurization
Hydroformylation
Hydrogenation
Hydrohalogenation
Hydrolysis
Hydrometalation
Hydrosilylation
Iodination
Isomerisation
Jones oxidation
Kulinkovich reaction
Kumada coupling
Lemieux–Johnson oxidation
Ley oxidation
Linkage isomerization
Luche reduction
McMurry reaction
Meerwein–Ponndorf–Verley reduction
Mercuration
Methylation
Migratory insertion
Negishi coupling
Nicholas reaction
Nitrosylation
Noyori asymmetric hydrogenation
Olefin polymerization
Oppenauer oxidation
Oxidation
Oxidative addition
Oxygenation
Oxymercuration reaction
Pauson–Khand reaction
Photodissociation
Pseudorotation
Protonation
Protonolysis
Proton-coupled electron transfer
Racemization
Redox reactions (see list of oxidants and reductants)
Reduction
Reductive elimination
Reppe synthesis
Riley oxidation
Salt metathesis
Sarett oxidation
Sharpless epoxidation
Shell higher olefin process
Silylation
Simmons–Smith reaction
Sonogashira coupling
Staudinger reaction
Stille reaction
Sulfidation
Suzuki reaction
Transmetalation
Ullmann reaction
Upjohn dihydroxylation
Wacker process
Water gas shift reaction
Water oxidation
Wurtz coupling
Ziegler-Natta polymerization

== See also ==

List of organic reactions
Named inorganic compounds
List of inorganic compounds
Inorganic compounds by element
{ "page_id": 32770854, "source": null, "title": "List of inorganic reactions" }
Autodisplay is a genetic engineering technique used to display a protein of interest on the outer surface of gram-negative bacteria. This is accomplished by attaching the protein of interest to a protein known to localize to the surface of the bacterial outer membrane. First introduced in the 1990s, the technique is now widely used in research science and in biotechnology to manipulate bacteria for protein studies, drug discovery, and vaccine development.

== Mechanism ==

Autodisplay is based on the mechanism of bacterial autotransporter proteins. These proteins have a signal peptide at the N-terminus which allows them to be translocated across the bacterial inner membrane and into the periplasm. In the periplasm, a β-barrel domain at the protein's C-terminus inserts into the bacterial outer membrane, forming a channel through which the rest of the protein can pass. The rest of the protein threads through this channel across the outer membrane and to the surface of the bacterium. Once it reaches the surface, the protein may stay connected to the membrane-bound β-barrel, or it may be cleaved from the membrane and secreted into the extracellular environment. There are several known autotransporter pathways. Autodisplay uses this autotransporter system by inserting a protein of interest between the N-terminal signal peptide and the C-terminal β-barrel of an autotransporter. This allows the protein of interest to be carried to the bacterial surface by the regular autotransporter mechanism.

== History ==

Autodisplay is based on the autotransporter proteins of gram-negative bacteria, which were first discovered in the late 1980s, when the IgA1 protease of Neisseria gonorrhoeae was described. By the early 1990s, several groups had attempted to attach heterologous proteins to the IgA1 protease and express the product in Escherichia coli; however, the N. gonorrhoeae IgA1 protease was not expressed well in E. coli,
{ "page_id": 39848744, "source": null, "title": "Autodisplay" }
limiting the usefulness of this system. Subsequently, the IgA1 protease was replaced by an autotransporter native to E. coli, namely the AIDA-1 protein from enteropathogenic E. coli. This was expressed at much higher levels in E. coli than the previously used N. gonorrhoeae protein had been, allowing the system to be used for larger-scale biotechnological applications. Autodisplay was invented with goals such as whole-cell catalysis (especially with substrates which cannot cross the membranes of bacteria), the expression of peptides/proteins without an attached purification step, and the expression of immobilized peptides/proteins.

== See also ==

Arming yeast

== References ==

== External links ==

Mechanism
In pharmacology, Schild regression analysis, based upon the Schild equation (both named for Heinz Otto Schild), is a tool for studying the effects of agonists and antagonists on the response caused by the receptor, or on ligand–receptor binding.

== Concept ==

Dose–response curves can be constructed to describe response or ligand–receptor complex formation as a function of the ligand concentration. Antagonists make it harder to form these complexes by inhibiting interactions of the ligand with its receptor. This is seen as a change in the dose–response curve: typically a rightward shift or a lowered maximum. A reversible competitive antagonist should cause a rightward shift in the dose–response curve, such that the new curve is parallel to the old one and the maximum is unchanged. This is because reversible competitive antagonists are surmountable antagonists. The magnitude of the rightward shift can be quantified with the dose ratio, r. The dose ratio r is the dose of agonist required for a half-maximal response with the antagonist B present, divided by the dose of agonist required for a half-maximal response without antagonist ("control"). In other words, it is the ratio of the EC50s of the inhibited and uninhibited curves. Thus, r represents both the strength of an antagonist and the concentration of the antagonist that was applied. An equation derived from the Gaddum equation can be used to relate r to $[\mathrm{B}]$, as follows:

$$r = 1 + \frac{[\mathrm{B}]}{K_\mathrm{B}}$$

where:

r is the dose ratio
$[\mathrm{B}]$ is the concentration of the antagonist
$K_\mathrm{B}$ is the equilibrium constant of the binding of the antagonist to the receptor

A Schild plot is a double logarithmic plot, typically with
{ "page_id": 9374505, "source": null, "title": "Schild equation" }
$\log_{10}(r-1)$ as the ordinate and $\log_{10}[\mathrm{B}]$ as the abscissa. This is done by taking the base-10 logarithm of both sides of the previous equation after subtracting 1:

$$\log_{10}(r-1) = \log_{10}[\mathrm{B}] - \log_{10}(K_\mathrm{B})$$

This equation is linear with respect to $\log_{10}[\mathrm{B}]$, allowing graphs to be constructed easily without computation; this was particularly valuable before the use of computers in pharmacology became widespread. The y-intercept of the equation represents the negative logarithm of $K_\mathrm{B}$ and can be used to quantify the strength of the antagonist. These experiments must be carried out over a very wide concentration range (hence the logarithmic scale), as the mechanisms differ over a large scale, such as at high concentrations of drug. The fitting of the Schild plot to observed data points can be done with regression analysis.

== Schild regression for ligand binding ==

Although most experiments use cellular response as a measure of the effect, the effect is, in essence, a result of the binding kinetics; so, in order to illustrate the mechanism, ligand binding is used. A ligand A will bind to a receptor R according to an equilibrium constant:

$$K_d = \frac{k_{-1}}{k_1}$$

Although the equilibrium constant is more meaningful, texts often mention its inverse, the affinity constant ($K_\text{aff} = k_1/k_{-1}$): a better binding means an increase of binding affinity. The equation for simple ligand binding to a single homogeneous receptor is $[\mathrm{AR}] = [\mathrm{R}]_t[\mathrm{A}]$
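The Schild regression described above reduces the estimation of $K_\mathrm{B}$ to fitting a straight line: $\log_{10}(r-1)$ against $\log_{10}[\mathrm{B}]$, with the intercept giving $-\log_{10}(K_\mathrm{B})$ (the pA2 for a simple competitive antagonist). The sketch below is a minimal plain-Python illustration; the antagonist concentrations and dose ratios are invented values generated from an assumed $K_\mathrm{B}$ of 10 nM so the fit can be checked against a known answer.

```python
import math

# Invented illustrative data: dose ratios r generated from the Schild
# equation with an assumed K_B of 1e-8 M (10 nM).
K_B_true = 1e-8
B = [1e-8, 3e-8, 1e-7, 3e-7, 1e-6]      # antagonist concentrations (M)
r = [1.0 + b / K_B_true for b in B]     # Schild equation: r = 1 + [B]/K_B

# Schild plot coordinates: log10(r - 1) versus log10[B].
x = [math.log10(b) for b in B]
y = [math.log10(ri - 1.0) for ri in r]

# Ordinary least-squares fit of y = slope * x + intercept.
n = len(x)
mx = sum(x) / n
my = sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

# From log10(r - 1) = log10[B] - log10(K_B), the y-intercept equals
# -log10(K_B), i.e. the pA2 of a simple competitive antagonist.
pA2 = intercept
K_B_est = 10.0 ** (-pA2)

print(f"slope = {slope:.3f}, pA2 = {pA2:.3f}, K_B ~ {K_B_est:.2e} M")
```

For real data, a fitted slope significantly different from unity indicates that the antagonism is not simple competitive, and the intercept should then not be read as $-\log_{10}(K_\mathrm{B})$.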