The World Uranium Hearing was held in Salzburg, Austria, in September 1992.[1] Anti-nuclear speakers from all continents, including indigenous speakers and scientists, testified to the health and environmental problems of uranium mining and processing, nuclear power, nuclear weapons, nuclear tests, and radioactive waste disposal.[2]
Speakers at the 1992 Hearing included Thomas Banyacya, Katsumi Furitsu, Manuel Pino and Floyd Red Crow Westerman. They said they were deeply dismayed by the atomic bombings of Hiroshima and Nagasaki and highlighted what they called the inherently destructive nature of all phases of the nuclear supply chain. They recalled the disastrous impact of nuclear weapons testing in places such as the Nevada Test Site, Bikini Atoll and Eniwetok, Tahiti, Maralinga, and Central Asia. They highlighted the threat of radioactive contamination to all peoples, especially indigenous communities, and said that their survival requires self-determination and an emphasis on spiritual and cultural values. They also advocated increased commercialization of renewable energy.[3]
The proceedings were published as a book, Poison Fire, Sacred Earth: Testimonies, Lectures, Conclusions.[4][5] The outcome document, the Declaration of Salzburg, was accepted by the United Nations Working Group on Indigenous Populations.[6]
|
https://en.wikipedia.org/wiki/World_Uranium_Hearing
|
The world line (or worldline) of an object is the path that the object traces in four-dimensional spacetime. It is an important concept of modern physics, and particularly of theoretical physics.
The concept of a "world line" is distinguished from concepts such as an "orbit" or a "trajectory" (e.g., a planet's orbit in space or the trajectory of a car on a road) by the inclusion of the dimension of time. A path that appears straight when only space is considered becomes a curve in spacetime, and treating time as a coordinate on an equal footing with position is what reveals the nature of special relativity and of gravitational interactions.
The idea of world lines originated in physics and was pioneered by Hermann Minkowski. The term is now used most often in the context of relativity theories (i.e., special relativity and general relativity).
A world line of an object (generally approximated as a point in space, e.g., a particle or observer) is the sequence of spacetime events corresponding to the history of the object. A world line is a special type of curve in spacetime; an equivalent definition, explained below, is that a world line is either a time-like or a null curve in spacetime. Each point of a world line is an event that can be labeled with the time and the spatial position of the object at that time.
For example, the orbit of the Earth in space is approximately a circle, a three-dimensional (closed) curve in space: the Earth returns every year to the same point in space relative to the sun. However, it arrives there at a different (later) time. The world line of the Earth is therefore helical in spacetime (a curve in a four-dimensional space) and does not return to the same point.
Spacetime is the collection of events, together with a continuous and smooth coordinate system identifying the events. Each event can be labeled by four numbers: a time coordinate and three space coordinates; thus spacetime is a four-dimensional space. The mathematical term for spacetime is a four-dimensional manifold (a topological space that locally resembles Euclidean space near each point). The concept may be applied as well to a higher-dimensional space. For easy visualization of four dimensions, two space coordinates are often suppressed. An event is then represented by a point in a Minkowski diagram, which is a plane usually plotted with the time coordinate, say $t$, vertically, and the space coordinate, say $x$, horizontally. As expressed by F. R. Harvey:
A world line traces out the path of a single point in spacetime. A world sheet is the analogous two-dimensional surface traced out by a one-dimensional line (like a string) traveling through spacetime. The world sheet of an open string (with loose ends) is a strip; that of a closed string (a loop) resembles a tube.
If the object is not approximated as a mere point but has extended volume, it traces out not a world line but rather a world tube.
A one-dimensional line or curve can be represented by the coordinates as a function of one parameter. Each value of the parameter corresponds to a point in spacetime, and varying the parameter traces out a line. So in mathematical terms a curve is defined by four coordinate functions $x^a(\tau),\ a = 0, 1, 2, 3$ (where $x^0$ usually denotes the time coordinate) depending on one parameter $\tau$. A coordinate grid in spacetime is the set of curves one obtains if three of the four coordinate functions are set to a constant.
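As an illustrative sketch, the following Python snippet parametrizes such a curve: the helical world line of a body in uniform circular motion, echoing the Earth example above. The radius and period are arbitrary illustrative values, not data from any source.

```python
# A minimal sketch: a world line as four coordinate functions x^a(tau),
# with x^0 the time coordinate. Radius and angular frequency are illustrative.
import numpy as np

def world_line(tau, radius=1.0, omega=2 * np.pi):
    """Return the four coordinates (x0, x1, x2, x3) along a helical world line."""
    x0 = tau                              # time coordinate
    x1 = radius * np.cos(omega * tau)     # spatial circle in the x1-x2 plane
    x2 = radius * np.sin(omega * tau)
    x3 = np.zeros_like(tau)               # motion confined to a plane
    return np.stack([x0, x1, x2, x3])

tau = np.linspace(0.0, 2.0, 201)          # two full periods of the parameter
w = world_line(tau)
# After one period (tau = 1.0, index 100) the spatial coordinates return to
# their starting values, but the time coordinate does not: a helix, not a circle.
print(np.allclose(w[1:, 0], w[1:, 100]), w[0, 0] != w[0, 100])
```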
Sometimes, the term world line is used informally for any curve in spacetime, but this terminology causes confusion. More properly, a world line is a curve in spacetime that traces out the (time) history of a particle, observer or small object. One usually uses the proper time of an object or an observer as the curve parameter $\tau$ along the world line.
A curve that consists of a horizontal line segment (a line at constant coordinate time) may represent a rod in spacetime and would not be a world line in the proper sense. The parameter simply traces the length of the rod.
A line at constant space coordinate (a vertical line using the convention adopted above) may represent a particle at rest (or a stationary observer). A tilted line represents a particle with a constant coordinate speed (constant change in space coordinate with increasing time coordinate). The more the line is tilted from the vertical, the larger the speed.
Two world lines that start out separately and then intersect signify a collision or "encounter". Two world lines starting at the same event in spacetime, each following its own path afterwards, may represent, e.g., the decay of a particle into two others or the emission of one particle by another.
World lines of a particle and an observer may be interconnected with the world line of a photon (the path of light) and form a diagram depicting the emission of a photon by a particle that is subsequently observed by the observer (or absorbed by another particle).
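The situations described in the preceding paragraphs are easy to draw. A minimal matplotlib sketch (with units chosen so that c = 1 and an illustrative speed of 0.5c) shows the three basic cases on a Minkowski diagram:

```python
# A minimal sketch of a Minkowski diagram: a particle at rest (vertical line),
# a uniformly moving particle (tilted line), and a photon (45-degree line).
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0.0, 5.0, 100)

plt.plot(np.zeros_like(t), t, label="particle at rest")
plt.plot(0.5 * t, t, label="particle moving at 0.5c")
plt.plot(t, t, "--", label="photon (45 degrees)")
plt.xlabel("x")
plt.ylabel("t")
plt.legend()
plt.title("World lines in a Minkowski diagram")
plt.show()
```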
The four coordinate functions $x^a(\tau),\ a = 0, 1, 2, 3$ defining a world line are real functions of a real variable $\tau$ and can simply be differentiated by the usual calculus. Without the existence of a metric (this is important to realize) one can imagine the difference between a point $p$ on the curve at the parameter value $\tau_0$ and a point on the curve a little (parameter $\tau_0 + \Delta\tau$) farther away. In the limit $\Delta\tau \to 0$, this difference divided by $\Delta\tau$ defines a vector, the tangent vector of the world line at the point $p$. It is a four-dimensional vector, defined at the point $p$. It is associated with the normal 3-dimensional velocity of the object (but it is not the same) and is therefore termed four-velocity $\vec{v}$, or in components:

$$\vec{v} = (v^0, v^1, v^2, v^3) = \left(\frac{dx^0}{d\tau}, \frac{dx^1}{d\tau}, \frac{dx^2}{d\tau}, \frac{dx^3}{d\tau}\right)$$

where the derivatives are taken at the point $p$, so at $\tau = \tau_0$.
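A minimal numerical sketch of this limiting procedure, reusing the helical world line from the earlier snippet (the parameter here is coordinate time rather than proper time, so the result is a tangent vector up to normalization):

```python
# Finite-difference approximation of the tangent vector at parameter tau0:
# (x(tau0 + dtau) - x(tau0)) / dtau, for small dtau.
import numpy as np

def world_line(tau, radius=1.0, omega=2 * np.pi):
    return np.array([tau,
                     radius * np.cos(omega * tau),
                     radius * np.sin(omega * tau),
                     0.0])

def tangent_vector(tau0, dtau=1e-6):
    """Approximate dx^a/dtau at the point p with parameter value tau0."""
    return (world_line(tau0 + dtau) - world_line(tau0)) / dtau

print(tangent_vector(0.25))   # approximately (1, -2*pi, 0, 0) at a quarter period
```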
All curves through point $p$ have a tangent vector, not only world lines. The sum of two such vectors is again a tangent vector to some other curve, and the same holds for multiplication by a scalar. Therefore, all tangent vectors at a point $p$ span a linear space, termed the tangent space at $p$. For example, taking a 2-dimensional space like the (curved) surface of the Earth, its tangent space at a specific point would be the flat approximation of the curved space.
So far a world line (and the concept of tangent vectors) has been described without a means of quantifying the interval between events. The basic mathematics is as follows: the theory of special relativity puts some constraints on possible world lines. In special relativity the description of spacetime is limited to special coordinate systems that do not accelerate (and so do not rotate either), termed inertial coordinate systems. In such coordinate systems, the speed of light is a constant. The structure of spacetime is determined by a bilinear form η, which gives a real number for each pair of events. The bilinear form is sometimes termed a spacetime metric, but since distinct events can yield the value zero, the bilinear form is not, unlike metrics in the metric spaces of mathematics, a mathematical metric on spacetime.
World lines of freely falling particles/objects are called geodesics. In special relativity these are straight lines in Minkowski space.

Often the time units are chosen such that the speed of light is represented by lines at a fixed angle, usually at 45 degrees, forming a cone with the vertical (time) axis. In general, useful curves in spacetime can be of three types (other curves would be partly one type and partly another): time-like curves, everywhere slower than the speed of light, which are possible world lines of massive particles; light-like (null) curves, which move at the speed of light at every point and are possible world lines of photons; and space-like curves, which exceed the speed of light somewhere and therefore cannot be world lines of physical objects.
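A minimal sketch of this classification, assuming the bilinear form η introduced above with signature (+, −, −, −) and units in which c = 1:

```python
# Classify tangent vectors by the sign of eta(v, v) with signature (+,-,-,-).
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski bilinear form, c = 1

def eta(u, v):
    """The bilinear form eta applied to two four-vectors."""
    return u @ ETA @ v

def classify(v, tol=1e-12):
    s = eta(v, v)
    if s > tol:
        return "time-like"
    if s < -tol:
        return "space-like"
    return "light-like (null)"

print(classify(np.array([1.0, 0.2, 0.0, 0.0])))   # slower than light: time-like
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))   # a light ray: null
print(classify(np.array([0.2, 1.0, 0.0, 0.0])))   # faster than light: space-like
```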
At a given event on a world line, spacetime (Minkowski space) is divided into three parts: the future of the event (the interior of the future light cone), the past of the event (the interior of the past light cone), and the region outside the light cone, whose events are separated from the given event by space-like intervals.
Since a world line $w(\tau) \in \mathbb{R}^4$ determines a velocity 4-vector $v = \frac{dw}{d\tau}$ that is time-like, the Minkowski form $\eta(v, x)$ determines a linear function $\mathbb{R}^4 \to \mathbb{R}$ by $x \mapsto \eta(v, x)$. Let N be the null space of this linear functional. Then N is called the simultaneous hyperplane with respect to v. The relativity of simultaneity is the statement that N depends on v. Indeed, N is the orthogonal complement of v with respect to η.
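A minimal numerical sketch: the simultaneous hyperplane N is the null space of the linear functional x ↦ η(v, x), so a basis for it can be computed directly (the boosted velocity below, with γ = 1.25, is an illustrative value):

```python
# Compute a basis of the simultaneous hyperplane N of a four-velocity v,
# i.e. the null space of x -> eta(v, x). N tilts as v changes, which is
# the relativity of simultaneity.
import numpy as np
from scipy.linalg import null_space

ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def simultaneous_hyperplane(v):
    """Columns form a basis of the eta-orthogonal complement of v."""
    return null_space((ETA @ v).reshape(1, 4))

v_rest = np.array([1.0, 0.0, 0.0, 0.0])     # observer at rest
v_boost = np.array([1.25, 0.75, 0.0, 0.0])  # observer moving at 0.6c (gamma = 1.25)
print(simultaneous_hyperplane(v_rest))      # spans the three spatial directions
print(simultaneous_hyperplane(v_boost))     # a different, tilted hyperplane
```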
When two world lines $u$ and $w$ are related by $\frac{du}{d\tau} = \frac{dw}{d\tau}$, they share the same simultaneous hyperplane. This hyperplane exists mathematically, but physical relations in relativity involve the movement of information by light. For instance, the traditional electrostatic force described by Coulomb's law may be pictured in a simultaneous hyperplane, but relativistic relations of charge and force involve retarded potentials.
The use of world lines in general relativity is basically the same as in special relativity, with the difference that spacetime can be curved. A metric exists, and its dynamics are determined by the Einstein field equations and are dependent on the mass-energy distribution in spacetime. Again the metric defines lightlike (null), spacelike, and timelike curves. Also, in general relativity, world lines include timelike curves and null curves in spacetime, where timelike curves fall within the light cone. A light cone, however, is not necessarily inclined at 45 degrees to the time axis; this is an artifact of the chosen coordinate system and reflects the coordinate freedom (diffeomorphism invariance) of general relativity. Any timelike curve admits a comoving observer whose "time axis" corresponds to that curve, and, since no observer is privileged, we can always find a local coordinate system in which light cones are inclined at 45 degrees to the time axis. See also, for example, Eddington–Finkelstein coordinates.

World lines of free-falling particles or objects (such as planets around the Sun or an astronaut in space) are called geodesics.
Quantum field theory, the framework in which all of modern particle physics is formulated, is usually described as a theory of quantized fields. However, although not widely appreciated, it has been known since Feynman[2] that many quantum field theories may equivalently be described in terms of world lines; this preceded much of his work[3] on the formulation that later became more standard. The world line formulation of quantum field theory has proved particularly fruitful for various calculations in gauge theories[4][5][6] and in describing nonlinear effects of electromagnetic fields.[7][8]
In 1884 C. H. Hinton wrote an essay, "What is the fourth dimension?", which he published as a scientific romance. He wrote:
A popular description of human world lines was given by J. C. Fields at the University of Toronto in the early days of relativity. As described by Toronto lawyer Norman Robertson:
Kurt Vonnegut, in his novel Slaughterhouse-Five, describes the world lines of stars and people:
Almost all science-fiction stories that actively use this concept, for instance to enable time travel, oversimplify it to a one-dimensional timeline in order to fit a linear structure, which does not fit models of reality. Such time machines are often portrayed as being instantaneous, with their contents departing at one time and arriving at another, but at the same literal geographic point in space. This is often carried out without note of a reference frame, or with the implicit assumption that the reference frame is local; as such, it would require either accurate teleportation (since a rotating planet, being under acceleration, is not an inertial frame) or for the time machine to remain in the same place with its contents 'frozen'.
Author Oliver Franklin published a science fiction work in 2008 entitled World Lines in which he related a simplified explanation of the hypothesis for laymen.[11]

In the short story "Life-Line", author Robert A. Heinlein describes the world line of a person:[12]

Heinlein's Methuselah's Children uses the term, as does James Blish's The Quincunx of Time (expanded from "Beep").

A visual novel named Steins;Gate, produced by 5pb., tells a story based on the shifting of world lines. Steins;Gate is a part of the "Science Adventure" series. World lines and other physical concepts like the Dirac Sea are also used throughout the series.

Neal Stephenson's novel Anathem involves a long discussion of world lines over dinner in the midst of a philosophical debate between Platonic realism and nominalism.
Absolute Choice depicts different world lines as a sub-plot and setting device.
A space armada trying to complete a (nearly) closed time-like path as a strategic maneuver forms the backdrop and a main plot device of Singularity Sky by Charles Stross.
|
https://en.wikipedia.org/wiki/World_line
|
In gravitation theory, a world manifold endowed with some Lorentzian pseudo-Riemannian metric and an associated space-time structure is a space-time. Gravitation theory is formulated as classical field theory on natural bundles over a world manifold.
A world manifold is a four-dimensional orientable real smooth manifold. It is assumed to be a Hausdorff and second countable topological space. Consequently, it is a locally compact space which is a union of a countable number of compact subsets, a separable space, and a paracompact and completely regular space. Being paracompact, a world manifold admits a partition of unity by smooth functions. Paracompactness is an essential characteristic of a world manifold: it is necessary and sufficient for a world manifold to admit a Riemannian metric, and necessary for the existence of a pseudo-Riemannian metric. A world manifold is assumed to be connected and, consequently, it is arcwise connected.
The tangent bundle $TX$ of a world manifold $X$ and the associated principal frame bundle $FX$ of linear tangent frames in $TX$ have the structure group $GL^+(4,\mathbb{R})$, the general linear group. A world manifold $X$ is said to be parallelizable if the tangent bundle $TX$ and, accordingly, the frame bundle $FX$ are trivial, i.e., there exists a global section (a frame field) of $FX$. It is essential that the tangent and associated bundles over a world manifold admit a bundle atlas with a finite number of trivialization charts.

Tangent and frame bundles over a world manifold are natural bundles characterized by general covariant transformations. These transformations are gauge symmetries of gravitation theory on a world manifold.
By virtue of the well-known theorem on structure group reduction, the structure group $GL^+(4,\mathbb{R})$ of a frame bundle $FX$ over a world manifold $X$ is always reducible to its maximal compact subgroup $SO(4)$. The corresponding global section of the quotient bundle $FX/SO(4)$ is a Riemannian metric $g^R$ on $X$. Thus, a world manifold always admits a Riemannian metric, which makes $X$ a metric topological space.
In accordance with the geometric Equivalence Principle, a world manifold possesses a Lorentzian structure, i.e., the structure group of a frame bundle $FX$ must be reduced to the Lorentz group $SO(1,3)$. The corresponding global section of the quotient bundle $FX/SO(1,3)$ is a pseudo-Riemannian metric $g$ of signature $(+,-,-,-)$ on $X$. It is treated as a gravitational field in General Relativity and as a classical Higgs field in gauge gravitation theory.
A Lorentzian structure need not exist. Therefore, a world manifold is assumed to satisfy a certain topological condition: it is either a noncompact topological space or a compact space with zero Euler characteristic. Usually, one also requires that a world manifold admit a spinor structure in order to describe Dirac fermion fields in gravitation theory. There is an additional topological obstruction to the existence of this structure; in particular, a noncompact world manifold must be parallelizable.
If the structure group of a frame bundle $FX$ is reducible to the Lorentz group, the latter is always reducible to its maximal compact subgroup $SO(3)$. Thus, there is a commutative diagram of the reduction of structure groups of a frame bundle $FX$ in gravitation theory. This reduction diagram results in the following.
(i) In gravitation theory on a world manifold $X$, one can always choose an atlas of a frame bundle $FX$ (characterized by local frame fields $\{h^\lambda\}$) with $SO(3)$-valued transition functions. These transition functions preserve a time-like component $h_0 = h_0^\mu \partial_\mu$ of local frame fields, which therefore is globally defined. It is a nowhere vanishing vector field on $X$. Accordingly, the dual time-like covector field $h^0 = h^0_\lambda dx^\lambda$ is also globally defined, and it yields a spatial distribution $\mathfrak{F} \subset TX$ on $X$ such that $h^0 \rfloor \mathfrak{F} = 0$. Then the tangent bundle $TX$ of a world manifold $X$ admits a space-time decomposition $TX = \mathfrak{F} \oplus T^0X$, where $T^0X$ is a one-dimensional fibre bundle spanned by the time-like vector field $h_0$. This decomposition is called the $g$-compatible space-time structure. It makes a world manifold a space-time.
(ii) Given the above-mentioned diagram of reduction of structure groups, let $g$ and $g^R$ be the corresponding pseudo-Riemannian and Riemannian metrics on $X$. They form a triple $(g, g^R, h^0)$ obeying the relation $g = 2 h^0 \otimes h^0 - g^R$.
Conversely, let a world manifold $X$ admit a nowhere vanishing one-form $\sigma$ (or, equivalently, a nowhere vanishing vector field). Then any Riemannian metric $g^R$ on $X$ yields the pseudo-Riemannian metric $g = \frac{2}{g^R(\sigma,\sigma)}\,\sigma \otimes \sigma - g^R$. It follows that a world manifold $X$ admits a pseudo-Riemannian metric if and only if there exists a nowhere vanishing vector (or covector) field on $X$.
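A minimal numerical sketch of this construction at a single point, using the formula just quoted (the Euclidean input metric and the particular covector are illustrative assumptions):

```python
# Build a pseudo-Riemannian metric of signature (+,-,-,-) from a Riemannian
# metric gR and a nowhere vanishing covector sigma, via
# g = 2 (sigma (x) sigma) / gR(sigma, sigma) - gR, evaluated at one point.
import numpy as np

def lorentzian_from_riemannian(gR, sigma):
    gR_inv = np.linalg.inv(gR)
    norm2 = sigma @ gR_inv @ sigma      # gR(sigma, sigma): covectors pair via the inverse
    return 2.0 * np.outer(sigma, sigma) / norm2 - gR

gR = np.eye(4)                          # Euclidean metric at the chosen point
sigma = np.array([1.0, 0.0, 0.0, 0.0])  # value of the covector field there
g = lorentzian_from_riemannian(gR, sigma)
print(g)                                # diag(1, -1, -1, -1): Minkowski signature
print(np.linalg.eigvalsh(g))            # one positive and three negative eigenvalues
```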
Let us note that a $g$-compatible Riemannian metric $g^R$ in a triple $(g, g^R, h^0)$ defines a $g$-compatible distance function on a world manifold $X$. Such a function makes $X$ a metric space whose locally Euclidean topology is equivalent to the manifold topology on $X$. Given a gravitational field $g$, the $g$-compatible Riemannian metrics and the corresponding distance functions are different for different spatial distributions $\mathfrak{F}$ and $\mathfrak{F}'$. It follows that physical observers associated with these different spatial distributions perceive a world manifold $X$ as different Riemannian spaces. The well-known relativistic changes of sizes of moving bodies exemplify this phenomenon.
One may, however, attempt to derive a world topology directly from a space-time structure (a path topology, an Alexandrov topology). If a space-time satisfies the strong causality condition, such topologies coincide with the familiar manifold topology of a world manifold. In the general case, however, they are rather extraordinary.
A space-time structure is called integrable if a spatial distribution $\mathfrak{F}$ is involutive. In this case, its integral manifolds constitute a spatial foliation of a world manifold whose leaves are spatial three-dimensional subspaces. A spatial foliation is called causal if no curve transversal to its leaves intersects each leaf more than once. This condition is equivalent to the stable causality of Stephen Hawking. A space-time foliation is causal if and only if it is a foliation of level surfaces of some smooth real function on $X$ whose differential nowhere vanishes. Such a foliation is a fibred manifold $X \to \mathbb{R}$.
However, this is not the case for a compact world manifold, which cannot be a fibred manifold over $\mathbb{R}$.

Stable causality does not provide the simplest causal structure. If a fibred manifold $X \to \mathbb{R}$ is a fibre bundle, it is trivial, i.e., a world manifold $X$ is a globally hyperbolic manifold $X = \mathbb{R} \times M$. Since any oriented three-dimensional manifold is parallelizable, a globally hyperbolic world manifold is parallelizable.
|
https://en.wikipedia.org/wiki/World_manifold
|
Worldchanging: A User's Guide for the 21st Century is a book about environmental concerns and practical responses to them. It is a compendium of the solutions, ideas and inventions emerging today for building a sustainable, livable, prosperous future.[1] In November 2006, Worldchanging published a survey of global innovation, with a foreword by Al Gore, design by Stefan Sagmeister and an introduction by Bruce Sterling. It has received praise, won the 2007 "Green Prize" for sustainability literature,[2] and is being translated into French under the title Change Le Monde,[3] as well as German and several other languages.[4] Harry N. Abrams, Inc., the publisher of the hardcover edition, listed it among their 50 best-selling titles in July 2008.[citation needed]

The book was mentioned by BusinessWeek as one of the "Best Innovation and Design Books for 2006".[5]

The updated version, Worldchanging, Revised Edition: A User's Guide for the 21st Century (ISBN 978-0-8109-9746-2), was published on April 1, 2011. Alex Steffen is the author, Bill McKibben wrote the introduction, and Van Jones wrote the foreword. Sagmeister Inc. was the designer.[citation needed]
|
https://en.wikipedia.org/wiki/Worldchanging_(book)
|
The worm-like chain (WLC) model in polymer physics is used to describe the behavior of polymers that are semi-flexible: fairly stiff, with successive segments pointing in roughly the same direction, and with persistence length within a few orders of magnitude of the polymer length. The WLC model is the continuous version of the Kratky–Porod model.
The WLC model envisions a continuously flexible isotropic rod.[1][2][3] This is in contrast to the freely-jointed chain model, which is flexible only between discrete freely hinged segments. The model is particularly suited for describing stiffer polymers, with successive segments displaying a sort of cooperativity: nearby segments are roughly aligned. At room temperature, the polymer adopts a smoothly curved conformation; at $T = 0$ K, the polymer adopts a rigid rod conformation.[1]
For a polymer of maximum length $L_0$, parametrize the path of the polymer as $s \in (0, L_0)$. Let $\hat{t}(s)$ be the unit tangent vector to the chain at point $s$, and $\vec{r}(s)$ the position vector along the chain. Then $\hat{t}(s) = \frac{\partial \vec{r}(s)}{\partial s}$.
The energy associated with the bending of the polymer can be written as:
$$E = \frac{1}{2} k_B T \int_0^{L_0} P \cdot \left(\frac{\partial^2 \vec{r}(s)}{\partial s^2}\right)^2 ds$$
where $P$ is the polymer's characteristic persistence length, $k_B$ is the Boltzmann constant, and $T$ is the absolute temperature. At finite temperatures, the end-to-end distance of the polymer will be significantly shorter than the maximum length $L_0$. This is caused by thermal fluctuations, which result in a coiled, random configuration of the undisturbed polymer.
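A minimal sketch of this bending energy for a discretized chain, with the second derivative approximated by central differences (the arc geometry and the value of P are illustrative assumptions, and kBT is set to 1):

```python
# Discretized WLC bending energy: E = (kBT/2) * P * integral |d^2 r/ds^2|^2 ds.
import numpy as np

def bending_energy(r, ds, P, kBT=1.0):
    """Bending energy of a path r sampled as an (N, 3) array at spacing ds."""
    d2r = (r[2:] - 2.0 * r[1:-1] + r[:-2]) / ds**2   # central second difference
    return 0.5 * kBT * P * np.sum(d2r**2) * ds

# Check against an arc of radius R: curvature 1/R, so E = (kBT/2) * P * L / R^2.
R, L, N = 10.0, 5.0, 500
s = np.linspace(0.0, L, N)
ds = s[1] - s[0]
arc = np.stack([R * np.cos(s / R), R * np.sin(s / R), np.zeros(N)], axis=1)
print(bending_energy(arc, ds, P=50.0))   # numerical value
print(0.5 * 50.0 * L / R**2)             # analytic value, 1.25
```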
The polymer's orientation correlation function can then be solved for, and it follows an exponential decay with decay constant $1/P$:[1][3]

$$\langle \hat{t}(s) \cdot \hat{t}(0) \rangle = \langle \cos\theta(s) \rangle = e^{-s/P}$$

A useful value is the mean square end-to-end distance of the polymer:[1][3]

$$\langle R^2 \rangle = \langle \vec{R} \cdot \vec{R} \rangle = \left\langle \int_0^{L_0} \hat{t}(s)\,ds \cdot \int_0^{L_0} \hat{t}(s')\,ds' \right\rangle = \int_0^{L_0} ds \int_0^{L_0} \langle \hat{t}(s) \cdot \hat{t}(s') \rangle\, ds' = \int_0^{L_0} ds \int_0^{L_0} e^{-|s-s'|/P}\, ds'$$

$$\langle R^2 \rangle = 2PL_0 \left[1 - \frac{P}{L_0}\left(1 - e^{-L_0/P}\right)\right]$$
Note that in the limit of $L_0 \gg P$, $\langle R^2 \rangle = 2PL_0$. This can be used to show that a Kuhn segment is equal to twice the persistence length of a worm-like chain. In the limit of $L_0 \ll P$, $\langle R^2 \rangle = L_0^2$, and the polymer displays rigid rod behavior.[2] The crossover from flexible to stiff behavior occurs as the persistence length increases.
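A minimal sketch checking the two limits quoted above:

```python
# <R^2> = 2 P L0 [1 - (P/L0)(1 - exp(-L0/P))] and its two limits.
import numpy as np

def mean_square_R(L0, P):
    return 2.0 * P * L0 * (1.0 - (P / L0) * (1.0 - np.exp(-L0 / P)))

P = 50.0
print(mean_square_R(1e6, P) / (2.0 * P * 1e6))   # ~1: flexible limit, <R^2> = 2 P L0
print(mean_square_R(1e-3, P) / (1e-3) ** 2)      # ~1: rigid-rod limit, <R^2> = L0^2
```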
Experimental data from the stretching of lambda phage DNA, with force measurements determined by analysis of Brownian fluctuations of a bead attached to the DNA, are well described by the model with a persistence length of 51.35 nm and a contour length of 1560.9 nm.[4]
Other biologically important polymers that can be effectively modeled as worm-like chains include double-stranded DNA, unstructured RNA, and unstructured polypeptides (proteins).
Upon stretching, the accessible spectrum of thermal fluctuations is reduced, which causes an entropic force acting against the external elongation. This entropic force can be estimated by considering the total energy of the polymer:
$$H = H_{\rm elastic} + H_{\rm external} = \frac{1}{2} k_B T \int_0^{L_0} P \cdot \left(\frac{\partial^2 \vec{r}(s)}{\partial s^2}\right)^2 ds - xF$$

Here, the contour length is represented by $L_0$, the persistence length by $P$, the extension by $x$, and the external force by $F$.
Laboratory tools such as atomic force microscopy (AFM) and optical tweezers have been used to characterize the force-dependent stretching behavior of biological polymers. An interpolation formula that approximates the force-extension behavior with about 15% relative error is:[11]

$$\frac{FP}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0}$$
A more accurate approximation for the force-extension behavior, with about 0.01% relative error, is:[4]

$$\frac{FP}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} + \sum_{i=2}^{7} \alpha_i \left(\frac{x}{L_0}\right)^i$$

with $\alpha_2 = -0.5164228$, $\alpha_3 = -2.737418$, $\alpha_4 = 16.07497$, $\alpha_5 = -38.87607$, $\alpha_6 = 39.49944$, $\alpha_7 = -14.17718$.
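A minimal sketch of the two force-extension approximations above, returning the dimensionless combination FP/(kBT) as a function of the relative extension l = x/L0:

```python
# The interpolation formula and the seven-coefficient refinement quoted above.
ALPHAS = {2: -0.5164228, 3: -2.737418, 4: 16.07497,
          5: -38.87607, 6: 39.49944, 7: -14.17718}

def f_interp(l):
    """Interpolation formula: F*P/(kB*T) at relative extension l, ~15% error."""
    return 0.25 * (1.0 - l) ** -2 - 0.25 + l

def f_accurate(l):
    """Same expression plus the polynomial correction, ~0.01% error."""
    return f_interp(l) + sum(a * l ** i for i, a in ALPHAS.items())

for l in (0.25, 0.5, 0.75):
    print(l, f_interp(l), f_accurate(l))
```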
A simple and accurate approximation for the force-extension behavior with about 1% relative error is:[12]

$$\frac{FP}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} - 0.8\left(\frac{x}{L_0}\right)^{2.15}$$
An approximation for the extension-force behavior with about 1% relative error was also reported:[12]
The elastic response from extension cannot be neglected: polymers elongate due to external forces. This enthalpic compliance is accounted for by the material parameter $K_0$, and the system yields the following Hamiltonian for significantly extended polymers:
$$H = H_{\rm elastic} + H_{\rm enthalpic} + H_{\rm external} = \frac{1}{2} k_B T \int_0^{L_0} P \cdot \left(\frac{\partial^2 \vec{r}(s)}{\partial s^2}\right)^2 ds + \frac{1}{2}\frac{K_0}{L_0}x^2 - xF$$
This expression contains both the entropic term, which describes changes in the polymer conformation, and the enthalpic term, which describes the stretching of the polymer due to the external force.
Several approximations for the force-extension behavior have been put forward, depending on the applied external force. These approximations are made for stretching DNA in physiological conditions (near-neutral pH, ionic strength approximately 100 mM, room temperature), with a stretch modulus around 1000 pN.[13][14]
For the low-force regime (F < about 10 pN), the following interpolation formula was derived:[15]

$$\frac{FP}{k_B T} = \frac{1}{4}\left(1 - \frac{x}{L_0} + \frac{F}{K_0}\right)^{-2} - \frac{1}{4} + \frac{x}{L_0} - \frac{F}{K_0}$$
For the higher-force regime, where the polymer is significantly extended, the following approximation is valid:[16]

$$x = L_0\left(1 - \frac{1}{2}\left(\frac{k_B T}{FP}\right)^{1/2} + \frac{F}{K_0}\right)$$
As for the case without extension, a more accurate formula was derived:[4]

$$\frac{FP}{k_B T} = \frac{1}{4}\left(1 - l\right)^{-2} - \frac{1}{4} + l + \sum_{i=2}^{7} \alpha_i l^i$$

with $l = \frac{x}{L_0} - \frac{F}{K_0}$. The $\alpha_i$ coefficients are the same as in the above formula for the WLC model without elasticity.
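Since the low-force interpolation formula gives the force only implicitly, it has to be inverted numerically in practice. A minimal sketch with a bracketing root find (the parameter values P = 50 nm, L0 = 1000 nm, K0 = 1000 pN and kBT ≈ 4.11 pN·nm are illustrative, in the range quoted above for DNA):

```python
# Solve the extensible-WLC interpolation formula for the force F at a given
# extension x, using a bracketing root find.
from scipy.optimize import brentq

KBT = 4.11  # pN*nm, approximately kB*T at room temperature

def residual(F, x, L0=1000.0, P=50.0, K0=1000.0):
    l = x / L0 - F / K0
    return 0.25 * (1.0 - l) ** -2 - 0.25 + l - F * P / KBT

def force_at_extension(x):
    return brentq(residual, 1e-9, 500.0, args=(x,))

for x in (200.0, 500.0, 900.0):
    print(x, force_at_extension(x))   # force in pN rises steeply near full extension
```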
Accurate and simple interpolation formulas for the force-extension and extension-force behaviors of the extensible worm-like chain model are:[12]
|
https://en.wikipedia.org/wiki/Worm-like_chain
|
WormBook is an open access, comprehensive collection of original, peer-reviewed chapters covering topics related to the biology of the nematode worm Caenorhabditis elegans (C. elegans). WormBook also includes WormMethods, an up-to-date collection of methods and protocols for C. elegans researchers.[1][2][3][4]

WormBook is the online text companion to WormBase,[5] the C. elegans model organism database. Capitalizing on the World Wide Web, WormBook links in-text references (e.g. genes, alleles, proteins, literature citations) to primary biological databases such as WormBase and PubMed. C. elegans was the first multicellular organism to have its genome sequenced[6] and is a model organism for studying developmental genetics and neurobiology.
The content of WormBook is organized into topical sections, each filled with a variety of relevant chapters.
|
https://en.wikipedia.org/wiki/WormBook
|
Worm is a self-published web serial by John C. "Wildbow" McCrae and the first installment of the Parahumans series, known for subverting and playing with common tropes and themes of superhero fiction. It was McCrae's first novel.[4] Worm features a bullied teenage girl, Taylor Hebert, who develops the superpower to control worms, insects, arachnids, and other simple lifeforms.[5][6] Using a combination of ingenuity, idealism, and brutality, she struggles to do the right thing in a dark world filled with moral ambiguity.[7][8] It is one of the most popular web serials on the internet,[9][10] with a readership in the hundreds of thousands.[2] A sequel, titled Ward, was published from November 2017 to May 2020.
Worm was first published as an online serial, with two to three chapters released every week. Publication began in June 2011 and continued until November 2013,[5][11] totaling approximately 1,682,400 words.
The story was written at a rate comparable to a traditional book being published every month.[10] It followed a strict publication schedule,[10][4] with new chapters released every Tuesday and Saturday, and bonus chapters on Thursdays as rewards for donations.[12] These chapters were arranged into 31 arcs, each of which covered a specific series of events over six to 12 chapters and concluded with an interlude from the point of view of a side character.

In contrast to traditional publishing, which follows a short-term model, Worm's readership grew slowly but steadily, beginning with 13 views in June 2011, the month the serial began; 26,844 monthly views a year later; and 207,833 monthly views by June 2013. Views peaked at 1,390,648 in November 2013, when the book ended, and remained steady with 693,675 monthly views even five years later.[13][14]

As of 2019, Worm is being edited, and McCrae plans to produce both an eBook version and a physical book via traditional publishing.[10][5]
Worm is set in a fictional, alternate universe known as "Earth Bet". The events of Earth Bet closely follow those of our own Earth until a naked, golden man named Scion appears over the ocean in 1982. Following his appearance, a fraction of humans gain superpowers if placed in a traumatic and stressful situation, known in-story as a "trigger event".

The arrival of "parahumans" in 1982 leads to a "Golden Age of Heroism", during which the majority of people with powers work for the public good. In 1989, after a parahuman dies trying to prevent a riot, superpowered serial killers, thieves, cults, and gang members begin to increasingly threaten public safety. Governments worldwide create agencies to counter parahuman criminals, including the Parahuman Response Team (PRT) in Canada and the United States.

In 1992, a giant monster launches a devastating attack on the Marun Field, Iran. To adequately prepare humanity for future attacks and to manage the growing villain population, four prominent heroes form the Protectorate, an organization subordinate to the PRT dedicated to cooperation among superheroes. Two more monsters (collectively referred to as "Endbringers") appear and begin attacking cities across the planet, causing the loss of millions of lives, as well as catastrophic and irreversible economic and geographic damage. The PRT and Protectorate are forced to treat villains more leniently in return for assistance in fighting Endbringers. Mired in bureaucracy and politics, the PRT is increasingly unable to cope with the growing frequency and brutality of parahuman crimes.
The story is set in the fictitious city of Brockton Bay, a formerly wealthy port that has severely declined after Endbringer attacks led to the collapse of the shipping industry. Due to the poverty of the area, it has a higher number of parahumans per capita than any other American city, and a number of superhuman-led gangs vie for control over the city's criminal enterprises.
Individuals who possess powers in Worm are referred to formally as "parahumans" and informally as "capes", a term referencing the general habit of parahumans to establish an alter ego and go out in costume (with or without literal "capes").
In order to acquire a power in Worm, one must suffer a trigger event: a moment of severe physical or psychological trauma. The properties of the power are influenced by the nature of the threat the individual faces and the individual's thought process at the time. Some parahumans, when placed under extreme stress reminiscent of or beyond that of their initial trigger event, may trigger a second time, expanding and refining their powers to increase their odds of survival.

In rare cases, multiple individuals may suffer a simultaneous trigger event known as a "cluster trigger". Cluster triggers often cause complicated and intense interpersonal relationships within the cluster, commonly referred to as the Kiss/Kill dynamic.

Individuals who have spent multiple years around capes will, on occasion, trigger with a similar power. These individuals are generally referred to as "second generation capes", as many are children of previous parahumans, although powers are not, strictly speaking, heritable. The mechanics behind how powers work are gradually explained throughout the story.
Alternatively, powers can be acquired through Cauldron, a shadowy and secretive group which has discovered a method to artificially induce powers. This process carries a significant risk of deformation or mutilation, with the strength or safety of powers acquired varying greatly. Individuals whose Cauldron-acquired powers result in noticeable or grotesque mutation are referred to as "Case-53s" or "monster capes".
Powers in Worm obey several arbitrary constraints. All powers can be used aggressively, regardless of manifestation; thus, healers are incredibly rare, often existing only as a byproduct of another power or alongside other abilities suited to combat. Another limitation is the "Manton effect": a parahuman's abilities very rarely affect both organic and inorganic material, and parahumans are instinctively protected from their own power; i.e., a power may affect the landscape but not the people within it, or vice versa. Some parahumans in the setting have been able to bypass this effect, although the circumstances that enable this are typically difficult or dangerous to replicate and depend heavily on the individual in question, and those who do so tend to suffer debilitating psychological and physiological effects.
Taylor Hebert is a 15-year-old parahuman who has developed the power to sense and control insects and other small invertebrates following a traumatic event at the hands of bullies. She lives in the fictional city of Brockton Bay, a hotspot of parahuman activity, and seeks to become a superhero. On her first night out in costume, she defeats a superpowered gang leader and is mistaken for a villain by a team of teenage parahuman thieves known as the Undersiders, who work jobs for a mysterious benefactor. Taylor joins the team, hoping to learn the identity of their boss before turning them in to the authorities. However, Taylor grows increasingly close to the Undersiders, while having repeated poor run-ins with the Parahuman Response Team (PRT), the United States' parahuman law enforcement agency, and the superheroes of the PRT's sister organization, the Protectorate. She ultimately finds herself unable to betray the Undersiders and becomes fully committed to them, adopting the moniker "Skitter" and abandoning her dream of becoming a superhero. After a job, Taylor learns that the Undersiders have unwittingly assisted their patron, the gang lord known as Coil, in the kidnapping of Dinah, a girl with powerful precognitive powers, and is wracked with guilt over her involvement.
In part due to violence initiated by the defeat of various gangs by the Undersiders, Brockton Bay experiences a period of instability. This culminates in an attack by Leviathan, one of three powerful monsters collectively called the Endbringers, which devastates the city. In the aftermath, Coil directs the Undersiders and a group of contracted villains, the Travelers, in seizing territory, and they begin to operate as makeshift warlords in the ruined city. Privately, Taylor and the Undersiders plot to depose Coil if they cannot secure Dinah's release. The Travelers struggle to find a cure for their fifth teammate, Noelle, whose undisclosed condition keeps her confined to a highly secure vault in Coil's base. When Jack Slash, the theatrical leader of a notorious gang of parahuman serial killers known as the Slaughterhouse Nine, invades Brockton Bay, Dinah predicts he will bring about the end of the world in two years if not stopped. The city weathers the incursion, but its parahumans fail to kill either Jack or his prized protege Bonesaw, a young girl kidnapped and moulded by the gang of serial killers. In the process of escaping the city, Jack learns of Dinah's prophecy and decides to fulfill it and end the world.
Coil, valuing Dinah's precognitive abilities too highly to consider her release, attempts to assassinate Taylor when she pushes the issue. She survives and the Undersiders kill Coil in retaliation. However, as a final act of vengeance, Coil unleashes Noelle (now called “Echidna”), who is revealed to be a monstrous parahuman with power and lethality on par with the Endbringers. The ensuing battle between Echidna and the desperate alliance comprising the Undersiders and the heroes devastates the already-ruined city even further, and lack of cohesion between the Undersiders and the heroes (such as the heroes ignoring the Undersiders' warnings that Echidna can clone parahumans) worsens the conflict as well. After Echidna is defeated, Dinah is later returned to her remaining family, and the Undersiders seize control of the remnants of Coil's criminal empire, fully entrenching themselves as the shadowy rulers of Brockton Bay, though it remains ostensibly governed by the United States. Together, Taylor and the Undersiders carefully balance staving off attempts by regional criminal organizations to establish footholds in their city with scuffles against the legal authority of the city and assisting the remaining civilian population.
Tensions with the authorities later come to a head when Protectorate heroes arrive at a school Taylor is visiting in an attempt to arrest her, publicly revealing her identity as Skitter in the ensuing standoff. Taylor is further dismayed when she is informed by the heroes that Dinah had turned on her and was voluntarily aiding them in their operation to capture her. Despite the overwhelming advantage that Dinah's abilities had given the heroes, Taylor leverages her reputation and the Protectorate's dwindling popularity to convince almost a hundred students to help her escape, and flees to the safety of her territory. The Undersiders then continue with their operations in Brockton Bay, later carrying out an attack against the PRT's local headquarters in retaliation for outing Skitter and as a show of force to deter rival gangs from action.
Taylor later learns from Dinah that the odds of averting the end of the world would increase if she were taken by the authorities. In response, Taylor devises a daring plan to conditionally surrender to the PRT and Protectorate, in the hope of gaining a position from which to force meaningful change in the declining organizations before Dinah's deadline, as well as sparing the rest of the Undersiders from continued persecution by the authorities. The plan nearly fails when Alexandria, one of the world's most powerful heroes and secretly the PRT's Chief Director, attempts to force Taylor into accepting less-than-favorable demands by seemingly executing the Undersiders one by one until she submits, prompting Taylor to kill her and others in a blind rage. Taylor tries to escape in the aftermath of her attack, but is pursued by Protectorate heroes and learns from them that Alexandria had tricked her into believing that she was killing her friends. Reluctantly, Taylor agrees to surrender when the heroes accept most of her terms. Alexandria is publicly framed as a villain whose death at Skitter's hand had been necessary, and Taylor becomes a probationary superhero under the name "Weaver".
After being tried and convicted as an adult, Taylor leaves Brockton Bay for the city of Chicago and is assigned to the local Wards, a team of teenage superheroes attached as a youth group to the Chicago Protectorate. Unsurprisingly, Weaver chafes under the restrictions imposed by her new superiors, with the Protectorate heroes and PRT officials naturally reticent and distrustful towards her (and some among the latter group even making veiled efforts to sabotage her), and with Weaver feeling less productive than when she was a supervillain. She also finds herself missing the Undersiders dearly, but is prohibited from having any contact with them under the conditions of her probation, and fights the urge to return home because of the importance placed on adhering to Dinah's predictions. Not long into Weaver's trial period with the Chicago Wards, Behemoth, another of the three Endbringers, surfaces and attacks New Delhi, India. The battle turns favorably for the capes and miraculously concludes in Behemoth's death. Weaver is instrumental in many capes' survival and in Behemoth's demise, earning her popular renown and recognition as a genuine superhero, as well as cementing her place in the Wards.
Over the next two years, Taylor continues serving out her sentence with the Chicago Wards. When Dinah's deadline passes, Jack Slash resurfaces with an army of cloned members (and former members) of the Slaughterhouse Nine, forcing Weaver and her allies to go to war against Jack to prevent the apocalypse. Though they manage to hold off Jack and his army, Jack reaches Scion, the most powerful superhero of all, and convinces him to begin an apocalyptic, interdimensional rampage that would become known as "Gold Morning": the prophesied end of the world. Billions are killed across the multiverse over the course of Gold Morning, with Scion overpowering the parahuman defenders with great ease. It is revealed that Scion is the avatar of an unfathomably powerful alien entity referred to as The Warrior, responsible for seeding the planet with superpowers in the first place as part of his species' reproductive cycle, a cycle that would end in the destruction of the planet and all its alternate-dimension counterparts. One of the most powerful capes, Eidolon, is revealed to have unintentionally created the Endbringers before he is killed. The Undersiders find the remaining Endbringers and convince them to aid in the battle.
Taylor, in a desperate bid to defeat Scion, has her power surgically altered to control humans as well as insects, becoming "Khepri". The plan succeeds, with Khepri gaining complete mental domination over the trans-dimensional array of parahuman defenders, coordinating the parahumans with great acuity and overwhelming the alien entity. Gold Morning ends with humanity victorious, but the surgery causes Taylor's power to consume her mind and render her insane, prompting her to flee from her allies into the expanse of the multiverse before being accosted by Contessa, another powerful precognitive parahuman. Contessa is shown to be the final agent and architect of the interdimensional conspiracy known as Cauldron, suggested to be responsible for many of the previous events of the story, with the aim of combating Scion. Contessa then shoots the ailing Taylor twice in the head, seemingly euthanizing her and neutralizing whatever threat Khepri might later pose. In a series of epilogues, the short-term fates of the remaining Undersiders and the surviving heroes following Gold Morning are addressed.
Unbeknownst to all her remaining friends and former allies, Taylor survived and is living in exile alongside her father on Earth Aleph, an alternate dimension that was sealed off from the rest of the multiverse upon Scion's defeat. Contessa's gunshots appear to have restored Taylor's sanity, but seemingly removed her powers as well. With her short and frenetic life as a cape having ended, Taylor intends to settle into a quiet existence with the anonymity afforded to her on Earth Aleph, and later encounters and befriends an alternate version of her late mother.
Hal Wierzbicki of entertainment site C0ws observed that
If I had to identify a theme running through all of Worm, it’s Taylor wanting to make the world a better place, a safer place, for herself, her family and her friends. If I had to pick a second theme, it would be that those good intentions aren’t enough. Taylor seems to make the best decision at any possible moment, the decision that gets her out of a losing fight, the decision that saves the lives of her friends, the decision that wins a battle. Yet, in doing so, things just get worse. [ 11 ]
Gavin Scott Williams suggested that the story contains an "undercurrent" of the idea that "sometimes you have to go outside the rules to do the right thing".[15] Several authors have compared the story to Alan Moore's Watchmen,[16][17] as well as to the character of Spider-Man and his themes of responsibility,[16] although McCrae has stated in interviews that no one author has heavily influenced him.[4]
The title Worm has multiple potential meanings. It has been connected to the protagonist's character development as a "lowly, overlooked" person who is nonetheless useful and dangerous, drawing a parallel with the protagonist's power to control worms and other bugs.[15] The arc titles also generally have double meanings.[18]

Several reviewers have described the serial as an exercise in repeatedly escalating the stakes of the story.[16][19]

A number of reviewers have noted the characters' ingenuity, and the original and creative use of superpowers in the narrative.[11][19] Author Adam Sherman described one of the recurring themes of the story as "that powers don’t really make the person, it's the person who makes the power". McCrae has described how he would regularly write himself into corners, so that "the desperate gambits we see are echoed by my writerly desperation to figure out a way to keep things going."[4] G. S. Williams drew a parallel between the protagonist's power being seemingly underwhelming, and her being overlooked in her civilian life, and the broader theme of things being overlooked.[15]
Worm has received almost entirely favorable reviews.[18][20] It received substantial attention following a favorable review by author Gavin Scott Williams roughly six months into publication, which praised the story's themes and originality.[7][15] Readership doubled when it was recommended by author Eliezer Yudkowsky on his website while the story was in its final months.[4]

Critics favorably compared it to the similar-length book series A Song of Ice and Fire.[1][17] Matt Freeman of Doof Media praised the story's originality, noting that it works as a science fiction story to a degree not found in most works of superhero fiction.[16] Media site Toolsandtoys.net published a review by Chris Gonzales, who described it as "one of my favorite stories ever written". However, he also noted that it was "dark", warning "definitely don’t hand this to a kid to read".[5] Chris Ellis of Ergohacks.com noted that the story "managed to hit every single trigger warning we have listed", but called it "among the best books and universes I’ve ever read."[21]
Reviewers have praised the story's realism and use of consequences, contrasting it favorably with the tendency for characters to miraculously return from the dead in superhero comic books and films.[6][16] Many praised the story's originality and creative use of superpowers.[11] Several reviewers commended the detail, consistency, and depth of the setting.[22][23]

Several reviews praised the story as being highly addictive.[1][16]

The story also possesses a sizable online fanbase.[7] Fans of the story have collaborated to create a complete audio book, as well as other projects, such as the We've Got Worm podcast, a weekly arc-by-arc podcast with a first-time reader and a Worm expert.[24][25] Fan art relating to the novel has been published on DeviantArt, as well as a large amount of fan fiction.[1][10] There is a constantly active IRC chatroom established for readers to comment on and discuss the story, as well as communities of fans on a number of online forums.[7] Worm, along with McCrae's other completed works Pact, Twig, and the Worm sequel Ward, is consistently among the highest-rated works on the ratings site TopWebFiction,[1] and Worm is the highest-rated work on several websites that collect serial fiction.[10][12] Worm has an average rating of 4.61 out of 5 stars on Goodreads, with over 8000 user ratings.[26]
Several publications have discussed Worm within the context of the increasing popularity of web serials,[2][11][16] and have compared it to the work of authors such as Charles Dickens and Mark Twain, who also wrote in the serial format.[2][16] Authors Olivia Rising and Adam Sherman have credited it as a decisive influence on their work.[9][27]
A number of companies have approached McCrae to discuss adapting Worm, as well as another of his serials, Twig. However, McCrae takes a pessimistic view of whether it will be successfully adapted.[18]

The story received additional media attention in 2025, during investigations into the Zizians, following a spree of killings and arrests. The group's founder, Ziz LaSota, had named herself for the Simurgh, the Endbringer who turns people who spend time around her dangerously violent. Worm constitutes a common literary reference point in the rationalist community.[28]

In October 2017, McCrae announced on his blog that a sequel to Worm would be released.[29] The interim story arc, Glow-worm, was released beginning October 21, 2017,[30] and the sequel, Ward, featuring a new protagonist, began serialization on November 11, 2017,[31] and concluded on May 2, 2020.[32]
|
https://en.wikipedia.org/wiki/Worm_(web_serial)
|
A wormhole is a hypothetical structure that connects disparate points in spacetime. It may be visualized as a tunnel with two ends at separate points in spacetime (i.e., different locations, different points in time, or both). Wormholes are based on a special solution of the Einstein field equations.[1]
Wormholes are consistent with the general theory of relativity , but whether they actually exist is unknown. Many physicists postulate that wormholes are merely projections of a fourth spatial dimension , analogous to how a two-dimensional (2D) being could experience only part of a three-dimensional (3D) object. [ 3 ] A well-known analogy of such constructs is provided by the Klein bottle , displaying a hole when rendered in three dimensions but not in four or higher dimensions.
In 1995, Matt Visser suggested there may be many wormholes in the universe if cosmic strings with negative mass were generated in the early universe . [ 4 ] [ 5 ] Some physicists, such as Kip Thorne , have suggested how to make wormholes artificially. [ 6 ]
For a simplified notion of a wormhole, space can be visualized as a two-dimensional surface. In this case, a wormhole would appear as a hole in that surface, lead into a 3D tube (the inside surface of a cylinder ), then re-emerge at another location on the 2D surface with a hole similar to the entrance. An actual wormhole would be analogous to this, but with the spatial dimensions raised by one. For example, instead of circular holes on a 2D plane , the entry and exit points could be visualized as spherical holes in 3D space leading into a four-dimensional "tube" similar to a spherinder . [ citation needed ]
Another way to imagine wormholes is to take a sheet of paper and draw two somewhat distant points on one side of the paper. The sheet of paper represents a plane in the spacetime continuum , and the two points represent a distance to be traveled, but theoretically, a wormhole could connect these two points by folding that plane ( i.e. the paper) so the points are touching. In this way, it would be much easier to traverse the distance since the two points are now touching. [ citation needed ]
In 1928, German mathematician, philosopher and theoretical physicist Hermann Weyl proposed a wormhole hypothesis of matter in connection with mass analysis of electromagnetic field energy; [ 7 ] [ 8 ] however, he did not use the term "wormhole" (he spoke of "one-dimensional tubes" instead). [ 9 ]
American theoretical physicist John Archibald Wheeler (inspired by Weyl's work) [ 9 ] coined the term "wormhole". [ 10 ] [ 11 ] [ 12 ] In a 1957 paper that he wrote with Charles W. Misner , they write: [ 13 ]
This analysis forces one to consider situations ... where there is a net flux of lines of force, through what topologists would call "a handle " of the multiply-connected space, and what physicists might perhaps be excused for more vividly terming a "wormhole".
Wormholes have been defined both geometrically and topologically . [ further explanation needed ] From a topological point of view, an intra-universe wormhole (a wormhole between two points in the same universe) is a compact region of spacetime whose boundary is topologically trivial, but whose interior is not simply connected . Formalizing this idea leads to definitions such as the following, taken from Matt Visser's Lorentzian Wormholes (1996). [ 14 ] [ page needed ]
If a Minkowski spacetime contains a compact region Ω, and if the topology of Ω is of the form Ω ~ ℝ × Σ, where Σ is a three-manifold of nontrivial topology whose boundary has the topology of the form ∂Σ ~ S², and if, furthermore, the hypersurfaces Σ are all spacelike, then the region Ω contains a quasi-permanent intrauniverse wormhole.
Geometrically, wormholes can be described as regions of spacetime that constrain the incremental deformation of closed surfaces. For example, in Enrico Rodrigo's The Physics of Stargates, a wormhole is defined informally as:
a region of spacetime containing a " world tube " (the time evolution of a closed surface) that cannot be continuously deformed (shrunk) to a world line (the time evolution of a point or observer).
The first type of wormhole solution discovered was the Schwarzschild wormhole, which would be present in the Schwarzschild metric describing an eternal black hole , but it was found that it would collapse too quickly for anything to cross from one end to the other. Wormholes that could be crossed in both directions, known as traversable wormholes , were thought to be possible only if exotic matter with negative energy density could be used to stabilize them. [ 15 ] Later, physicists reported that microscopic traversable wormholes may be possible and not require any exotic matter, instead requiring only electrically charged fermionic matter with small enough mass that it cannot collapse into a charged black hole . [ 16 ] [ 17 ] [ 18 ] While such wormholes, if possible, may be limited to transfers of information, humanly traversable wormholes may exist if reality can broadly be described by the Randall–Sundrum model 2 , a brane -based theory consistent with string theory . [ 19 ] [ 20 ]
Einstein–Rosen bridges (or ER bridges ), [ 21 ] named after Albert Einstein and Nathan Rosen , [ 22 ] are connections between areas of space that can be modeled as vacuum solutions to the Einstein field equations , and that are now understood to be intrinsic parts of the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": it should be possible to continue this path arbitrarily far into the particle's future or past for any possible trajectory of a free-falling particle (following a geodesic in the spacetime).
In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region that allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. [ 23 ] And just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see the light that fell in from the other one), and likewise particles from the interior white hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates .
In this spacetime, it is possible to come up with coordinate systems such that if a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') is picked and an "embedding diagram" drawn depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein–Rosen bridge". The Schwarzschild metric describes an idealized black hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's geometry, it removes the part of the diagram corresponding to the white hole interior region, along with the part of the diagram corresponding to the other universe. [ 24 ]
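A concrete example of such an embedding (in units with G = c = 1 and m the mass, a common simplifying convention): the constant-time equatorial slice of the Schwarzschild geometry embeds in flat three-dimensional space as Flamm's paraboloid,

$$z(r)={\sqrt{8m\,(r-2m)}},\qquad r\geq 2m,$$

whose two mirror-image branches $\pm z(r)$, joined at $r = 2m$, form precisely the tube-shaped bridge described above.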
The Einstein–Rosen bridge was discovered by Ludwig Flamm in 1916, [ 25 ] a few months after Schwarzschild published his solution, and was rediscovered by Albert Einstein and his colleague Nathan Rosen, who published their result in 1935. [ 22 ] [ 26 ] In 1962, John Archibald Wheeler and Robert W. Fuller published a paper [ 27 ] showing that this type of wormhole is unstable if it connects two parts of the same universe, and that it will pinch off too quickly for light (or any particle moving slower than light) that falls in from one exterior region to make it to the other exterior region.
According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan –Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor , as a dynamic variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum ( spin ) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity (e.g. a black hole). Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of the bridge. [ 28 ]
Although Schwarzschild wormholes are not traversable in both directions, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the "throat" of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy). [ 29 ]
Other non-traversable wormholes include Lorentzian wormholes (first proposed by John Archibald Wheeler in 1957), which form a spacetime foam in a general relativistic spacetime manifold depicted by a Lorentzian manifold , [ 30 ] and Euclidean wormholes (named after the Euclidean manifold , a structure of Riemannian manifolds). [ 31 ]
The Casimir effect shows that quantum field theory allows the energy density in certain regions of space to be negative relative to the ordinary matter vacuum energy , and it has been shown theoretically that quantum field theory allows states where energy can be arbitrarily negative at a given point. [ 32 ] Many physicists, such as Stephen Hawking , [ 33 ] Kip Thorne , [ 34 ] and others, [ 35 ] [ 36 ] [ 37 ] argued that such effects might make it possible to stabilize a traversable wormhole. [ 38 ] The only known natural process that is theoretically predicted to form a wormhole in the context of general relativity and quantum mechanics was put forth by Juan Maldacena and Leonard Susskind in their ER = EPR conjecture. The quantum foam hypothesis is sometimes used to suggest that tiny wormholes might appear and disappear spontaneously at the Planck scale , [ 39 ] : 494–496 [ 40 ] and stable versions of such wormholes have been suggested as dark matter candidates. [ 41 ] [ 42 ] It has also been proposed that, if a tiny wormhole held open by a negative mass cosmic string had appeared around the time of the Big Bang , it could have been inflated to macroscopic size by cosmic inflation . [ 43 ]
Lorentzian traversable wormholes would allow travel in both directions from one part of the universe to another part of that same universe very quickly or would allow travel from one universe to another. The possibility of traversable wormholes in general relativity was first demonstrated in a 1973 paper by Homer Ellis [ 44 ] and independently in a 1973 paper by K. A. Bronnikov. [ 45 ] Ellis analyzed the topology and the geodesics of the Ellis drainhole , showing it to be geodesically complete, horizonless, singularity-free, and fully traversable in both directions. The drainhole is a solution manifold of Einstein's field equations for a vacuum spacetime, modified by inclusion of a scalar field minimally coupled to the Ricci tensor with antiorthodox polarity (negative instead of positive). (Ellis specifically rejected referring to the scalar field as 'exotic' because of the antiorthodox coupling, finding arguments for doing so unpersuasive.) The solution depends on two parameters: m , which fixes the strength of its gravitational field, and n , which determines the curvature of its spatial cross sections. When m is set equal to 0, the drainhole's gravitational field vanishes. What is left is the Ellis wormhole , a nongravitating, purely geometric, traversable wormhole.
In 1988, Kip Thorne and his graduate student Mike Morris independently rediscovered the Ellis wormhole and argued for its use as a tool for teaching general relativity. [ 46 ] For this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter , is also known as a Morris–Thorne wormhole .
Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made where the traversing path does not pass through a region of exotic matter. In the pure Gauss–Bonnet gravity (a modification to general relativity involving extra spatial dimensions that is sometimes studied in the context of brane cosmology ), however, exotic matter is not needed in order for wormholes to exist—they can exist even with no matter. [ 47 ] A type held open by negative mass cosmic strings was put forth by Visser in collaboration with Cramer et al. , [ 43 ] in which it was proposed that such wormholes could have been naturally created in the early universe.
Wormholes connect two points in spacetime, which means that they would in principle allow travel in time , as well as in space. In 1988, Morris, Thorne and Yurtsever worked out how to convert a wormhole traversing space into one traversing time by accelerating one of its two mouths. [ 34 ] According to general relativity, however, it would not be possible to use a wormhole to travel back to a time earlier than when the wormhole was first converted into a time "machine"; until then, it could not have been noticed or used. [ 39 ] : 504
To see why exotic matter is required, consider an incoming light front traveling along geodesics, which then crosses the wormhole and re-expands on the other side. The expansion goes from negative to positive. As the wormhole neck is of finite size, we would not expect caustics to develop, at least within the vicinity of the neck. According to the optical Raychaudhuri theorem , this requires a violation of the averaged null energy condition . Quantum effects such as the Casimir effect cannot violate the averaged null energy condition in any neighborhood of space with zero curvature, [ 48 ] but calculations in semiclassical gravity suggest that quantum effects may be able to violate this condition in curved spacetime. [ 49 ] Although it was recently hoped that quantum effects could not violate an achronal version of the averaged null energy condition, [ 50 ] violations have nevertheless been found, [ 51 ] so it remains an open possibility that quantum effects might be used to support a wormhole.
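In symbols (a standard statement of the condition, with $T_{\mu\nu}$ the stress–energy tensor), the averaged null energy condition requires, along every complete null geodesic $\gamma$ with tangent vector $k^{\mu}$ and affine parameter $\lambda$,

$$\int_{\gamma}T_{\mu\nu}\,k^{\mu}k^{\nu}\,d\lambda \;\geq\;0,$$

so a traversable wormhole needs this inequality to fail along the null geodesics threading its throat.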
In some hypotheses where general relativity is modified , it is possible to have a wormhole that does not collapse without having to resort to exotic matter. For example, this is possible with R 2 gravity, a form of f ( R ) gravity . [ 52 ]
The impossibility of faster-than-light relative speed applies only locally. Wormholes might allow effective superluminal ( faster-than-light ) travel by ensuring that the speed of light is not exceeded locally at any time. While traveling through a wormhole, subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole whose length is shorter than the distance between them outside the wormhole, the time taken to traverse it could be less than the time it would take a light beam to make the journey if it took a path through the space outside the wormhole. A light beam traveling through the same wormhole would still beat the traveler.
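A worked illustration (all figures assumed purely for concreteness): suppose two points 4 light-years apart through ordinary space are connected by a wormhole whose internal length is 1 light-second. A traveler moving through the throat at $0.5c$ needs only

$$t={\frac{L}{v}}={\frac{1\ {\text{light-second}}}{0.5c}}=2\ {\text{seconds}},$$

while a light beam taking the external route needs 4 years; at no point does the traveler locally exceed $c$.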
If traversable wormholes exist, they might allow time travel . [ 34 ] A proposed time-travel machine using a traversable wormhole might hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system , and then brought back to the point of origin. Alternatively, one entrance of the wormhole could be moved to within the gravitational field of an object with higher gravity than the other entrance, and then returned to a position near the other entrance. For both these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become "younger", than the stationary end as seen by an external observer; time connects differently through the wormhole than outside it, however, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around. [ 39 ] : 502 This means that an observer entering the "younger" end would exit the "older" end at a time when it was the same age as the "younger" end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine; [ 39 ] : 503 it is more a path through time than a device that itself moves through time, and it would not allow the technology itself to be moved backward in time. [ 53 ] [ 54 ]
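The aging difference is ordinary special-relativistic time dilation. A mouth carried on a round trip at speed $v$ for an external coordinate time $\Delta t$ accumulates proper time

$$\Delta\tau =\Delta t\,{\sqrt{1-{\frac{v^{2}}{c^{2}}}}},$$

so (with numbers assumed for illustration) a mouth toured at $0.8c$ for 10 external years returns having aged only 6 years, leaving the two mouths connecting moments 4 years apart.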
According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as " exotic matter ". More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions , such as the null energy condition along with the weak, strong, and dominant energy conditions. It is known that quantum effects can lead to small measurable violations of the null energy condition, [ 14 ] : 101 and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. [ 55 ] Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small. [ 56 ]
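Of the conditions named above, the null energy condition is the weakest; it requires

$$T_{\mu\nu}\,k^{\mu}k^{\nu}\;\geq\;0$$

for every null vector $k^{\mu}$. The flaring-out shape of a wormhole throat forces this quantity to be negative along some null directions there, which is the precise sense in which the supporting matter must be "exotic".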
In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other, [ 57 ] or otherwise prevent information from passing through the wormhole. [ 58 ] Because of this, the two mouths could not be brought close enough for causality violation to take place. In a 1997 paper, however, Visser hypothesized that a complex " Roman ring " (named after Tom Roman) configuration of an N number of wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible. [ 59 ]
A possible resolution to the paradoxes resulting from wormhole-enabled time travel rests on the many-worlds interpretation of quantum mechanics .
In 1991 David Deutsch showed that quantum theory is fully consistent (in the sense that the so-called density matrix can be made free of discontinuities) in spacetimes with closed timelike curves. [ 60 ] Later, it was shown that such a model of closed timelike curves can have internal inconsistencies, as it will lead to strange phenomena like distinguishing non-orthogonal quantum states and distinguishing proper and improper mixtures. [ 61 ] [ 62 ] Accordingly, the destructive positive feedback loop of virtual particles circulating through a wormhole time machine, a result indicated by semi-classical calculations, is averted. A particle returning from the future does not return to its universe of origination but to a parallel universe. This suggests that a wormhole time machine with an exceedingly short time jump is a theoretical bridge between contemporaneous parallel universes. [ 15 ]
Because a wormhole time-machine introduces a type of nonlinearity into quantum theory, this sort of communication between parallel universes is consistent with Joseph Polchinski 's proposal of an Everett phone [ 63 ] (named after Hugh Everett ) in Steven Weinberg 's formulation of nonlinear quantum mechanics. [ 64 ]
The possibility of communication between parallel universes has been dubbed interuniversal travel . [ 65 ]
A wormhole can also be depicted in a Penrose diagram of a Schwarzschild black hole . In the Penrose diagram, an object traveling faster than light would cross the black hole and emerge from the other end into a different space, time, or universe; this would be an inter-universal wormhole.
Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole metric is the following: [ 66 ]
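In one standard form (with $c = 1$, $\ell$ a proper radial coordinate running over all real values, and $b_{0}$ the throat radius; these symbols are one conventional choice of notation):

$$ds^{2}=-dt^{2}+d\ell^{2}+\left(b_{0}^{2}+\ell^{2}\right)\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)$$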
first presented by Ellis (see Ellis wormhole ) as a special case of the Ellis drainhole .
One type of non-traversable wormhole metric is the Schwarzschild solution (see the first diagram):
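In Schwarzschild coordinates, with $M$ the mass and $G$ the gravitational constant:

$$ds^{2}=-\left(1-{\frac{2GM}{rc^{2}}}\right)c^{2}\,dt^{2}+{\frac{dr^{2}}{1-{\frac{2GM}{rc^{2}}}}}+r^{2}\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)$$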
The original Einstein–Rosen bridge was described in an article published in July 1935. [ 67 ] [ 68 ]
For the Schwarzschild spherically symmetric static solution
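in the sign convention used by Einstein and Rosen, the line element reads

$$ds^{2}=-{\frac{1}{1-{\frac{2m}{r}}}}\,dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)+\left(1-{\frac{2m}{r}}\right)dt^{2}$$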
where d s {\displaystyle ds} is the proper time and c = 1 {\displaystyle c=1} .
If one replaces r {\displaystyle r} with u {\displaystyle u} according to u 2 = r − 2 m {\displaystyle u^{2}=r-2m}
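the line element becomes

$$ds^{2}=-4\left(u^{2}+2m\right)du^{2}-\left(u^{2}+2m\right)^{2}\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)+{\frac{u^{2}}{u^{2}+2m}}\,dt^{2}$$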
The four-dimensional space is described mathematically by two congruent parts or "sheets", corresponding to u > 0 {\displaystyle u>0} and u < 0 {\displaystyle u<0} , which are joined by a hyperplane r = 2 m {\displaystyle r=2m} or u = 0 {\displaystyle u=0} in which g {\displaystyle g} vanishes. We call such a connection between the two sheets a "bridge".
For the combined field, gravity and electricity, Einstein and Rosen derived the following Schwarzschild static spherically symmetric solution
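$$ds^{2}=-{\frac{1}{1-{\frac{2m}{r}}-{\frac{\varepsilon^{2}}{2r^{2}}}}}\,dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)+\left(1-{\frac{2m}{r}}-{\frac{\varepsilon^{2}}{2r^{2}}}\right)dt^{2}$$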
where ε {\displaystyle \varepsilon } is the electric charge.
The field equations without denominators in the case when m = 0 {\displaystyle m=0} can be written
In order to eliminate singularities, if one replaces r {\displaystyle r} by u {\displaystyle u} according to the equation:
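$$u^{2}=r^{2}-{\frac{\varepsilon^{2}}{2}}$$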
and with m = 0 {\displaystyle m=0} one obtains [ 69 ] [ 70 ]
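$$\phi_{1}=\phi_{2}=\phi_{3}=0,\qquad \phi_{4}={\frac{\varepsilon}{\left(u^{2}+{\frac{\varepsilon^{2}}{2}}\right)^{1/2}}}$$

$$ds^{2}=-du^{2}-\left(u^{2}+{\frac{\varepsilon^{2}}{2}}\right)\left(d\theta^{2}+\sin^{2}\theta\,d\varphi^{2}\right)+{\frac{u^{2}}{u^{2}+{\frac{\varepsilon^{2}}{2}}}}\,dt^{2},$$

where $\phi_{\mu}$ is the electromagnetic potential.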
The solution is free from singularities for all finite points in the space of the two sheets.
Wormholes are a common element in science fiction because they allow interstellar, intergalactic, and sometimes even interuniversal travel within human lifetime scales. In fiction, wormholes have also served as a method for time travel .
|
https://en.wikipedia.org/wiki/Wormhole
|
Wormwood ( Ancient Greek : ἀψίνθιον ( apsinthion ) or ἄψινθος ( apsinthos )) is a prophesied star or angel [ 1 ] which appears in the Book of Revelation .
Wormwood, translated from αψινθος (Apsinthos) and לענה ( la'anah ), is historically believed to refer to a plant of the genus Artemisia , likely either A. absinthium or A. herba-alba , used metaphorically to mean something with a bitter taste. [ 2 ] The English rendering "wormwood" additionally refers to the dark green oil produced by the plant, which was used to kill intestinal worms. [ 2 ] In Revelation, it refers to the water being turned into wormwood, i.e. made bitter. [ 2 ]
The Biblical Hebrew word לענה ( la'anah ), translated into English as "wormwood", occurs nine times in the Hebrew Bible , seven times with the implication of bitterness and twice as a proper noun. [ 3 ]
The Greek word apsinthos , which is rendered with the English "wormwood", [ 4 ] is mentioned only once in the New Testament , in the Book of Revelation :
The third angel blew his trumpet, and a great star fell from heaven, blazing like a torch, and it fell on a third of the rivers and on the springs of water. The name of the star is Wormwood. A third of the waters became wormwood, and many died from the water, because it was made bitter. ( Rev 8:10–11 )
Certain commentators have held that this "great star" represents one of several important figures in political or ecclesiastical history: Matthew Henry mentions Augustulus , a 5th-century emperor of the Western Roman Empire , and Pelagius , deemed a heretic at the Council of Ephesus . [ 5 ]
Other Bible dictionaries and commentaries view the term as a reference to a celestial being; for example, A Dictionary of the Holy Bible states that "the star called Wormwood seems to denote a mighty prince, or power of the air, the instrument, in its fall". [ 6 ]
Various religious groups and figures, including Seventh-day Adventists and the theologians Matthew Henry and John Gill , [ 7 ] regard the verses of Revelation 8 as symbolic references to past events in human history. In the case of Wormwood, some historicist interpreters believe it represents the army of the Huns led by Attila , pointing to chronological consistencies between the timeline of prophecy they have accepted and the history of the Huns’ campaign in Europe . [ 8 ] Others point to the heretical priest Arius , the Roman Emperor Constantine , Origen , or the ascetic monk Pelagius , who denied the doctrine of original sin . [ 7 ]
The Swedenborgian New Church follows a spiritual interpretation of the star Wormwood based on other passages of scripture which mention gall and wormwood. The star signifies self-derived intelligence which departs from God, [ 9 ] thus it falls from heaven. For the star to make the waters of rivers and fountains bitter signifies to falsify spiritual truths, [ 10 ] as waters signify truths derived from the Word. [ 11 ] In general, the Book of Revelation is seen as a prophecy of the corruption of the Christian churches in the End Times , which is followed by the New Church signified by the New Jerusalem . [ 12 ]
A number of Bible scholars consider the term Wormwood to be a purely symbolic representation of the bitterness that will fill the earth during troubled times, noting that the plant for which Wormwood is named, Artemisia absinthium , or mugwort, Artemisia vulgaris , is a known biblical metaphor for things that are unpalatably bitter. [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Because the Ukrainian word for Artemisia vulgaris is Chornóbyl , [ 17 ] many [ 18 ] have interpreted the Chernobyl nuclear disaster of 1986 as a fulfillment of the prophecy in the Book of Revelation. [ 19 ] The verses referring to a "star falling down and turning the waters bitter" are interpreted as the radioactive fallout from the disaster poisoning the environment around Chernobyl , leaving it uninhabitable. [ 20 ]
In the town centre of Chornóbyl, there is the Wormwood Star Memorial, which depicts an angel blowing a trumpet, recalling the biblical prophecy. [ 21 ]
|
https://en.wikipedia.org/wiki/Wormwood_(Bible)
|
A Woronin body (named after the Russian botanist Mikhail Stepanovich Woronin [ 2 ] ) is an organelle found near the septa that divide hyphal compartments in filamentous Ascomycota . It is formed by budding from conventional peroxisomes. [ 3 ] Woronin bodies are present in the fungal class Pezizomycotina, which includes species such as Neurospora crassa , Aspergillus fumigatus , and various plant-pathogenic fungi, such as Zymoseptoria tritici . [ 4 ]
Transmission electron microscopy (TEM) reveals Woronin bodies as structures with a dense, proteinaceous core surrounded by a tightly bound unit membrane. The core is made of a protein called HEX-1, which self-assembles into a hexagonal crystal and forms a 3D protein lattice. Woronin bodies range in size from 100 nm to over 1 μm, consistently exceeding the diameter of the septal pore.
In most species, Woronin bodies are positioned on both sides of the septum and are connected to the pore via a mesh-like tether. Evidence for this tether was strengthened by laser tweezer experiments, which demonstrated that Woronin bodies, when displaced from the septum, return to their original position upon release. [ 5 ]
One established function of Woronin bodies is the plugging of the septal pores after hyphal wounding, which restricts the loss of cytoplasm to the sites of injury. [ 7 ] This plug is reinforced as new material is deposited over the septal plate and on the cytoplasmic side of the Woronin body, consolidating it into a permanent seal. The plugging process occurs rapidly within the mycelium near the site of significant damage. [ 8 ]
Woronin bodies can also regulate pore opening and closure, which aids in the control of hyphal heterogeneity . This dynamic function enables the fungus to adapt to changing environmental conditions while maintaining cellular homeostasis by selectively regulating the flow of materials between hyphal compartments. [ 9 ]
Although Woronin bodies were discovered over 140 years ago, understanding of their biogenesis (the process of formation and development) [ 10 ] remains incomplete.
Woronin body biogenesis occurs in the growing apical hyphal compartment, a process determined in part by apically biased hex-1 gene expression. [ 11 ] Woronin body formation starts near glyoxysomes and may occur through fission from them. [ 12 ]
Three genetic loci are specifically required for the biogenesis of Woronin bodies. These loci encode the core matrix protein HEX-1, its membrane receptor WSC (Woronin sorting complex) , and the cytoplasmic tether called Leashin. Woronin bodies are formed in the growing tip of the hyphae, where the hex gene is more actively expressed. Newly made HEX-1 is directed into the peroxisome matrix using a C-terminal peroxisomal-targeting signal , where it self-assembles into large protein clusters. [ 13 ] These clusters are wrapped by WSC, a membrane protein with four transmembrane regions, creating budding structures.
HEX-1 and WSC work together to shape and position Woronin bodies. HEX-1 attracts WSC to newly forming Woronin bodies, while WSC helps anchor them to the cell cortex. This anchoring depends on WSC's level in the membrane and its ability to recruit the cytoplasmic protein Leashin, which secures the Woronin bodies at the cortex. Leashin proteins in filamentous fungi are unusually large tethering proteins; Leashin has highly conserved N- and C-terminal regions and a nonconserved middle region of approximately 2,500 amino acids containing repetitive sequences that vary between species. [ 15 ] These sequences likely determine the distance between the Woronin body and the septal pore (100–200 nm) and provide elasticity to the tether.
These WSC, HEX-1, and Leashin proteins work together to ensure that each compartment of the fungal hypha contains tethered, immobilized Woronin bodies, ready to respond to cellular damage.
If WSC or Leashin is absent, Woronin body development stops, and HEX-1 proteins remain trapped in peroxisomes within the growing hyphal tips. Once the Woronin body is anchored to the cell cortex, PEX-11 , a conserved peroxisomal membrane protein, facilitates the separation of the Woronin body from its parent peroxisome. [ 16 ]
The hex-1 gene encodes HEX-1, the major protein first identified as the main component of Woronin bodies. Peroxisomal HEX-1 forms small, crystalline, membrane-bound, hexagonal protein granules that aggregate to maintain the structural integrity of Woronin bodies. [ 12 ]
The gene encoding the HEX-1 protein has conserved homologs in several Pezizomycotina species, such as Neurospora crassa . [ 17 ] The hex-1 gene features a conserved intron near the N-terminal coding region and is believed to have originated from a duplication of the ancestral gene encoding the eukaryotic initiation factor 5A (eIF-5A). Following this duplication, hex-1 evolved a distinct function by acquiring amino acids necessary for peroxisomal targeting and self-assembly. In several fungi, deletion of hex-1 leads to excessive hyphal bleeding after wounding, along with pleiotropic effects on phenotypes related to asexual reproduction, vegetative growth, and stress responses to osmotic and cell wall-perturbing agents. [ 8 ] In Neurospora crassa , two forms of the hex-1 gene are activated in more alkaline and low-phosphate environments. Studies show that the PacC protein in N. crassa upregulates hex-1 transcription at basic pH (~pH 7.8), influencing the formation of Woronin bodies. [ 18 ]
Confocal microscopy is a type of standard fluorescence microscopy that employs specific optical components to produce live, detailed visualization of Woronin bodies stained with fluorescent probes. [ 19 ] Dual fluorescence labeling is frequently used, with GFP -tagged RNase T1 marking the septa and DsRed2 -tagged HEX-1 protein labeling the Woronin bodies. Woronin bodies appear red in the final 3-D reconstruction image, while septal pores appear as dark regions surrounded by green fluorescence. Under hypotonic shock, red fluorescent Woronin bodies cluster at septal pores adjacent to lysed compartments, plugging them and preventing cytoplasmic leakage. [ 20 ] Confocal microscopy is significantly advantageous to scientific research on Woronin bodies as it allows researchers to watch the interaction of Woronin bodies and fungal septa under stress conditions in real time.
Scanning electron microscopy (SEM) employs a focused beam of electrons to scan the surface of a fungal specimen, producing images with much higher resolution than optical microscopy . [ 21 ]
SEM is regarded as a promising tool for analyzing fungal hyphae, allowing researchers to study cell physiology and how organelles respond to different conditions. For SEM experiments, 48-hour cultures of fungal species are fixed and dehydrated with ethanol. The samples are dried in a desiccator before being placed on a stub covered in carbon conductive tape, sputter-coated with gold, and examined under a scanning electron microscope. [ 22 ] The result of this SEM imaging is detailed 3D images of Woronin bodies.
Electron spectroscopic imaging (ESI) creates images from fungal tissue sections by filtering and transmitting electrons that are inelastically scattered. [ 23 ]
Small sections (30–50 nm thick) of fungal tissue are fixed and stained, then examined using a transmission electron microscope equipped with an electron energy loss spectroscopic imaging (EELS) system. Elemental distribution maps are created by measuring energy intensity at specific absorption edges. ESI enables high-resolution imaging of Woronin bodies and surrounding structures, providing detailed structural visualization and analysis of elemental composition. [ 24 ]
|
https://en.wikipedia.org/wiki/Woronin_body
|
The worship of heavenly bodies is the veneration of stars (individually or together as the night sky), the planets, or other astronomical objects as deities , or the association of deities with heavenly bodies. In anthropological literature these systems of practice may be referred to as astral cults .
The most notable instances of this are Sun gods and Moon gods in polytheistic systems worldwide. Also notable are the associations of the planets with deities in Sumerian religion , and hence in Babylonian and Greco - Roman religion, viz. Mercury , Venus , Mars , Jupiter , and Saturn . Gods, goddesses, and demons may also be considered personifications of astronomical phenomena such as lunar eclipses, planetary alignments, and apparent interactions of planetary bodies with stars. The Sabians of Harran , a poorly understood pagan religion that existed in Harran during the early Islamic period (7th–10th century), were known for their astral cult.
The related term astrolatry usually implies polytheism . Some Abrahamic religions prohibit astrolatry as idolatrous . Pole star worship was also banned by imperial decree in Heian period Japan.
The term astrolatry ends in the suffix -λάτρης, related to λάτρις latris 'worshipper' and λατρεύειν latreuein 'to worship', from λάτρον latron 'payment'.
Mesopotamia is home to the earliest astronomer and poet known by name: Enheduanna , Akkadian high priestess to the lunar deity Nanna/Sin and princess, daughter of Sargon the Great ( c. 2334 – c. 2279 BCE). She tracked the Moon from her chambers and wrote poems about her divine Moon. [ 1 ] Crescents depicting the Moon, as with Enheduanna's deity Nanna/Sin, have been found from the 3rd millennium BCE. [ 2 ]
Babylonian astronomy from early times associates stars with deities, but the identification of the heavens as the residence of an anthropomorphic pantheon, and later of monotheistic God and his retinue of angels, is a later development, gradually replacing the notion of the pantheon residing or convening on the summit of high mountains. Archibald Sayce (1913) argues for a parallelism of the "stellar theology" of Babylon and Egypt, both countries absorbing popular star-worship into the official pantheon of their respective state religions by identification of gods with stars or planets. [ 3 ]
The Chaldeans , who came to be seen as the prototypical astrologers and star-worshippers by the Greeks, migrated into Mesopotamia c. 940–860 BCE. [ 4 ] Astral religion does not appear to have been common in the Levant prior to the Iron Age , but became popular under Assyrian influence around the 7th century BCE. [ 5 ] The Chaldeans gained ascendancy, ruling Babylonia from 608 to 557 BCE. [ 6 ] The Hebrew Bible was substantially composed during this period (roughly corresponding to the period of the Babylonian captivity ).
Astral cults were probably an early feature of religion in ancient Egypt . [ 7 ] Evidence suggests that the observation and veneration of celestial bodies played a significant role in Egyptian religious practices, even before the development of a dominant solar theology. The early Egyptians associated celestial phenomena with divine forces, seeing the stars and planets as embodiments of gods who influenced both the heavens and the earth. [ 8 ] Direct evidence for astral cults, seen alongside the dominant solar theology which arose before the Fifth Dynasty , is found in the Pyramid Texts. [ 9 ] These texts, among the oldest religious writings in the world, contain hymns and spells that not only emphasize the importance of the Sun God Ra but also refer to stars and constellations as powerful deities that guide and protect the deceased Pharaoh in the afterlife. [ 10 ]
The growth of Osiris devotion led to stars being called "followers" of Osiris. [ 11 ] They recognized five planets as "stars that know no rest" , interpreted as gods who sailed across the sky in barques : Sebegu (perhaps a form of Set ), Venus ("the one who crosses"), Mars (" Horus of the horizon"), Jupiter ("Horus who limits the two lands"), and Saturn ("Horus bull of the heavens.") [ 11 ]
One of the most significant celestial deities in ancient Egyptian religion was the goddess Sopdet , identified with the star Sirius . [ 12 ] Sopdet's rising coincided with the annual flooding of the Nile, a crucial event that sustained Egyptian agriculture. The goddess was venerated as a harbinger of the inundation, marking the beginning of a new agricultural cycle and symbolizing fertility and renewal. This connection between Sopdet and the Nile flood underscores the profound link between celestial phenomena and earthly prosperity in ancient Egyptian culture. She was known to the Greeks as Sothis . The significance of Sirius in Egyptian religion is further highlighted by its association with the goddess Isis during later periods, particularly in the Ptolemaic era, where Isis was often depicted as the star itself. [ 13 ]
Sopdet is the consort of Sah , the personified constellation of Orion near Sirius. Their child, the planet Venus, [ 14 ] was the hawk god Sopdu , [ 15 ] "Lord of the East". As the bringer of the New Year and the Nile flood, she was associated with Osiris from an early date [ 15 ] and by the Ptolemaic period Sah and Sopdet almost solely appeared in forms conflated with Osiris [ 17 ] and Isis . [ 18 ] Additionally, the alignment of architectural structures, such as pyramids and temples, with astronomical events reveals the deliberate integration of cosmological concepts into Egypt's built environment. [ 19 ] For example, the Great Pyramid of Giza is aligned with the cardinal points, and its descending passage is aligned with the star Thuban in the constellation Draco, which was the pole star at the time. This alignment likely served both symbolic and practical purposes, connecting the Pharaoh's eternal journey with the stars. [ 20 ]
Among the various religious groups which in the 9th and 10th centuries CE came to be identified with the mysterious Sabians mentioned in the Quran (sometimes also spelled 'Sabaeans' or 'Sabeans', but not to be confused with the Sabaeans of South Arabia ), [ 21 ] at least two groups appear to have engaged in some kind of star worship.
By far the most famous of these two are the Sabians of Harran , adherents of a Hellenized Semitic pagan religion that had managed to survive during the early Islamic period in the Upper Mesopotamian city of Harran . [ 22 ] They were described by Syriac Christian heresiographers as star worshippers. [ 23 ] Most of the scholars and courtiers working for the Abbasid and Buyid dynasties in Baghdad during the ninth–eleventh centuries who were known as 'Sabians' were either members of this Harranian religion or descendants of such members, most notably the Harranian astronomers and mathematicians Thabit ibn Qurra (died 901) and al-Battani (died 929). [ 24 ] There has been some speculation on whether these Sabian families in Baghdad, on whom most of our information about the Harranian Sabians indirectly depends, may have practiced a different, more philosophically inspired variant of the original Harranian religion. [ 25 ] However, apart from the fact that it contains traces of Babylonian and Hellenistic religion , and that an important place was taken by planets (to whom ritual sacrifices were made), little is known about Harranian Sabianism. [ 26 ] They have been variously described by scholars as (neo)- Platonists , Hermeticists , or Gnostics , but there is no firm evidence for any of these identifications. [ 27 ] [ a ]
Apart from the Sabians of Harran, there were also various religious groups living in the Mesopotamian Marshes who were called the 'Sabians of the Marshes' (Arabic: Ṣābiʾat al-baṭāʾiḥ ). [ 28 ] Though this name has often been understood as a reference to the Mandaeans , there was in fact at least one other religious group living in the marshlands of Southern Iraq. [ 29 ] This group still held on to a pagan belief related to Babylonian religion , in which Mesopotamian gods had already been venerated in the form of planets and stars since antiquity. [ 30 ] According to Ibn al-Nadim , our only source for these star-worshipping 'Sabians of the Marshes', they "follow the doctrines of the ancient Aramaeans [ ʿalā maḏāhib an-Nabaṭ al-qadīm ] and venerate the stars". [ 31 ] However, there is also a large corpus of texts by Ibn Wahshiyya (died c. 930), most famously his Nabataean Agriculture , which describes at length the customs and beliefs — many of them going back to Mesopotamian models — of Iraqi Sabians living in the Sawād . [ 32 ]
Heaven worship is a Chinese religious belief that predates Taoism and Confucianism , but was later incorporated into both. Shangdi is the supreme unknowable god of Chinese folk religion . Over time, namely following the conquests of the Zhou dynasty who worshipped Tian (天 lit. "sky" ), Shangdi became synonymous with Tian, or Heaven. During the Zhou dynasty, Tian not only became synonymous with the physical sky but also embodied the divine will, representing the moral order of the universe. This evolution marked a shift from the earlier concept of Shangdi to a more abstract and universal principle that guided both natural and human affairs. [ 33 ] In the Han dynasty the worship of Heaven would be highly ritualistic and require that the emperor hold official sacrifices and worship at an altar of Heaven, the most famous of which is the Temple of Heaven in Beijing . [ 34 ] [ 35 ]
Heaven worship is closely linked with ancestor veneration and polytheism , as the ancestors and the gods are seen as a medium between Heaven and man. The Emperor of China , also known as the " Son of Heaven ", derived the Mandate of Heaven , and thus his legitimacy as ruler, from his supposed ability to commune with Heaven on behalf of his nation. [ 36 ] [ 37 ] This mandate was reinforced through celestial observations and rituals, as astrological phenomena were interpreted as omens reflecting the favor or disfavor of Heaven. The Emperor’s role was to perform the necessary rites to maintain harmony between Heaven and Earth, ensuring the prosperity of his reign. [ 38 ]
Star worship was widespread in Asia, especially in Mongolia [ 39 ] and northern China, and also spread to Korea. [ 40 ] According to Edward Schafer, star worship was already established during the Han dynasty (202 BCE–220 CE), with the Nine Imperial Gods becoming star lords. [ 41 ] The Big Dipper (Beidou) and the North Star (Polaris) were particularly significant in Chinese star worship. The Big Dipper was associated with cosmic order and governance, while the North Star was considered the throne of the celestial emperor. These stars played a crucial role in state rituals, where the Emperor’s ability to align these celestial forces with earthly governance was seen as essential to his legitimacy. [ 33 ] This star worship, along with indigenous shamanism and medical practice , formed one of the original bases of Taoism . [ 42 ] The Heavenly Sovereign was identified with the Big Dipper and the North Star . [ 43 ] Worship of Heaven in the southern suburb of the capital was initiated in 31 BCE and firmly established in the first century CE (Western Han). [ 44 ]
The Sanxing ( Chinese : 三星 ; lit. 'Three Stars') are the gods of the three stars or constellations considered essential in Chinese astrology and mythology: Jupiter, Ursa Major, and Sirius. Fu , Lu , and Shou ( traditional Chinese : 福 祿 壽 ; simplified Chinese : 福 禄 寿 ; pinyin : Fú Lù Shòu ; Cantonese Yale : Fūk Luhk Sauh ), or Cai , Zi and Shou ( 財子壽 ) are also the embodiments of Fortune ( Fu ), presiding over planet Jupiter, Prosperity (Lu), presiding over Ursa Major, and Longevity ( Shou ), presiding over Sirius. [ 45 ]
During the Tang dynasty , Chinese Buddhism adopted Taoist Big Dipper worship, borrowing various texts and rituals which were then modified to conform with Buddhist practices and doctrines. The integration of Big Dipper worship into Buddhist practices highlights the adaptability of star worship in China, where it was syncretized with various religious traditions over time. [ 33 ] The cult of the Big Dipper was eventually absorbed into the cults of various Buddhist divinities, Myōken being one of these. [ 46 ]
Star worship was also practiced in Japan. [ 47 ] [ 48 ] [ 49 ] Japanese star worship is largely based on Chinese cosmology. [ 50 ] According to Bernard Faure, "the cosmotheistic nature of esoteric Buddhism provided an easy bridge for cultural translation between Indian and Chinese cosmologies, on the one hand, and between Indian astrology and local Japanese folk beliefs about the stars, on the other". [ 50 ]
The cult of Myōken is thought to have been brought into Japan during the 7th century by immigrants ( toraijin ) from Goguryeo and Baekje . During the reign of Emperor Tenji (661–672), the toraijin were resettled in the easternmost parts of the country; as a result, Myōken worship spread throughout the eastern provinces. [ 51 ]
By the Heian period , pole star worship had become widespread enough that imperial decrees banned it for the reason that it involved "mingling of men and women", and thus caused ritual impurity. Pole star worship was also forbidden among the inhabitants of the capital and nearby areas when the imperial princess ( Saiō ) made her way to Ise to begin her service at the shrines. Nevertheless, the cult of the pole star left its mark on imperial rituals such as the emperor's enthronement and the worship of the imperial clan deity at Ise Shrine. [ 52 ] Worship of the pole star was also practiced in Onmyōdō , where it was deified as Chintaku Reifujin (鎮宅霊符神). [ 53 ]
Myōken worship was particularly prevalent among clans based in eastern Japan (the modern Kantō and Tōhoku regions), with the Kanmu Taira clan (Kanmu Heishi) and their offshoots such as the Chiba and the Sōma clans being among the deity's notable devotees. One legend claims that Taira no Masakado was a devotee of Myōken, who aided him in his military exploits. When Masakado grew proud and arrogant, the deity withdrew his favor and instead aided Masakado's uncle Yoshifumi , the ancestor of the Chiba clan. [ 54 ] Owing to his status as the Chiba clan's ujigami (guardian deity), temples and shrines dedicated to Myōken are particularly numerous in former Chiba territories. [ 55 ] Myōken worship is also prevalent in many Nichiren-shū Buddhist temples due to the clan's connections with the school's Nakayama lineage. [ 56 ]
Celestial objects hold a significant place within Indigenous American cultures. [ 57 ] [ 58 ] [ failed verification ] From the Lakota in North America to the Inca in South America, the celestial realm was integrated into daily life. Stars served as navigation aids , temporal markers, and spiritual conduits , illustrating their practical and sacred importance. [ 57 ] [ 59 ]
Heavenly bodies held spiritual wisdom. The Pleiades , revered in various cultures, symbolized diverse concepts such as agricultural cycles and ancestral spirits . [ 60 ] In North America, star worship was practiced by the Lakota people [ 61 ] [ 62 ] [ 63 ] and the Wichita people . [ 64 ] The Inca civilization engaged in star worship, [ 65 ] and associated constellations with deities and forces, while the Milky Way represented a bridge between earthly and divine realms. [ 59 ]
Indigenous American cultures encapsulate a holistic worldview that acknowledges the interplay of humanity, nature, and the cosmos. Oral traditions transmitted cosmic stories, infusing mythologies, songs, and ceremonies with cosmic significance. [ 60 ] These narratives emphasized the belief that the celestial realm offered insights into origins and purpose. [ 57 ]
The deities associated with celestial objects were largely evaluated negatively by early Christians. Gnosticism drew heavily on Greek and Persian dualism, especially on Platonism . In accordance with Platonism, Gnostics regarded the world of ideas as good while considering the material world to be inherently evil. [ 66 ] The star-deities demonized in late Persian religion were likewise treated as demons, each of the seven observable planets being identified with an Archon (demonic ruler). [ 66 ] These demons rule over the earth and the realm of planets, representing different desires and passions. [ 67 ] According to Origen , the Ophites depicted the world as surrounded by the demonic Leviathan . [ 67 ]
Like in Christianity, the term daimons was used for demons and refers to both the Archons as well as to their demonic assistants. Judas Iscariot is, in the Gospel of Judas , portrayed as the thirteenth daimon for betraying Jesus and a supporter of the Archons. [ 68 ]
Examples of Gnostic portrayals of demons can be found in the Apocryphon of John in which they are said to have helped to construct the physical Adam [ 69 ] and in Pistis Sophia which states they are ruled over by Hekate and punish corrupt souls. [ 70 ]
The Hebrew Bible contains repeated reference to astrolatry. Deuteronomy 4:19, 17:3 contains a stern warning against worshipping the Sun, Moon, stars or any of the heavenly host . Relapse into worshipping the host of heaven, i.e. the stars, is said to have been the cause of the fall of the kingdom of Israel in II Kings 17:16. King Josiah in 621 BCE is recorded as having abolished all kinds of idolatry in Judah, but astrolatry was continued in private (Zeph. 1:5; Jer. 8:2, 19:13). Ezekiel (8:16) describes sun-worship practised in the court of the temple of Jerusalem, and Jeremiah (44:17) says that even after the destruction of the temple, women in particular insisted on continuing their worship of the "Queen of Heaven" . [ 71 ]
Augustine of Hippo criticized sun- and star-worship in De Vera Religione (37.68) and De civitate Dei (5.1–8). Pope Leo the Great also denounced astrolatry and the cult of Sol Invictus , which he contrasted with the Christian nativity. [ citation needed ]
Jesus Christ holds a significant place in the context of Christian astrology. His birth is associated with an astronomical event, symbolized by the star of the king of the Jews. This event played a role in heralding his arrival and was considered a sign of his divine nature. The belief in Jesus as the Messiah , the anointed one, drew upon astrological concepts and symbolism. The incorporation of cosmological elements into the narrative of Jesus' life and divinity contributed to the development and interpretation of Christian theology . [ 72 ] [ verification needed ]
In The Book of Giants , one of the seven canonical treatises in Manichaeism also known from Jewish intertestamental literature , the Grigori ( egrēgoroi ) beget giant half-demon offspring with human women. In the Middle Persian version of the Book of Giants they are referred to as kʾw , while in the Coptic Kephalaia as gigas . [ 73 ] In accordance with some interpretations of Genesis 6:1–4 , [ 74 ] the giant offspring became the ancient tyrannic rulers over mankind, until overthrown by the angels of punishment . Nonetheless, these demons are still active in the microcosm , such as Āz and Āwarzōg . [ 73 ]
Views on stars ( abāxtarān ) are thus mixed. On one hand, they are regarded as light particles of the world soul fixed in the sky. On the other hand, stars are identified with powers hindering the soul from leaving the material world. [ 73 ] The Third Messenger (Jesus) is said to have chained up demons in the sky. Their offspring, the nephilim ( nĕf īlīm ) or asrestar ( āsarēštārān ), Ašqalūn and Nebrō'ēl in particular, play instrumental roles in the creation of Adam and Eve. [ 73 ] According to Manichaeism, the watchers, known as angels in Jewish lore, are not considered angels, but demons. [ 73 ]
Worship of heavenly bodies is in Islamic tradition strongly associated with the Sabians, who allegedly held that God created the planets as the rulers of this world and that they thus deserve worship. [ 75 ] While planetary worship was linked to devils ( shayāṭīn ), [ 76 ] Abu Ma'shar al-Balkhi reported that the planets are considered angelic spirits at the service of God. [ 77 ] He refuted the notion that astrology is based on the interference of demons or "guesswork" and established the study of the planets as a form of natural science. [ 78 ] By building a naturalistic connection between the planets and their earthly influence, Abu Ma'shar saved astrology from accusations of devil-worship. [ 79 ] Such ideas were not universally shared; for example, Mashallah ibn Athari denied any physical or magical influence on the world. [ 80 ]
Abu Ma‘shar further describes the planets as sentient bodies, endowed with spirit ( rūḥ ) rather than mechanical entities. [ 81 ] However, they would remain in complete obedience to God and act only with God's permission. [ 81 ] Astrology was usually considered through the lens of Hellenistic philosophy such as Neo-Platonism and Aristotelianism . As the spiritual powers allegedly emanating from the planets are explained to derive from the Anima mundi , the writings clearly distanced them from demonic entities such as jinn and devils. [ 82 ]
At a later stage, the planetary spirits were identified with angels and demons. The idea of seven demon-kings developed under the influence of Hellenistic astrological sources. [ 83 ] In the Kitāb al-Bulhān , higher spirits ( rūḥāiya ulia ) are depicted as angels and lower spirits ( rūḥāiya sufula ) as demons. [ 84 ] However, invocation of such entities would work only by permission of God. Permission granted by the angels is supposed to be required in order to receive command over the demon or jinn. A common structure associates the spiritual entities with the days of the week, naming each entity in Arabic alongside its Hebrew equivalent.
|
https://en.wikipedia.org/wiki/Worship_of_heavenly_bodies
|
The worst-case analysis regulation [ 1 ] was promulgated in 1979 by the US Council on Environmental Quality (CEQ). The regulation is one of many implementing the National Environmental Policy Act of 1969 [ 2 ] and it sets out the formal procedure a US government agency must follow when confronted with gaps in relevant information or scientific uncertainty about significant adverse effects on the environment from a major federal action . [ 3 ]
The regulation requires an agency to make known when it is confronted with gaps in relevant information or scientific uncertainty. [ 1 ] The agency then must determine whether the missing information is essential to a reasoned choice among the alternatives. When the missing information is material to the decision, an agency ordinarily must obtain the information and include it in an environmental impact statement (EIS). [ 1 ] If the means for obtaining the missing information are beyond the state of the art, or if the costs of obtaining it are exorbitant, the agency must instead prepare a worst-case analysis . [ 1 ] In this analysis the agency must weigh the need for the action against the risks of proceeding in the face of uncertainty. The agency also must indicate the probability or improbability of the worst case's occurrence. [ 4 ]
This article incorporates public domain material from websites or documents of the United States government .
|
https://en.wikipedia.org/wiki/Worst-case_analysis
|
Worst case analysis was, from 1978 until 1986, a doctrine under 40 C.F.R. § 1502.22 which mandated that an environmental impact statement include such an analysis: [ 1 ]
When an agency is evaluating significant adverse effects on the human environment in an environmental impact statement and there are gaps in relevant information or scientific uncertainty, the agency shall always make clear that such information is lacking or that uncertainty exists.
(a) If the information relevant to adverse impacts is essential to a reasoned choice among alternatives and is not known and the overall costs of obtaining it are not exorbitant, the agency shall include the information in the environmental impact statement.
(b) If (1) the information relevant to adverse impacts is essential to a reasoned choice among alternatives and is not known and the overall costs of obtaining it are exorbitant or (2) the information is relevant to adverse impacts, is important to the decision and the means to obtain it are not known (e.g., the means for obtaining it are beyond the state of the art) the agency shall weigh the need for the action against the risk and severity of possible adverse impacts were the action to proceed in the face of uncertainty. If the agency proceeds, it shall include a worst case analysis and an indication of the probability or improbability of its occurrence.
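Read together, paragraphs (a) and (b) form a small decision procedure. A minimal sketch of that logic (hypothetical function and type names; not any official codification of the rule):

```python
from enum import Enum, auto

class EISRequirement(Enum):
    OBTAIN_AND_INCLUDE = auto()    # paragraph (a)
    WORST_CASE_ANALYSIS = auto()   # paragraph (b)(1) or (b)(2)
    DISCLOSE_GAP_ONLY = auto()     # neither trigger applies

def section_1502_22(essential: bool, relevant_and_important: bool,
                    costs_exorbitant: bool, means_unknown: bool) -> EISRequirement:
    """Decision logic of the former 40 C.F.R. 1502.22, as quoted above.

    In every case the agency must state that information is lacking or that
    uncertainty exists; the return value is only the additional analysis
    the environmental impact statement must contain.
    """
    if essential and not costs_exorbitant and not means_unknown:
        return EISRequirement.OBTAIN_AND_INCLUDE             # paragraph (a)
    if (essential and costs_exorbitant) or (relevant_and_important and means_unknown):
        return EISRequirement.WORST_CASE_ANALYSIS            # paragraph (b)
    return EISRequirement.DISCLOSE_GAP_ONLY

# Example: essential information whose cost of collection is exorbitant.
print(section_1502_22(True, True, True, False))  # EISRequirement.WORST_CASE_ANALYSIS
```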
It led to a 1989 Supreme Court decision, Robertson v. Methow Valley Citizens Council , [ 2 ] written by Justice John Paul Stevens, after a Ninth Circuit panel (Judges Goodwin, Ferguson, and Stephens) reversed [ 3 ] the Federal District Court of Oregon's ruling that the Regional Forester did not violate any laws when he issued a special use permit for a ski resort development in a roadless area in Okanogan National Forest in Washington state . [ 4 ]
The Rehnquist Court concluded
that NEPA does not require a fully developed plan detailing what steps will be taken to mitigate adverse environmental impacts and does not require a "worst case analysis." In addition, we hold that the Forest Service has adopted a permissible interpretation of its own regulations.
|
https://en.wikipedia.org/wiki/Worst_case_analysis
|
Wound contracture is a process that may occur during wound healing when an excess of wound contraction , a normal healing process, leads to physical deformity characterized by skin constriction and functional limitations. [ 1 ] [ 2 ] [ 3 ] Wound contractures may be seen after serious burns and may occur on the palms, the soles, and the anterior thorax. [ 2 ] For example, scars that prevent joints from extending or scars that cause an ectropion are considered wound contractures. [ 1 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Wound_contracture
|
Wound healing refers to a living organism's replacement of destroyed or damaged tissue by newly produced tissue. [ 1 ]
In undamaged skin, the epidermis (surface, epithelial layer) and dermis (deeper, connective layer) form a protective barrier against the external environment. When the barrier is broken, a regulated sequence of biochemical events is set into motion to repair the damage. [ 1 ] [ 2 ] This process is divided into predictable phases: blood clotting ( hemostasis ), inflammation , tissue growth ( cell proliferation ), and tissue remodeling (maturation and cell differentiation ). Blood clotting may be considered to be part of the inflammation stage instead of a separate stage. [ 3 ]
The wound-healing process is not only complex but fragile, and it is susceptible to interruption or failure leading to the formation of non-healing chronic wounds . Factors that contribute to non-healing chronic wounds are diabetes , venous or arterial disease , infection, and metabolic deficiencies of old age. [ 4 ]
Wound care encourages and speeds wound healing via cleaning and protection from reinjury or infection. Depending on each patient's needs, it can range from the simplest first aid to entire nursing specialties such as wound, ostomy, and continence nursing and burn center care.
Timing is important to wound healing. Critically, the timing of wound re-epithelialization can decide the outcome of the healing. [ 11 ] If the epithelization of tissue over a denuded area is slow, a scar will form over many weeks or months; [ 12 ] [ 13 ] if the epithelization of a wounded area is fast, the healing will result in regeneration. [ 13 ]
Wound healing is classically divided into hemostasis , inflammation, proliferation, and remodeling. Although a useful construct, this model involves considerable overlap among the individual phases. A complementary model has recently been described [ 1 ] in which the many elements of wound healing are more clearly delineated. The importance of this new model becomes more apparent through its utility in the fields of regenerative medicine and tissue engineering (see Research and development section below). In this construct, the process of wound healing is divided into two major phases: the early phase and the cellular phase : [ 1 ]
The early phase, which begins immediately following skin injury, involves cascading molecular and cellular events leading to hemostasis and formation of an early, makeshift extracellular matrix that provides structural staging for cellular attachment and subsequent cellular proliferation.
The cellular phase involves several types of cells working together to mount an inflammatory response, synthesize granulation tissue, and restore the epithelial layer. [ 1 ] Subdivisions of the cellular phase include inflammation, re-epithelialization, fibroblast and myofibroblast activity, angiogenesis, and matrix deposition and remodelling, as described in the sections below. [ 1 ]
Just before the inflammatory phase is initiated, the clotting cascade occurs in order to achieve hemostasis , or the stopping of blood loss by way of a fibrin clot. Thereafter, various soluble factors (including chemokines and cytokines) are released to attract cells that phagocytise debris, bacteria, and damaged tissue, in addition to releasing signaling molecules that initiate the proliferative phase of wound healing.
When tissue is first wounded, blood comes in contact with collagen , triggering blood platelets to begin secreting inflammatory factors. [ 15 ] Platelets also express sticky glycoproteins on their cell membranes that allow them to aggregate , forming a mass. [ 7 ]
Fibrin and fibronectin cross-link together and form a plug that traps proteins and particles and prevents further blood loss. [ 16 ] This fibrin-fibronectin plug is also the main structural support for the wound until collagen is deposited. [ 7 ] Migratory cells use this plug as a matrix to crawl across, and platelets adhere to it and secrete factors. [ 7 ] The clot is eventually lysed and replaced with granulation tissue and then later with collagen.
Platelets, the cells present in the highest numbers shortly after a wound occurs, release mediators into the blood, including cytokines and growth factors . [ 15 ] Growth factors stimulate cells to speed their rate of division. Platelets release other proinflammatory factors like serotonin , bradykinin , prostaglandins , prostacyclins , thromboxane , and histamine , [ 3 ] which serve several purposes, including increasing cell proliferation and migration to the area and causing blood vessels to become dilated and porous . In many ways, extravasated platelets in trauma perform a similar function to tissue macrophages and mast cells exposed to microbial molecular signatures in infection: they become activated, and secrete molecular mediators – vasoactive amines, eicosanoids , and cytokines – that initiate the inflammatory process.
Immediately after a blood vessel is breached, ruptured cell membranes release inflammatory factors like thromboxanes and prostaglandins that cause the vessel to spasm to prevent blood loss and to collect inflammatory cells and factors in the area. [ 3 ] This vasoconstriction lasts five to ten minutes and is followed by vasodilation , a widening of blood vessels, which peaks at about 20 minutes post-wounding. [ 3 ] Vasodilation is the result of factors released by platelets and other cells. The main factor involved in causing vasodilation is histamine . [ 3 ] [ 15 ] Histamine also causes blood vessels to become porous, allowing the tissue to become edematous because proteins from the bloodstream leak into the extravascular space, which increases its osmolar load and draws water into the area. [ 3 ] Increased porosity of blood vessels also facilitates the entry of inflammatory cells like leukocytes into the wound site from the bloodstream . [ 17 ] [ 18 ]
Within an hour of wounding, polymorphonuclear neutrophils (PMNs) arrive at the wound site and become the predominant cells in the wound for the first two days after the injury occurs, with especially high numbers on the second day. [ 19 ] They are attracted to the site by fibronectin, growth factors, and substances such as kinins . Neutrophils phagocytise debris and kill bacteria by releasing free radicals in what is called a respiratory burst . [ 20 ] [ 21 ] They also cleanse the wound by secreting proteases that break down damaged tissue. Functional neutrophils at the wound site only have life-spans of around two days, so they usually undergo apoptosis once they have completed their tasks and are engulfed and degraded by macrophages . [ 22 ]
Other leukocytes to enter the area include helper T cells , which secrete cytokines to cause more T cells to divide and to increase inflammation and enhance vasodilation and vessel permeability. [ 17 ] [ 23 ] T cells also increase the activity of macrophages. [ 17 ]
One of the roles of macrophages is to phagocytize other expended phagocytes , [ 24 ] bacteria and damaged tissue, [ 19 ] and they also debride damaged tissue by releasing proteases. [ 25 ]
Macrophages function in regeneration [ 26 ] [ 27 ] and are essential for wound healing. [ 19 ] They are stimulated by the low oxygen content of their surroundings to produce factors that induce and speed angiogenesis [ 20 ] and they also stimulate cells that reepithelialize the wound, create granulation tissue, and lay down a new extracellular matrix . [ 28 ] By secreting these factors, macrophages contribute to pushing the wound healing process into the next phase. They replace PMNs as the predominant cells in the wound by two days after injury. [ 24 ]
The spleen contains half the body's monocytes in reserve ready to be deployed to injured tissue. [ 29 ] [ 30 ] Attracted to the wound site by growth factors released by platelets and other cells, monocytes from the bloodstream enter the area through blood vessel walls. [ 31 ] Numbers of monocytes in the wound peak one to one and a half days after the injury occurs. [ 23 ] Once they are in the wound site, monocytes mature into macrophages. Macrophages also secrete a number of factors such as growth factors and other cytokines, especially during the third and fourth post-wounding days. These factors attract cells involved in the proliferation stage of healing to the area. [ 15 ]
In wound healing that results in incomplete repair, scar contraction occurs, bringing varying gradations of structural imperfection, deformity, and problems with flexibility. [ 32 ] Macrophages may restrain the contraction phase. [ 27 ] Scientists have reported that removing the macrophages from a salamander resulted in failure of a typical regeneration response (limb regeneration), instead bringing on a repair (scarring) response. [ 33 ] [ 34 ]
As inflammation dies down, fewer inflammatory factors are secreted, existing ones are broken down, and numbers of neutrophils and macrophages are reduced at the wound site. [ 19 ] These changes indicate that the inflammatory phase is ending and the proliferative phase is underway. [ 19 ] In vitro evidence, obtained using the dermal equivalent model, suggests that the presence of macrophages actually delays wound contraction and thus the disappearance of macrophages from the wound may be essential for subsequent phases to occur. [ 27 ]
Because inflammation plays roles in fighting infection, clearing debris and inducing the proliferation phase, it is a necessary part of healing. However, inflammation can lead to tissue damage if it lasts too long. [ 7 ] Thus the reduction of inflammation is frequently a goal in therapeutic settings. Inflammation lasts as long as there is debris in the wound. Thus, if the individual's immune system is compromised and is unable to clear the debris from the wound and/or if excessive detritus, devitalized tissue, or microbial biofilm is present in the wound, these factors may cause a prolonged inflammatory phase and prevent the wound from properly commencing the proliferation phase of healing. This can lead to a chronic wound .
About two or three days after the wound occurs, fibroblasts begin to enter the wound site, marking the onset of the proliferative phase even before the inflammatory phase has ended. [ 35 ] As in the other phases of wound healing, steps in the proliferative phase do not occur in a series but rather partially overlap in time.
Also called neovascularization, the process of angiogenesis occurs concurrently with fibroblast proliferation when endothelial cells migrate to the area of the wound. [ 36 ] Because the activity of fibroblasts and epithelial cells requires oxygen and nutrients, angiogenesis is imperative for other stages in wound healing, like epidermal and fibroblast migration. The tissue in which angiogenesis has occurred typically looks red (is erythematous ) due to the presence of capillaries . [ 36 ]
Angiogenesis occurs in overlapping phases in response to inflammation:
Stem cells of endothelial cells , originating from parts of uninjured blood vessels, develop pseudopodia and push through the ECM into the wound site to establish new blood vessels. [ 20 ]
Endothelial cells are attracted to the wound area by fibronectin found on the fibrin scab and chemotactically by angiogenic factors released by other cells, [ 37 ] e.g. from macrophages and platelets when in a low-oxygen environment. Endothelial growth and proliferation is also directly stimulated by hypoxia , and presence of lactic acid in the wound. [ 35 ] For example, hypoxia stimulates the endothelial transcription factor , hypoxia-inducible factor (HIF) to transactivate a set of proliferative genes including vascular endothelial growth factor (VEGF) and glucose transporter 1 (GLUT1).
To migrate, endothelial cells need collagenases and plasminogen activator to degrade the clot and part of the ECM. [ 3 ] [ 19 ] Zinc -dependent metalloproteinases digest basement membrane and ECM to allow cell migration, proliferation and angiogenesis. [ 38 ]
When macrophages and other growth factor-producing cells are no longer in a hypoxic, lactic acid-filled environment, they stop producing angiogenic factors. [ 20 ] Thus, when tissue is adequately perfused , migration and proliferation of endothelial cells is reduced. Eventually blood vessels that are no longer needed die by apoptosis . [ 37 ]
Simultaneously with angiogenesis, fibroblasts begin accumulating in the wound site. Fibroblasts begin entering the wound site two to five days after wounding as the inflammatory phase is ending, and their numbers peak at one to two weeks post-wounding. [ 19 ] By the end of the first week, fibroblasts are the main cells in the wound. [ 3 ] Fibroplasia ends two to four weeks after wounding.
As a model, the mechanism of fibroplasia may be conceptualised as a process analogous to angiogenesis (see above), except that the cell type involved is the fibroblast rather than the endothelial cell. Initially there is a latent phase during which the wound undergoes plasma exudation, inflammatory decontamination and debridement. Oedema increases the wound's histologic accessibility for later fibroplastic migration. Second, as inflammation nears completion, macrophages and mast cells release fibroblast growth and chemotactic factors to activate fibroblasts from adjacent tissue. Fibroblasts at this stage loosen themselves from surrounding cells and ECM. Phagocytes further release proteases that break down the ECM of neighbouring tissue, freeing the activated fibroblasts to proliferate and migrate towards the wound. The difference between vascular sprouting and fibroblast proliferation is that the former is enhanced by hypoxia, whilst the latter is inhibited by it. The deposited fibroblastic connective tissue matures by secreting ECM into the extracellular space, forming granulation tissue (see below). Lastly, collagen is deposited into the ECM.
In the first two or three days after injury, fibroblasts mainly migrate and proliferate, while later, they are the main cells that lay down the collagen matrix in the wound site. [ 3 ] Origins of these fibroblasts are thought to be from the adjacent uninjured cutaneous tissue (although new evidence suggests that some are derived from blood-borne, circulating adult stem cells/precursors). [ 39 ] Initially fibroblasts utilize the fibrin cross-linking fibers (well-formed by the end of the inflammatory phase) to migrate across the wound, subsequently adhering to fibronectin. [ 37 ] Fibroblasts then deposit ground substance into the wound bed, and later collagen, which they can adhere to for migration. [ 15 ]
Granulation tissue functions as rudimentary tissue and begins to appear in the wound during the inflammatory phase, two to five days post-wounding, continuing to grow until the wound bed is covered. Granulation tissue consists of new blood vessels, fibroblasts, inflammatory cells, endothelial cells, myofibroblasts, and the components of a new, provisional extracellular matrix (ECM). The provisional ECM is different in composition from the ECM in normal tissue and its components originate from fibroblasts. [ 28 ] Such components include fibronectin, collagen, glycosaminoglycans , elastin , glycoproteins and proteoglycans . [ 37 ] Its main components are fibronectin and hyaluronan , which create a very hydrated matrix and facilitate cell migration. [ 31 ] Later this provisional matrix is replaced with an ECM that more closely resembles that found in non-injured tissue.
Growth factors ( PDGF , TGF-β ) and fibronectin encourage proliferation, migration to the wound bed, and production of ECM molecules by fibroblasts. Fibroblasts also secrete growth factors that attract epithelial cells to the wound site. Hypoxia also contributes to fibroblast proliferation and secretion of growth factors, though too little oxygen will inhibit their growth and deposition of ECM components, and can lead to excessive, fibrotic scarring .
One of fibroblasts' most important duties is the production of collagen . [ 36 ]
Collagen deposition is important because it increases the strength of the wound; before it is laid down, the only thing holding the wound closed is the fibrin-fibronectin clot, which does not provide much resistance to traumatic injury . [ 20 ] Also, cells involved in inflammation, angiogenesis, and connective tissue construction attach to, grow and differentiate on the collagen matrix laid down by fibroblasts. [ 40 ]
Type III collagen and fibronectin generally begin to be produced in appreciable amounts between approximately 10 hours [ 41 ] and 3 days after wounding, [ 37 ] depending mainly on wound size. Their deposition peaks at one to three weeks. [ 28 ] They are the predominant tensile substances until the later phase of maturation, in which they are replaced by the stronger type I collagen .
Even as fibroblasts are producing new collagen, collagenases and other factors degrade it. Shortly after wounding, synthesis exceeds degradation so collagen levels in the wound rise, but later production and degradation become equal so there is no net collagen gain. [ 20 ] This homeostasis signals the onset of the later maturation phase. Granulation gradually ceases and fibroblasts decrease in number in the wound once their work is done. [ 42 ] At the end of the granulation phase, fibroblasts begin to commit apoptosis, converting granulation tissue from an environment rich in cells to one that consists mainly of collagen. [ 3 ]
The formation of granulation tissue into an open wound allows the reepithelialization phase to take place, as epithelial cells migrate across the new tissue to form a barrier between the wound and the environment. [ 37 ] Basal keratinocytes from the wound edges and dermal appendages such as hair follicles , sweat glands and sebaceous (oil) glands are the main cells responsible for the epithelialization phase of wound healing. [ 42 ] They advance in a sheet across the wound site and proliferate at its edges, ceasing movement when they meet in the middle. In healing that results in a scar, sweat glands, hair follicles [ 43 ] [ 44 ] and nerves do not form. With the lack of hair follicles, nerves and sweat glands, the wound, and the resulting healing scar, provide a challenge to the body with regards to temperature control. [ 44 ]
Keratinocytes migrate without first proliferating. [ 45 ] Migration can begin as early as a few hours after wounding. However, epithelial cells require viable tissue to migrate across, so if the wound is deep it must first be filled with granulation tissue. [ 46 ] Thus the time of onset of migration is variable and may occur about one day after wounding. [ 47 ] Cells on the wound margins proliferate on the second and third day post-wounding in order to provide more cells for migration. [ 28 ]
If the basement membrane is not breached, epithelial cells are replaced within three days by division and upward migration of cells in the stratum basale , in the same fashion as in uninjured skin. [ 37 ] However, if the basement membrane is destroyed at the wound site, reepithelization must occur from the wound margins and from skin appendages, such as hair follicles and sweat and oil glands, that extend into the dermis and are lined with viable keratinocytes. [ 28 ] If the wound is very deep, skin appendages may also be destroyed, and migration can occur only from the wound edges. [ 46 ]
Migration of keratinocytes over the wound site is stimulated by lack of contact inhibition and by chemicals such as nitric oxide . [ 48 ] Before they begin to migrate, cells must dissolve their desmosomes and hemidesmosomes , which normally anchor the cells by intermediate filaments in their cytoskeleton to other cells and to the ECM. [ 23 ] Transmembrane receptor proteins called integrins , which are made of glycoproteins and normally anchor the cell to the basement membrane by its cytoskeleton , are released from the cell's intermediate filaments and relocate to actin filaments to serve as attachments to the ECM for pseudopodia during migration. [ 23 ] Thus keratinocytes detach from the basement membrane and are able to enter the wound bed. [ 35 ]
Before they begin migrating, keratinocytes change shape, becoming longer and flatter and extending cellular processes like lamellipodia and wide processes that look like ruffles. [ 31 ] Actin filaments and pseudopodia form. [ 35 ] During migration, integrins on the pseudopod attach to the ECM, and the actin filaments in the projection pull the cell along. [ 23 ] The interaction with molecules in the ECM through integrins further promotes the formation of actin filaments, lamellipodia, and filopodia . [ 23 ]
Epithelial cells climb over one another in order to migrate. [ 42 ] This growing sheet of epithelial cells is often called the epithelial tongue. [ 45 ] The first cells to attach to the basement membrane form the stratum basale . These basal cells continue to migrate across the wound bed, and epithelial cells above them slide along as well. [ 45 ] The more quickly this migration occurs, the less of a scar there will be. [ 49 ]
Fibrin , collagen, and fibronectin in the ECM may further signal cells to divide and migrate. Like fibroblasts, migrating keratinocytes use the fibronectin cross-linked with fibrin that was deposited in inflammation as an attachment site to crawl across. [ 25 ] [ 31 ] [ 42 ]
As keratinocytes migrate, they move over granulation tissue but stay underneath the scab, thereby separating the scab from the underlying tissue. [ 42 ] [ 47 ] Epithelial cells have the ability to phagocytize debris such as dead tissue and bacterial matter that would otherwise obstruct their path. Because they must dissolve any scab that forms, keratinocyte migration is best enhanced by a moist environment, since a dry one leads to formation of a bigger, tougher scab. [ 25 ] [ 37 ] [ 42 ] [ 50 ] To make their way along the tissue, keratinocytes must dissolve the clot, debris, and parts of the ECM in order to get through. [ 47 ] [ 51 ] They secrete plasminogen activator , which activates plasminogen , turning it into plasmin to dissolve the scab. Cells can only migrate over living tissue, [ 42 ] so they must excrete collagenases and proteases like matrix metalloproteinases (MMPs) to dissolve damaged parts of the ECM in their way, particularly at the front of the migrating sheet. [ 47 ] Keratinocytes also dissolve the basement membrane, using instead the new ECM laid down by fibroblasts to crawl across. [ 23 ]
As keratinocytes continue migrating, new epithelial cells must be formed at the wound edges to replace them and to provide more cells for the advancing sheet. [ 25 ] Proliferation behind migrating keratinocytes normally begins a few days after wounding [ 46 ] and occurs at a rate that is 17 times higher in this stage of epithelialization than in normal tissues. [ 25 ] Until the entire wound area is resurfaced, the only epithelial cells to proliferate are at the wound edges. [ 45 ]
Growth factors, stimulated by integrins and MMPs, cause cells to proliferate at the wound edges. Keratinocytes themselves also produce and secrete factors, including growth factors and basement membrane proteins, which aid both in epithelialization and in other phases of healing. [ 52 ] Growth factors are also important for the innate immune defense of skin wounds by stimulation of the production of antimicrobial peptides and neutrophil chemotactic cytokines in keratinocytes.
Keratinocytes continue migrating across the wound bed until cells from either side meet in the middle, at which point contact inhibition causes them to stop migrating. [ 31 ] When they have finished migrating, the keratinocytes secrete the proteins that form the new basement membrane. [ 31 ] Cells reverse the morphological changes they underwent in order to begin migrating; they reestablish desmosomes and hemidesmosomes and become anchored once again to the basement membrane. [ 23 ] Basal cells begin to divide and differentiate in the same manner as they do in normal skin to reestablish the strata found in reepithelialized skin. [ 31 ]
Contraction is a key phase of wound healing with repair. If contraction continues for too long, it can lead to disfigurement and loss of function. [ 32 ] Thus there is a great interest in understanding the biology of wound contraction, which can be modelled in vitro using the collagen gel contraction assay or the dermal equivalent model. [ 27 ] [ 53 ]
Contraction commences approximately a week after wounding, when fibroblasts have differentiated into myofibroblasts . [ 54 ] In full thickness wounds, contraction peaks at 5 to 15 days post wounding. [ 37 ] Contraction can last for several weeks [ 46 ] and continues even after the wound is completely reepithelialized. [ 3 ] A large wound can become 40 to 80% smaller after contraction. [ 31 ] [ 42 ] Wounds can contract at a speed of up to 0.75 mm per day, depending on how loose the tissue in the wounded area is. [ 37 ] Contraction usually does not occur symmetrically; rather most wounds have an 'axis of contraction' which allows for greater organization and alignment of cells with collagen. [ 54 ]
At first, contraction occurs without myofibroblast involvement. [ 55 ] Later, fibroblasts, stimulated by growth factors, differentiate into myofibroblasts. Myofibroblasts, which are similar to smooth muscle cells, are responsible for contraction. [ 55 ] Myofibroblasts contain the same kind of actin as that found in smooth muscle cells. [ 32 ]
Myofibroblasts are attracted by fibronectin and growth factors and they move along fibronectin linked to fibrin in the provisional ECM in order to reach the wound edges. [ 25 ] They form connections to the ECM at the wound edges, and they attach to each other and to the wound edges by desmosomes . Also, at an adhesion called the fibronexus , actin in the myofibroblast is linked across the cell membrane to molecules in the extracellular matrix like fibronectin and collagen. [ 55 ] Myofibroblasts have many such adhesions, which allow them to pull the ECM when they contract, reducing the wound size. [ 32 ] In this part of contraction, closure occurs more quickly than in the first, myofibroblast-independent part. [ 55 ]
As the actin in myofibroblasts contracts, the wound edges are pulled together. Fibroblasts lay down collagen to reinforce the wound as myofibroblasts contract. [ 3 ] The contraction stage in proliferation ends as myofibroblasts stop contracting and commit apoptosis. [ 32 ] The breakdown of the provisional matrix leads to a decrease in hyaluronic acid and an increase in chondroitin sulfate, which gradually triggers fibroblasts to stop migrating and proliferating. [ 19 ] These events signal the onset of the maturation stage of wound healing.
When the levels of collagen production and degradation equalize, the maturation phase of tissue repair is said to have begun. [ 20 ] During maturation, type III collagen , which is prevalent during proliferation, is replaced by type I collagen. [ 17 ] Originally disorganized collagen fibers are rearranged, cross-linked, and aligned along tension lines . [ 31 ] The onset of the maturation phase may vary extensively, depending on the size of the wound and whether it was initially closed or left open, [ 28 ] ranging from approximately three days [ 41 ] to three weeks. [ 56 ] The maturation phase can last for a year or longer, similarly depending on wound type. [ 28 ]
As the phase progresses, the tensile strength of the wound increases. [ 28 ] Collagen will reach approximately 20% of its tensile strength after three weeks, increasing to 80% after 12 months. The maximum scar strength is 80% of that of unwounded skin. [ 57 ] Since activity at the wound site is reduced, the scar loses its red appearance as blood vessels that are no longer needed are removed by apoptosis . [ 20 ]
The phases of wound healing normally progress in a predictable, timely manner; if they do not, healing may progress inappropriately to either a chronic wound [ 7 ] such as a venous ulcer or pathological scarring such as a keloid scar . [ 58 ] [ 59 ]
Many factors controlling the efficacy, speed, and manner of wound healing fall under two types: local and systemic factors. [ 2 ]
In the 2000s, the first mathematical models of the healing process appeared, based on simplified assumptions and on systems of differential equations solved through MATLAB . The models show that the "rate of the healing process" appears to be "highly influenced by the activity and size of the injury itself as well as the activity of the healing agent." [ 69 ]
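As an illustration of the kind of system such papers solve (a hypothetical sketch, not the published model; all parameter names and rate constants are invented), wound size can be coupled to the decaying activity of a healing agent with two ordinary differential equations:

```python
import numpy as np
from scipy.integrate import solve_ivp

def healing_rhs(t, y, k_heal=0.8, k_decay=0.1):
    """Toy wound-healing model: y = [A, c].

    A(t): wound area (fraction of initial size), shrinking in proportion
          to the activity of the healing agent;
    c(t): healing-agent activity, decaying exponentially.
    """
    A, c = y
    return [-k_heal * c * A, -k_decay * c]

sol = solve_ivp(healing_rhs, t_span=(0.0, 30.0), y0=[1.0, 1.0])
print(f"wound area after 30 days: {sol.y[0, -1]:.3f} of initial")
```

Even in this toy form, the healing rate depends on both the size of the injury (through A) and the activity of the healing agent (through c), mirroring the quoted conclusion.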
Up until about 2000, the classic paradigm of wound healing, involving stem cells restricted to organ-specific lineages, had never been seriously challenged. Since then, the notion of adult stem cells having cellular plasticity, or the ability to differentiate into non-lineage cells, has emerged as an alternative explanation. [ 1 ] To be more specific, hematopoietic progenitor cells (which give rise to mature cells in the blood) may have the ability to de-differentiate back into hematopoietic stem cells and/or transdifferentiate into non-lineage cells, such as fibroblasts. [ 39 ]
Multipotent adult stem cells have the capacity to be self-renewing and give rise to different cell types. Stem cells give rise to progenitor cells, which are cells that are not self-renewing, but can generate several types of cells. The extent of stem cell involvement in cutaneous (skin) wound healing is complex and not fully understood. [ citation needed ] Stem cell injection leads to wound healing primarily through stimulation of angiogenesis. [ 70 ]
It is thought that the epidermis and dermis are reconstituted by mitotically active stem cells that reside at the apex of rete ridges (basal stem cells or BSC), the bulge of hair follicles (hair follicular stem cell or HFSC), and the papillary dermis (dermal stem cells). [ 1 ] Moreover, bone marrow may also contain stem cells that play a major role in cutaneous wound healing. [ 39 ]
In rare circumstances, such as extensive cutaneous injury, self-renewal subpopulations in the bone marrow are induced to participate in the healing process, whereby they give rise to collagen-secreting cells that seem to play a role during wound repair. [ 1 ] These two self-renewal subpopulations are (1) bone marrow-derived mesenchymal stem cells (MSC) and (2) hematopoietic stem cells (HSC). Bone marrow also harbors a progenitor subpopulation ( endothelial progenitor cells or EPC) that, in the same type of setting, are mobilized to aid in the reconstruction of blood vessels. [ 39 ] Moreover, it is thought that extensive injury to skin also promotes the early trafficking of a unique subclass of leukocytes (circulating fibrocytes ) to the injured region, where they perform various functions related to wound healing. [ 1 ]
An injury is an interruption of morphology and/or functionality of a given tissue. After injury, structural tissue heals with incomplete or complete regeneration. [ 71 ] [ 72 ] Tissue without an interruption to the morphology almost always completely regenerates. An example of complete regeneration without an interruption of the morphology is non-injured tissue, such as skin. [ 73 ] Non-injured skin has a continued replacement and regeneration of cells which always results in complete regeneration. [ 73 ]
There is a subtle distinction between 'repair' and 'regeneration'. [ 1 ] [ 71 ] [ 72 ] Repair means incomplete regeneration . [ 71 ] Repair, or incomplete regeneration, refers to the physiologic adaptation of an organ after injury in an effort to re-establish continuity without regard to exact replacement of lost or damaged tissue. [ 71 ] True tissue regeneration, or complete regeneration , [ 72 ] refers to the replacement of lost or damaged tissue with an 'exact' copy, such that both morphology and functionality are completely restored. [ 72 ] Although mammals can regenerate completely and spontaneously after injury, they usually do not. An example of a tissue regenerating completely after an interruption of morphology is the endometrium ; after breakdown during the menstrual cycle, the endometrium heals with complete regeneration. [ 73 ]
In some instances, after tissue breakdown, such as in skin, a regeneration closer to complete regeneration may be induced by the use of biodegradable ( collagen - glycosaminoglycan ) scaffolds. These scaffolds are structurally analogous to the extracellular matrix (ECM) found in normal, uninjured dermis. [ 74 ] Fundamental conditions required for tissue regeneration often oppose conditions that favor efficient wound repair, including inhibition of (1) platelet activation, (2) inflammatory response, and (3) wound contraction. [ 1 ] In addition to providing support for fibroblast and endothelial cell attachment, biodegradable scaffolds inhibit wound contraction, thereby allowing the healing process to proceed towards a more regenerative, less scarring pathway. Pharmaceutical agents that may be able to turn off myofibroblast differentiation have been investigated. [ 75 ]
A new way of thinking derives from the notion that heparan sulfates are key players in tissue homeostasis, the process by which a tissue replaces dead cells with identical cells. In wound areas, tissue homeostasis is lost as the heparan sulfates are degraded, preventing the replacement of dead cells by identical cells. Heparan sulfate analogues cannot be degraded by the known heparanases and glycanases and bind to the free heparan-sulfate binding sites on the ECM, thereby preserving normal tissue homeostasis and preventing scarring. [ 76 ] [ 77 ] [ 78 ]
Repair or regeneration also relates to hypoxia-inducible factor 1-alpha (HIF-1a). In normal circumstances, HIF-1a is degraded after injury by prolyl hydroxylases (PHDs). Scientists have found that simple up-regulation of HIF-1a via PHD inhibitors regenerates lost or damaged tissue in mammals that otherwise mount a repair response, and that continued down-regulation of HIF-1a results in healing with scarring in mammals that had previously shown a regenerative response to the loss of tissue. Regulating HIF-1a can thus turn the key process of mammalian regeneration off or on. [ 79 ] [ 80 ]
Scarless wound healing is a concept based on the healing or repair of the skin (or other tissues and organs) after injury with the aim of healing with subjectively and relatively less scar tissue than normally expected. Scarless healing is sometimes confused with the concept of scar-free healing , which is wound healing that results in absolutely no scar ( free of scarring); the two are distinct concepts.
The reverse of scarless wound healing is scarification (wound healing that scars more). Historically, certain cultures have considered scarification attractive; [ 81 ] however, this is generally not the case in modern Western society, in which many patients turn to plastic surgery clinics with unrealistic expectations. Depending on scar type, treatment may be invasive (intralesional steroid injections, surgery) and/or conservative ( compression therapy , topical silicone gel , brachytherapy , photodynamic therapy ). [ 82 ] Clinical judgment is necessary to successfully balance the potential benefits of the various treatments available against the likelihood of a poor response and possible complications resulting from these treatments. Many of these treatments may have only a placebo effect , and the evidence base for the use of many current treatments is poor. [ 83 ]
Since the 1960s, comprehension of the basic biologic processes involved in wound repair and tissue regeneration has expanded due to advances in cellular and molecular biology . [ 84 ] Currently, the principal goals in wound management are to achieve rapid wound closure with a functional tissue that has minimal aesthetic scarring. [ 85 ] However, the ultimate goal of wound healing biology is to induce a more perfect reconstruction of the wound area. Scarless wound healing occurs only in mammalian fetal tissues, [ 86 ] and complete regeneration is limited to lower vertebrates, such as salamanders , and to invertebrates . [ 87 ] In adult humans, injured tissue is repaired by collagen deposition, collagen remodelling and eventual scar formation, whereas fetal wound healing is believed to be more of a regenerative process with minimal or no scar formation. [ 86 ] Fetal wound healing can therefore be used to provide an accessible mammalian model of an optimal healing response in adult human tissues. Clues as to how this might be achieved come from studies of wound healing in embryos, where repair is fast and efficient and results in essentially perfect regeneration of any lost tissue.
The term scarless wound healing has a long history. [ 88 ] [ 89 ] [ 90 ] The antiquated concept of scarless healing was raised in print in the early 20th century and appeared in a paper published in the London Lancet . The process described involved cutting at a surgical slant to the skin surface, rather than at a right angle to it, and was described in various newspapers. [ 88 ] [ 89 ] [ 90 ]
After inflammation, restoration of normal tissue integrity and function is preserved by feedback interactions between diverse cell types mediated by adhesion molecules and secreted cytokines. Disruption of normal feedback mechanisms in cancer threatens tissue integrity and enables a malignant tumor to escape the immune system. [ 91 ] [ 92 ] An example of the importance of the wound healing response within tumors is illustrated in work by Howard Chang and colleagues at Stanford University studying breast cancers. [ 8 ]
Preliminary results are promising for the short and long-term use of oral collagen supplements for wound healing and skin aging. Oral collagen supplements also increase skin elasticity, hydration, and dermal collagen density. Collagen supplementation is generally safe with no reported adverse events. Further studies are needed to elucidate medical use in skin barrier diseases such as atopic dermatitis and to determine optimal dosing regimens. [ 93 ]
Modern wound dressing to aid in wound repair has undergone considerable research and development in recent years. Scientists aim to develop wound dressings that maintain a moist wound environment, absorb excess exudate, protect against infection, and can be removed without damaging the newly formed tissue. [ 94 ]
Cotton gauze dressings have been the standard of care, despite their dry properties, which can cause them to adhere to wound surfaces and cause discomfort upon removal. Recent research has set out to improve cotton gauze dressings to bring them closer in line with modern wound dressing properties, by coating cotton gauze wound dressing with a chitosan / Ag / ZnO nanocomposite . These updated dressings provide increased water absorbency and improved antibacterial efficacy . [ 94 ]
Dirt or dust on the surface of the wound, bacteria, tissue that has died, and fluid from the wound may be cleaned. The evidence supporting the most effective technique is not clear and there is insufficient evidence to conclude whether cleaning wounds is beneficial for promoting healing or whether wound cleaning solutions ( polyhexamethylene biguanide , aqueous hydrogen peroxide , etc.) are better than sterile water or saline solutions to help venous leg ulcers heal. [ 95 ] It is uncertain whether the choice of cleaning solution or method of application makes any difference to venous leg ulcer healing. [ 95 ]
Considerable effort has been devoted to understanding the physical relationships governing wound healing and subsequent scarring, with mathematical models and simulations developed to elucidate these relationships. [ 96 ] The growth of tissue around the wound site is a result of the migration of cells and collagen deposition by these cells. The alignment of collagen describes the degree of scarring; basket-weave orientation of collagen is characteristic of normal skin, whereas aligned collagen fibers lead to significant scarring. [ 97 ] It has been shown that the growth of tissue and extent of scar formation can be controlled by modulating the stress at a wound site. [ 98 ]
The growth of tissue can be simulated using the aforementioned relationships from a biochemical and biomechanical point of view. The biologically active chemicals that play an important role in wound healing are modeled with Fickian diffusion to generate concentration profiles. The balance equation for open systems when modeling wound healing incorporates mass growth due to cell migration and proliferation. Here the following equation is used:
\[ D_t \rho_0 = \operatorname{Div}(\mathbf{R}) + R_0 , \]
where \( \rho_0 \) represents the mass density, \( \mathbf{R} \) a mass flux (from cell migration), and \( R_0 \) a mass source (from cell proliferation, division, or enlargement). [ 99 ] Relationships like these can be incorporated into agent-based models , where sensitivity to single parameters such as initial collagen alignment, cytokine properties, and cell proliferation rates can be tested. [ 100 ]
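A minimal numerical sketch of the Fickian-diffusion ingredient of such models (parameter values are arbitrary, chosen only to keep the explicit scheme stable) generates the concentration profile of a chemical released at the wound site:

```python
import numpy as np

def fickian_profile(n_steps=2000, n_cells=101, length=1.0, D=1e-3, dt=0.01):
    """Explicit finite differences for du/dt = D * d2u/dx2 on [0, length].

    A unit amount of a wound-site chemical is released at the centre and
    diffuses outward; the returned array is its concentration profile.
    """
    dx = length / (n_cells - 1)
    r = D * dt / dx**2
    assert r < 0.5, "explicit scheme unstable for this choice of D, dt, dx"
    u = np.zeros(n_cells)
    u[n_cells // 2] = 1.0    # point release at the wound centre
    for _ in range(n_steps):
        # Interior update; the end points stay at zero (absorbing boundaries).
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u
```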
Successful wound healing is dependent on various cell types, molecular mediators and structural elements. [ 101 ]
Primary intention is the healing of a clean wound without tissue loss. [ 101 ] In this process, wound edges are brought together, so that they are adjacent to each other (re-approximated). Wound closure is performed with sutures (stitches), staples, or adhesive tape or glue.
Primary intention can only be implemented when the wound is precise and there is minimal disruption to the local tissue and the epithelial basement membrane, e.g. surgical incisions. [ 102 ]
This process is faster than healing by secondary intention. [ 101 ] There is also less scarring associated with primary intention, as there are no large tissue losses to be filled with granulation tissue, though some granulation tissue will form. [ 101 ]
Delayed primary closure (healing by tertiary intention):
If the wound edges are not reapproximated immediately, delayed primary wound healing transpires. This type of healing may be desired in the case of contaminated wounds. By the fourth day, phagocytosis of contaminated tissues is well underway, and the processes of epithelization, collagen deposition, and maturation are occurring. Foreign materials are walled off by macrophages that may metamorphose into epithelioid cells, which are encircled by mononuclear leukocytes, forming granulomas. Usually the wound is closed surgically at this juncture, and if the "cleansing" of the wound is incomplete, chronic inflammation can ensue, resulting in prominent scarring.
The main growth factors involved in wound healing include epidermal growth factor (EGF), transforming growth factor-β ( TGF-β ), fibroblast growth factor (FGF), platelet-derived growth factor ( PDGF ), vascular endothelial growth factor ( VEGF ), and keratinocyte growth factor (KGF).
The major complications are many and include deficient scar formation (leading to wound dehiscence or ulceration), excessive scar formation ( hypertrophic scar , keloid ), and contracture .
Other complications can include infection and Marjolin's ulcer .
Advancements in the clinical understanding of wounds and their pathophysiology have commanded significant biomedical innovations in the treatment of acute, chronic, and other types of wounds. Many biologics, skin substitutes , biomembranes and scaffolds have been developed to facilitate wound healing through various mechanisms. [ 108 ] This includes a number of products under the trade names such as Epicel , Laserskin , Transcyte, Dermagraft, AlloDerm/Strattice, Biobrane, Integra, Apligraf, OrCel, GraftJacket and PermaDerm. [ 109 ]
|
https://en.wikipedia.org/wiki/Wound_healing
|
A wound healing assay is a laboratory technique used to study cell migration and cell–cell interaction . This is also called a scratch assay because it is done by making a scratch on a cell monolayer and capturing images at regular intervals by time lapse microscopy. [ 1 ] [ 2 ]
It is specifically a 2D cell migration approach to semi-quantitatively measure the migration of a sheet of cells. [ 3 ] The scratch can be made through various approaches, such as mechanical, thermal, or chemical damage. [ 4 ] The purpose of the scratch is to produce a cell-free area in the hope of inducing cells to migrate into and close the gap. The scratch test is ideal for cell types that migrate in collective epithelial sheets and is not generally useful for non- adherent cells. [ 3 ] Specifically, this assay is not ideal for chemotaxis studies. [ 5 ]
This laboratory technique has various advantages. First, these tests are relatively cheap and straightforward and allow for real-time measurements. [ 3 ] Additionally, the testing conditions can be easily adjusted to fit different experimental objectives. [ 2 ] The approach also produces a strong directional migratory response, making the data simple to quantify. [ 2 ]
One limitation of this assay is that there can be inconsistencies in the depth and size of the scratch. When the scratch is made manually, it is susceptible to 'ragged' edge boundaries, which make the data more difficult to analyze. [ 6 ] The procedure can also physically damage the cells adjacent to the wound and create inaccurate wound-size areas. [ 3 ] This limitation is slowly becoming less of an issue with automated technologies. Electric cell impedance sensing assays avoid the damage to cells and to the underlying extracellular matrix that can occur with manual scratching approaches. [ 3 ] Additionally, the Woundmaker makes fast and uniform wounds across various well-plate formats (96 or 384 wells) and allows for high-throughput screening, which is a major advantage for many medical research studies. [ 3 ]
Despite the new technology that is increasing this assay's accuracy and efficacy, there are still confounding factors that can skew the results, such as cell "crowding", cell–cell adhesion effects and matrix effects. [ 5 ] The accumulation of cells at the edge of the scratch also remains a problem, as it makes the cell densities uneven. [ 7 ]
Some skeptics argue that the scratch created for the assay is not an accurate representation of an actual wound. [ 2 ] This is very likely true, as real wounds are inherently more complex, but the assay does allow collective cell movements to be studied under defined experimental conditions and so provides some insight. [ 2 ]
Despite being described as straightforward, the technique has been criticized for inconsistencies in its application from one experiment to another. [ 8 ] [ 2 ]
Without advanced technology, a standard approach to carrying out this assay is to grow cells to a confluent monolayer, create the scratch with a sterile pipette tip, wash away detached cells and debris, and then image the gap at regular intervals as it closes. [ 5 ]
Two basic metrics are the rate of cell migration , the speed at which the wound edges advance (for example, the change in gap width per unit time), [ 2 ] and the relative wound density , the cell density within the original wound area relative to the density outside it. [ 2 ]
These are the basic metrics that can be measured with this assay; a computational sketch follows below. However, efforts are still being made to improve the interpretation of the assay. Three different measurements have been evaluated: direct rate average, regression rate average, and average distance regression rate. [ 9 ] Direct rate average and average distance regression rate were more resistant to outliers, whereas regression rate average was more sensitive to them. [ 9 ]
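A minimal sketch of how the two basic metrics might be computed from time-lapse measurements (function names and example values are hypothetical; it assumes the common convention that each of the two wound edges advances half the change in gap width):

```python
import numpy as np

def migration_rate(widths_um, times_h):
    """Per-edge migration rate (um/h) from time-lapse gap widths."""
    slope, _ = np.polyfit(times_h, widths_um, 1)  # least-squares fit of width vs. time
    return -slope / 2.0                           # gap shrinks, so the slope is negative

def relative_wound_density(density_in_wound, density_outside):
    """Cell density inside the original wound region, relative to outside (%)."""
    return 100.0 * density_in_wound / density_outside

# Example: a gap closing from 500 um to 200 um over 12 hours.
print(migration_rate([500, 400, 300, 200], [0, 4, 8, 12]))  # 12.5 um/h per edge
print(relative_wound_density(45.0, 90.0))                   # 50.0 %
```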
The scratch assay is a useful tool for studying cell migration, since this mechanism is involved in many different physiological processes. [ 7 ] Cell migration plays a major role in re-epithelialization of the skin, so studying it can advance the understanding of non-healing wounds. [ 7 ] Cell migration is also fundamental in developmental processes such as gastrulation and organogenesis, [ 9 ] and it is involved in immune responses and cancer metastases. [ 7 ]
With technological advances, this assay is becoming especially beneficial in cancer biology. A study was performed to better understand the role that claudin-7, a member of a family of tight junction proteins, plays in the migration of a type of human lung cancer cell. [ 10 ] The slower migration rate of claudin-7 knockdown cells supports the idea that this protein is important in cell migration and in a cell's ability to metastasize. [ 10 ] Cells undergo sheet migration in response to a multitude of signals and mechanisms when closing a wound, which is believed to be similar to the underlying mechanisms involved in metastasis. [ 4 ]
Using label-free live cell imaging devices based on quantitative phase imaging , it has been shown that cell motility is highly correlated to wound healing and transwell assay results. The advantage of this fully automated approach is that quantification of cell motility does not require specific sample preparation, allowing cell proliferation to be simultaneously quantified as well. [ 11 ] [ 12 ]
|
https://en.wikipedia.org/wiki/Wound_healing_assay
|
Wouthuysen–Field coupling , or the Wouthuysen–Field effect , is a mechanism that couples the excitation temperature , also called the spin temperature, of neutral hydrogen to Lyman-alpha radiation. This coupling plays a role in producing a difference in the temperature of neutral hydrogen and the cosmic microwave background at the end of the Dark Ages and the beginning of the epoch of reionization . It is named for Siegfried Adolf Wouthuysen and George B. Field .
The period after recombination occurred and before stars and galaxies formed is known as the "dark ages". During this time, the majority of matter in the universe was neutral hydrogen. This hydrogen has yet to be observed, but there are experiments underway to detect the hydrogen line produced during this era. The hydrogen line is produced when an electron in a neutral hydrogen atom is excited to the triplet spin state, or de-excited as the electron and proton spins go to the singlet state. The energy difference between these two hyperfine states is 5.9 × 10⁻⁶ electron volts , with a wavelength of 21 centimeters. At times when neutral hydrogen is in thermodynamic equilibrium with the photons in the cosmic microwave background (CMB), the neutral hydrogen and CMB are said to be "coupled", and the hydrogen line is not observable. It is only when the two temperatures differ, i.e. are decoupled, that the hydrogen line can be observed. [ 1 ]
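As a quick check of the numbers just quoted (a sketch; the small deviation from the exact 1420.4 MHz line frequency reflects the rounded energy value):

```python
# Planck relation: a 5.9e-6 eV hyperfine splitting corresponds to ~21 cm.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electron volt

E = 5.9e-6 * eV
print(f"wavelength: {h * c / E * 100:.1f} cm")  # ~21.0 cm
print(f"frequency:  {E / h / 1e6:.0f} MHz")     # ~1427 MHz
```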
Wouthuysen–Field coupling is a mechanism that couples the spin temperature of neutral hydrogen to Lyman-alpha radiation, which decouples the neutral hydrogen from the CMB. The energy of the Lyman-alpha transition is 10.2 eV—this energy is approximately two million times greater than the hydrogen line, and is produced by astrophysical sources such as stars and quasars . Neutral hydrogen absorbs Lyman-alpha photons, and then re-emits Lyman-alpha photons, and may enter either of the two spin states. This process causes a redistribution of the electrons between the hyperfine states, decoupling the neutral hydrogen from the CMB photons. [ 2 ]
The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transition. That this mechanism might affect the population of the hyperfine states in neutral hydrogen was first suggested in 1952 by S. A. Wouthuysen, and then further developed by George B. Field in 1959. [ 2 ] [ 3 ] [ 4 ]
The effect of Lyman-alpha photons on the hyperfine levels depends upon the relative intensities of the red and blue wings of the Lyman-alpha line, reflecting the very small difference in energy of the hyperfine states relative to the Lyman-alpha transition. At a cosmological redshift of z ∼ 6, Wouthuysen–Field coupling is expected to raise the spin temperature of neutral hydrogen above that of the CMB, and produce emission in the hydrogen line. [ 5 ]
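In the reionization literature this competition between processes is commonly summarized by a weighted-mean relation for the spin temperature. A standard form (notation assumed here: \( T_\gamma \) the CMB temperature, \( T_K \) the gas kinetic temperature, \( T_\alpha \) the color temperature of the Lyman-alpha radiation field, with \( x_c \) and \( x_\alpha \) the collisional and Wouthuysen–Field coupling coefficients) is

\[ T_S^{-1} = \frac{T_\gamma^{-1} + x_\alpha T_\alpha^{-1} + x_c T_K^{-1}}{1 + x_\alpha + x_c} . \]

When \( x_\alpha \) is large, the spin temperature is driven toward the Lyman-alpha color temperature, and hence toward the gas temperature, decoupling the hydrogen from the CMB.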
A hydrogen line signal produced by Wouthuysen–Field coupling has not yet been observed. Multiple experiments and radio observatories aim to detect the neutral hydrogen line from the Dark Ages and the epoch of reionization, the time at which Wouthuysen–Field coupling is expected to be important. These include the Giant Metrewave Radio Telescope , the Precision Array for Probing the Epoch of Reionization , the Murchison Widefield Array , and the Large Aperture Experiment to Detect the Dark Ages . [ 6 ] Proposed observatories that aim to detect evidence of Wouthuysen–Field coupling include the Square Kilometer Array and the Dark Ages Radio Explorer .
|
https://en.wikipedia.org/wiki/Wouthuysen–Field_coupling
|
Measurement of wow and flutter is carried out on audio tape machines, cassette recorders and players, and other analog recording and reproduction devices with rotary components (e.g. movie projectors, turntables (vinyl recording), etc.) This measurement quantifies the amount of 'frequency wobble' (caused by speed fluctuations) present in subjectively valid terms. Turntables tend to suffer mainly slow wow . In digital systems, which are locked to crystal oscillators , variations in clock timing are referred to as wander or jitter , depending on speed.
While the terms wow and flutter were once used separately (for wobbles at rates below and above 4 Hz respectively), they are now usually combined, since universal measurement standards exist that take both into account simultaneously. Listeners find flutter most objectionable when the actual frequency of wobble is 4 Hz, and it is less audible above and below this rate. This fact forms the basis for the standard weighting curve. The weighting curve is misleading, inasmuch as it presumes inaudibility of flutter above 200 Hz, when in fact faster flutter is quite damaging to the sound. A flutter of 200 Hz at a level of −50 dB will create 0.3% intermodulation distortion, which would be considered unacceptable in a preamp or amplifier.
Measuring instruments use a frequency discriminator to translate the pitch variations of a recorded tone into a flutter waveform, which is then passed through the weighting filter, before being full-wave rectified to produce a slowly varying signal which drives a meter or recording device. The maximum meter indication should be read as the flutter value.
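The measurement chain just described (test tone, frequency discriminator, weighting, rectification) can be imitated numerically. The following is a simplified sketch, assuming NumPy and SciPy are available; it omits the standards' weighting filter and quasi-peak rectifier and simply reads off the peak frequency deviation:

```python
import numpy as np
from scipy.signal import hilbert

fs = 192_000                            # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f0, f_mod, depth = 3150.0, 4.0, 0.001   # 3.15 kHz tone, 0.1% flutter at 4 Hz

# FM test signal: integrate the instantaneous frequency to get the phase.
inst_freq = f0 * (1 + depth * np.sin(2 * np.pi * f_mod * t))
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
x = np.sin(phase)

# Frequency discriminator: derivative of the analytic signal's phase.
demod = np.diff(np.unwrap(np.angle(hilbert(x)))) * fs / (2 * np.pi)

core = demod[fs // 10 : -fs // 10]            # discard edge transients
print(100 * np.max(np.abs(core - f0)) / f0)   # ~0.1 (percent flutter)
```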
The following standards all specify the weighting filter shown above, together with a special slow-quasi-peak full-wave rectifier designed to register any brief speed excursions. As with many audio standards, these are identical derivatives of a common specification.
Measurement is usually made on a 3.15 kHz (or sometimes 3 kHz) tone, a frequency chosen because it is high enough to give good resolution, but low enough not to be affected by drop-outs and high-frequency losses. Ideally, flutter should be measured using a pre-recorded tone free from flutter. Record-replay flutter will then be around twice as high as pre-recorded, because worst case variations will add during recording and playback. When a recording is played back on the same machine it was made on, a very slow change from low to high flutter will often be observed, because any cyclic flutter caused by capstan rotation may go from adding to cancelling as the tape slips slightly out of synchronism. A good technique is to stop the tape from time to time and start it again. This will often result in different readings as the correlation between record and playback flutter shifts. On well maintained, precise machines, it may be difficult to procure a reference tape with tighter tolerances than the machine itself. Therefore, a record-playback test using the stop-start technique can be, for practical purposes, the best that can be accomplished.
Wow and flutter are particularly audible on music with oboe, string, guitar, flute, brass, or piano solo playing. While wow is perceived clearly as pitch variation, flutter can alter the sound of the music differently, making it sound ‘cracked’ or ‘ugly’. A recorded 1 kHz tone with a small amount of flutter (around 0.1%) can sound fine in a ‘dead’ listening room, but in a reverberant room constant fluctuations will often be clearly heard. [ citation needed ] These are the result of the current tone ‘beating’ with its echo, which since it originated slightly earlier, has a slightly different pitch. What is heard is quite pronounced amplitude variation, which the ear is very sensitive to. This probably explains why piano notes sound ‘cracked’. Because they start loud and then gradually tail off, piano notes leave an echo that can be as loud as the dying note that it beats with, resulting in a level that varies from complete cancellation to double-amplitude at a rate of a few Hz: instead of a smoothly dying note we hear a heavily modulated one. Oboe notes may be particularly affected because of their harmonic structure. Another way that flutter manifests is as a truncation of reverb tails. This may be due to the persistence of memory with regard to spatial location based on early reflections and comparison of Doppler effects over time. The auditory system may become distracted by pitch shifts in the reverberation of a signal that should be of fixed and solid pitch.
The term "flutter echo" is used in relation to a particular form of reverberation that flutters in amplitude. It has no direct connection with flutter as described here, though the mechanism of modulation through cancellation may have something in common with that described above.
Absolute speed error causes a change in pitch, and it is useful to know that a semitone in music represents a 6% frequency change. This is because Western music uses the ‘equal temperament scale' based on a constant geometric ratio between twelve notes; and the twelfth root of 2 is 1.05946. Anyone with a good musical ear can detect a pitch change of around 1%, though an error of up to 3% is likely to go unnoticed, except by those few with ‘absolute pitch'. Most ‘movie' films shown on European television are sped up by 4.166% because they were shot at 24 frames per second, but are scanned at 25 frames per second to match the PAL standard of 25 frame/s 50 field/s. This causes a noticeable increase in pitch on voices, which often brings surprised comment from the actors themselves when they hear their performance on video. It can also frustrate attempts to play along with film music, which is closer to a semitone sharp than its intended pitch. Recently, digital pitch correction has been applied to some films, which corrects the pitch without altering lip-sync, by adding in extra cycles of sound. This has to be regarded as a form of distortion, since any method that changes the pitch of a sound without also changing its speed must alter the waveform itself.
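The arithmetic behind these figures is easy to reproduce; a quick sketch in Python:

```python
import math

semitone = 2 ** (1 / 12)          # 1.05946..., i.e. ~6% per semitone
pal_speedup = 25 / 24             # 24 -> 25 frame/s, i.e. +4.166%

# Express the PAL pitch shift in cents (100 cents = one semitone).
print(round(1200 * math.log2(pal_speedup), 1))   # ~70.7 cents
```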
High-frequency flutter, above 100 Hz, can sometimes result from tape vibrating as it passes over a head (or other non-rotating element in the tape path), as a result of rapidly interacting stretching in the tape and stick-slip at the head. This is termed 'scrape flutter'. It adds a roughness to the sound that is not typical of wow & flutter, and damping devices or heavy rollers are sometimes employed on professional tape machines to reduce or prevent it. Scrape flutter measurement requires special techniques, often using a 10 kHz tone.
|
https://en.wikipedia.org/wiki/Wow_and_flutter_measurement
|
Woz U is a company founded by Apple co-founder Steve Wozniak that focuses on technical education for independent students, and offers curriculum to universities and organizations to upskill their employees. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Woz U was founded in October 2017 by Apple co-founder Steve Wozniak. Wozniak was inspired by his own experience of teaching 5th grade students in California. [ 5 ] Woz U received its school license from the Arizona state board. [ 6 ] In the first year, the school had 350 students, as confirmed by the Arizona State Board for Private Postsecondary Education. [ 6 ]
In fall of 2018, a CBS News investigation of Woz U cast some doubts on the professionalism of the expensive curriculum. CBS interviewed two dozen current and former students and employees, who shared their dissatisfaction with the content quality, such as documentation typos leading to confusing program errors, while some promised live lectures were actually recorded and out-of-date. One student described the 33-week online program as "a $13,000 e-book". A former "enrollment counselor" described a high-pressure sales environment, which the company denied. In a prepared statement, Woz U president Chris Coleman admitted the documentation errors and said quality control efforts were being implemented, and said curriculum was reviewed by Wozniak. The founder declined interview requests, then dodged a reporter's unannounced appearance at a conference. [ 7 ] [ 8 ]
Woz U offers courses in Software Development , Cyber Security and Data Science that last approximately 33 weeks, with one to two hours of lectures a week. These courses provide graded assignments, weekly exercises, and a final project. As of March 2019, Woz U offered three technology-focused educational models for Software Development, Cyber Security and Data Science.
Woz U works on the Education-as-a-Service (EaaS) model to offer students an alternative or supplement to traditional four-year degree programs. [ 9 ] The students take a micro course to learn software development , cyber security and data science . [ 10 ] [ 11 ] As of December 2018, Woz U had 350 registered students signed up for its programs. [ 6 ] Woz U charges $13,200 to $13,800 as a fee from the students for the courses. [ 12 ] [ 6 ]
It also partners with businesses to offer a technical curriculum to employees in order to meet the upskill demand for the technology based workforce. [ 13 ]
Woz U has launched career pathway programs via STEAM initiatives to school districts across the United States. [ 3 ] It also works with colleges and universities across the United States. University partners include University of North Dakota , [ 14 ] University of the Potomac , [ 15 ] Belhaven University and New Jersey Institute of Technology . [ 16 ] It enables University partners to incorporate the Woz U technical curriculum into traditional college and university coursework. [ 17 ]
Woz U also works directly with businesses to upskill their employees in Software Development, Cyber Security and Data Science, to remain current with the technology developments.
|
https://en.wikipedia.org/wiki/Woz_U
|
In coding theory , the Wozencraft ensemble is a set of linear codes in which most of the codes satisfy the Gilbert–Varshamov bound . It is named after John Wozencraft , who proved its existence. The ensemble is described by Massey (1963) , who attributes it to Wozencraft. Justesen (1972) used the Wozencraft ensemble as the inner codes in his construction of a strongly explicit asymptotically good code.
Here the relative distance is the ratio of minimum distance to block length, and H q {\displaystyle H_{q}} is the q-ary entropy function, defined by H q ( x ) = x log q ⁡ ( q − 1 ) − x log q ⁡ x − ( 1 − x ) log q ⁡ ( 1 − x ) {\displaystyle H_{q}(x)=x\log _{q}(q-1)-x\log _{q}x-(1-x)\log _{q}(1-x)} .
In fact, to show the existence of this set of linear codes, we will specify this ensemble explicitly as follows: for α ∈ F q k − { 0 } {\displaystyle \alpha \in \mathbb {F} _{q^{k}}-\{0\}} , define the inner code C i n α : F q k → F q 2 k {\displaystyle C_{in}^{\alpha }:\mathbb {F} _{q}^{k}\to \mathbb {F} _{q}^{2k}} by C i n α ( x ) = ( x , α x ) {\displaystyle C_{in}^{\alpha }(x)=(x,\alpha x)} .
Here we can note that x ∈ F q k {\displaystyle x\in \mathbb {F} _{q}^{k}} and α ∈ F q k {\displaystyle \alpha \in \mathbb {F} _{q^{k}}} . The multiplication α x {\displaystyle \alpha x} is well defined because F q k {\displaystyle \mathbb {F} _{q}^{k}} is isomorphic to F q k {\displaystyle \mathbb {F} _{q^{k}}} as a vector space over F q {\displaystyle \mathbb {F} _{q}} .
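A minimal sketch of this construction in Python for q = 2 and k = 4, identifying F_2^4 with F_16 = F_2[z]/(z^4 + z + 1) (the particular irreducible polynomial is an assumption for illustration; any irreducible degree-4 polynomial works):

```python
K = 4
IRRED = 0b10011   # z^4 + z + 1, irreducible over F_2

def gf_mul(a: int, b: int) -> int:
    """Multiply two elements of F_16, each encoded as a 4-bit integer."""
    result = 0
    for i in range(K):
        if (b >> i) & 1:
            result ^= a << i                 # carry-less multiplication
    for i in range(2 * K - 2, K - 1, -1):
        if (result >> i) & 1:
            result ^= IRRED << (i - K)       # reduce mod the irreducible
    return result

def encode(alpha: int, x: int) -> tuple:
    """Inner code C_in^alpha: x -> (x, alpha*x), a rate-1/2 linear code."""
    return (x, gf_mul(alpha, x))

# The ensemble consists of the 2^K - 1 codes indexed by nonzero alpha.
for alpha in (1, 2, 3):
    print([encode(alpha, x) for x in range(4)])
```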
This ensemble is due to Wozencraft and is called the Wozencraft ensemble.
For all x , y ∈ F q k {\displaystyle x,y\in \mathbb {F} _{q}^{k}} , we have the following facts:
So C i n α {\displaystyle C_{in}^{\alpha }} is a linear code for every α ∈ F q k − { 0 } {\displaystyle \alpha \in \mathbb {F} _{q^{k}}-\{0\}} .
Now we know that the Wozencraft ensemble contains linear codes with rate 1 2 {\displaystyle {\tfrac {1}{2}}} . In the following proof, we will show that at least ( 1 − ε ) N {\displaystyle (1-\varepsilon )N} of these linear codes have relative distance ⩾ H q − 1 ( 1 2 − ε ) {\displaystyle \geqslant H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)} , i.e. they meet the Gilbert–Varshamov bound. Here N = q k − 1 {\displaystyle N=q^{k}-1} is the number of codes in the ensemble, one for each nonzero α {\displaystyle \alpha } .
To prove that at least ( 1 − ε ) N {\displaystyle (1-\varepsilon )N} linear codes in the Wozencraft ensemble have relative distance ⩾ H q − 1 ( 1 2 − ε ) {\displaystyle \geqslant H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)} , we will prove that at most ε N {\displaystyle \varepsilon N} linear codes have relative distance < H q − 1 ( 1 2 − ε ) {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)} , i.e., distance < H q − 1 ( 1 2 − ε ) ⋅ 2 k . {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k.}
Notice that in a linear code, the distance is equal to the minimum weight of the non-zero codewords of that code; this is a basic property of linear codes . So if some non-zero codeword has weight < H q − 1 ( 1 2 − ε ) ⋅ 2 k {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k} , then that code has distance < H q − 1 ( 1 2 − ε ) ⋅ 2 k . {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k.}
Let P {\displaystyle P} be the set of linear codes having distance < H q − 1 ( 1 2 − ε ) ⋅ 2 k . {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k.} Then there are | P | {\displaystyle |P|} linear codes having some codeword that has weight < H q − 1 ( 1 2 − ε ) ⋅ 2 k . {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k.}
Any linear code having distance < H q − 1 ( 1 2 − ε ) ⋅ 2 k {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k} has some codeword of weight < H q − 1 ( 1 2 − ε ) ⋅ 2 k . {\displaystyle <H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k.} Now the Lemma implies that we have at least | P | {\displaystyle |P|} different y {\displaystyle y} such that w t ( y ) < H q − 1 ( 1 2 − ε ) ⋅ 2 k {\displaystyle wt(y)<H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)\cdot 2k} (one such codeword y {\displaystyle y} for each linear code). Here w t ( y ) {\displaystyle wt(y)} denotes the weight of codeword y {\displaystyle y} , which is the number of non-zero positions of y {\displaystyle y} .
Denote
Then: [ 1 ]
So | P | < ε N {\displaystyle |P|<\varepsilon N} , and therefore the set of linear codes having relative distance ⩾ H q − 1 ( 1 2 − ε ) {\displaystyle \geqslant H_{q}^{-1}\left({\tfrac {1}{2}}-\varepsilon \right)} has at least N − ε N = ( 1 − ε ) N {\displaystyle N-\varepsilon N=(1-\varepsilon )N} elements.
|
https://en.wikipedia.org/wiki/Wozencraft_ensemble
|
Wreck Racing is a Georgia Tech automotive competition team, based in the Woodruff School of Mechanical Engineering . The team is composed of undergraduate and graduate students from the various schools within Georgia Tech and is based in the Student Competition Center on the North edge of Tech's Atlanta campus. The team's main focus is in the design, fabrication, testing, and racing of production-based sports cars. Wreck Racing primarily competes in the Grassroots Motorsports Annual Challenge , but also has competed in local SCCA and BMWCCA events.
The Wreck Racing team was founded by Andrew Sullivan and Andy Powell, both Tech Alumni, and has been an officially chartered student organization since 2004. [ 1 ] [ 2 ] Since the team was founded, membership has grown rapidly, and the team currently has approximately 75 active members.
Since 2005 Wreck Racing has competed in the annual GRM $20XX event and has competed with eight separate vehicles including a 1980s VW GTI, a turbocharged E30 chassis BMW, a V8-powered Mazda Miata, a 2JZ MG Midget, a mid-engine Honda Insight, a V8 swapped BMW E28, a mid-engine Chevrolet S-10, and a Honda powered Factory Five 818.
$2010 Challenge: In 2010 Wreck Racing won the GRM $2010 Challenge, placing 1st overall, 1st in autocross, and 1st in concours, placing above many veteran teams and entries by several professional race shops. [ 3 ] [ 4 ]
$2017 Challenge: Wreck Racing won 1st place in autocross, 1st place in concours, and 1st place overall in the Grassroots Motorsports $2017 Challenge with their 2001 Honda Insight powered by a mid-mounted Subaru EG33 engine. [ 5 ]
$2020 Challenge: Wreck Racing won 1st place in autocross, 2nd place in Concours d'Elegance, 5th place in drag, and 1st place overall in the Grassroots Motorsports $2020 Challenge with their V8 powered BMW E28.
Another primary focus of Wreck Racing is to reinforce the scientific design and analysis methods taught in Georgia Tech classrooms through practical application. The team frequently utilizes finite element and signal analysis, circuit design, CAD modeling and drawing, and thermodynamic and fluid dynamic analysis, along with advanced fabrication techniques such as state-of-the-art CNC milling, radiator fin augmentation, waterjet and plasma cutting , lathe operation, welding, and precision measuring, with the goal of producing exceptional vehicle performance while remaining within the severe budget and time restrictions mandated by the competition rules. [ 3 ]
|
https://en.wikipedia.org/wiki/Wreck_Racing
|
The Wrigley Trophy is an award given for motorboats . It was awarded as early as 1912 with a $1,500 cash prize. In 1912 the award was disputed when James A. Pugh contested the win by J. Stuart Blackton . He argued that Baby Reliance II was allowed a late entry and had already missed two rounds of competition. [ 1 ]
|
https://en.wikipedia.org/wiki/Wrigley_Trophy
|
A wrinklon is a type of quasiparticle introduced in the study of wrinkling behavior in thin sheet materials, such as graphene or fabric . It is a localized excitation corresponding to wrinkles in a constrained two-dimensional system. [ 1 ]
It represents a localized region where two wrinkles in the material merge into one, serving as part of the pattern seen when the material forms wrinkles. The term "wrinklon" is derived from "wrinkle" and the suffix "-on", the latter commonly used in physics to denote quasiparticles, such as the " phonon " or " polaron ".
The concept of wrinklons aids in understanding and describing the complex wrinkling patterns observed in a variety of materials. This understanding could prove useful in fields such as material science and nanotechnology , particularly in the study and development of two-dimensional materials like graphene. [ 2 ]
Further studies have expanded the understanding of wrinklons, demonstrating that the behavior of these wrinkles in thin films , such as graphene, can differ depending on the substrate they are on. For instance, when graphene is on a compliant polymer substrate , the properties of the wrinklons change with the thickness of the graphene.
This suggests that the characteristics of the substrate have a significant role in wrinklon formation and behavior, which is important to consider in various applications of thin film materials. [ 3 ]
|
https://en.wikipedia.org/wiki/Wrinklon
|
Wristbands are encircling strips worn on the wrist or lower forearm. The term may refer to a bracelet -like band, similar to that of a wristwatch , to the cuff or other part of a sleeve that covers the wrist, or to decorative or functional bands worn on the wrist for many different reasons. Wristbands are often worn and used similarly to event passes such as lanyards , to convey information or to allow people entry to events. These wristbands are made from loops of plastic that are placed around the wrist and are used for identification purposes (demonstrating the wearer's authorization to be at a venue , for example).
Another type of wristband is the sweatband ; usually made of a towel-like terrycloth material. These are usually used to wipe sweat from the forehead during sport but have been known to be used as a badge or fashion statement . A practice common in mid-1980s punk subculture was to cut the top off of a sock and fashion the elastic into this type of wristband.
In the early-to-mid-2000s (decade), bracelets often made of silicone became popular. They are worn to demonstrate the wearer's support of a cause or charitable organization , similar to awareness ribbons . Such wristbands are sometimes called awareness bracelets to distinguish them from other types of wristbands. In early 2007 they became an increasingly popular item sold as merchandise at concerts and sporting events worldwide. Wristbands bearing official logos or trademarks enabled the seller to offer fans a low-priced merchandise option. Silicone wristbands may also be called gel wristbands, jelly wristbands, rubber wristbands and fundraising wristbands. [ 1 ] All of these wristbands are made from the same silicone material.
UV (ultraviolet) sensitive silicone wristbands appear clear or white when out of UV light, but when exposed to ultraviolet light, such as sunlight, the wristbands' color changes to blue or fuchsia. These bands can be used as reminders for people to apply sunscreen or stay in the shade on hot summer days.
Hospital wristbands are a commonly used safety device for identifying patients undergoing medical care (see patient safety and medical identification tag ). Available in a variety of sizes to accommodate patients as small as newborns and as large as obese adults, hospital wristbands can be handwritten, embossed, laser-printed or thermal-imaged with names, pictures, medical record numbers, barcodes and other personal identifiers.
Laser printing and thermal imaging—the most advanced technologies for personalizing hospital wristbands—support fonts, colors and barcodes for improved patient safety through electronic patient and medication tracking. Handwritten and embossed wristbands remain in widespread use, however, despite findings on compromised safety reported in 2007. The National Patient Safety Agency (NPSA) found that as many as 2,900 patients each year were receiving the wrong medical care because of the hospital staff's inability to read damaged or otherwise illegible patient information on handwritten and embossed wristbands. [ 2 ]
Colored wristbands are often given to people attending events such as music festivals and gigs as an access control measure. Counterfeit wristbands are increasingly common. [ 3 ] [ 4 ]
Silicone wristbands (sometimes referred to as gel bracelets ) are popular for fundraising or showing support for a cause. An event organizer might create a custom wristband to give out or sell to those interested in an event or supporting a cause. Some people keep the wristbands as souvenirs or wear the wristbands after the event to show what events they went to. [ 5 ]
Wristbands used for event ticketing at music festivals and sporting events may include an NFC ( near field communication ) chip that allows contactless payment at the concessions and turnstiles. Wristbands are well suited to dark environments such as nightclubs and bars, and to outdoor venues where patrons can be far away, such as festivals and theme parks.
In addition, these styles of colored wristbands are used alongside hospital patient bands to serve as an extra safety reminder and alert for allergies . They will have a standard color and clearly written labeling, such as "Fall Risks" (which may come from medical conditions, injuries and/or medications used), "Allergies" (to cover allergic reactions), or "Latex Allergies" (to make sure medical safety gloves are not made of latex), amongst several other important cautions that would help protect the patient by preventing iatrogenic mistakes.
|
https://en.wikipedia.org/wiki/Wristband
|
A write buffer is a type of data buffer that can be used to hold data being written from the cache to main memory or to the next cache in the memory hierarchy to improve performance and reduce latency . It is used in certain CPU cache architectures like Intel's x86 and AMD64. [ 1 ] In multi-core systems, write buffers destroy sequential consistency . Some software disciplines, like C11's data-race-freedom, [ 2 ] are sufficient to regain a sequentially consistent view of memory.
A variation of write-through caching is called buffered write-through . [ citation needed ]
Use of a write buffer in this manner frees the cache to service read requests while the write is taking place. It is especially useful for very slow main memory in that subsequent reads are able to proceed without waiting for long main memory latency. When the write buffer is full (i.e. all buffer entries are occupied), subsequent writes still have to wait until slots are freed. Subsequent reads could be served from the write buffer. To further mitigate this stall, one optimization called write buffer merge may be implemented. Write buffer merge combines writes that have consecutive destination addresses into one buffer entry. Otherwise, they would occupy separate entries which increases the chance of pipeline stall.
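The following is a toy model in Python, not a description of any real CPU, illustrating write-buffer merging and the serving of subsequent reads from the buffer:

```python
# Toy write buffer: writes to consecutive addresses are coalesced into
# one entry, reducing the number of entries that must drain to memory.

class WriteBuffer:
    def __init__(self, capacity: int, entry_bytes: int = 8):
        self.capacity = capacity
        self.entry_bytes = entry_bytes
        self.entries = {}  # base address -> {offset: byte value}

    def write(self, addr: int, value: int) -> bool:
        base = addr - addr % self.entry_bytes
        if base not in self.entries:
            if len(self.entries) == self.capacity:
                return False       # buffer full: the store must stall
            self.entries[base] = {}
        self.entries[base][addr % self.entry_bytes] = value  # merge
        return True

    def read(self, addr: int):
        # Subsequent reads may be served from the buffer itself.
        base = addr - addr % self.entry_bytes
        return self.entries.get(base, {}).get(addr % self.entry_bytes)

buf = WriteBuffer(capacity=4)
for a in range(8):                 # eight writes to consecutive addresses
    buf.write(0x1000 + a, a)
print(len(buf.entries))            # 1 entry thanks to merging, not 8
print(buf.read(0x1003))            # 3, forwarded from the buffer
```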
A victim buffer is a type of write buffer that stores dirty evicted lines in write-back caches [ note 1 ] so that they get written back to main memory. Besides reducing pipeline stall by not waiting for dirty lines to write back as a simple write buffer does, a victim buffer may also serve as a temporary backup storage when subsequent cache accesses exhibit locality , requesting those recently evicted lines, which are still in the victim buffer.
The store buffer was invented by IBM during Project ACS between 1964 and 1968, [ 3 ] but it was first implemented in commercial products in the 1990s.
|
https://en.wikipedia.org/wiki/Write_buffer
|
A wrong-way driver warning is an advanced driver-assistance system introduced in 2010 [ 1 ] [ 2 ] to prevent wrong-way driving .
When the system detects signs imposing access restrictions, the wrong-way driver warning function emits an acoustic warning together with a visual warning in the instrument cluster, making an effective [ citation needed ] contribution toward preventing serious accidents caused by wrong-way drivers.
|
https://en.wikipedia.org/wiki/Wrong-way_driving_warning
|
Wenjun Wu 's method is an algorithm for solving multivariate polynomial equations introduced in the late 1970s by the Chinese mathematician Wen-Tsun Wu . This method is based on the mathematical concept of characteristic set introduced in the late 1940s by J.F. Ritt . It is fully independent of the Gröbner basis method, introduced by Bruno Buchberger (1965), even if Gröbner bases may be used to compute characteristic sets. [ 1 ] [ 2 ]
Wu's method is powerful for mechanical theorem proving in elementary geometry , and provides a complete decision process for certain classes of problem. It has been used in research in his laboratory (KLMM, Key Laboratory of Mathematics Mechanization in Chinese Academy of Science) and around the world. The main trends of research on Wu's method concern systems of polynomial equations of positive dimension and differential algebra where Ritt 's results have been made effective. [ 3 ] [ 4 ] Wu's method has been applied in various scientific fields, like biology, computer vision , robot kinematics and especially automatic proofs in geometry. [ 5 ]
Wu's method uses polynomial division to solve problems of the form ∀ x 1 , … , x n ( I ⟹ f = 0 ) {\displaystyle \forall x_{1},\ldots ,x_{n}\;(I\implies f=0)} , where f = 0 is a polynomial equation and I is a conjunction of polynomial equations . The algorithm is complete for such problems over the complex domain .
The core idea of the algorithm is that one polynomial can be divided by another to give a remainder. Repeated division results either in the remainder vanishing (in which case the statement " I implies f " is true), or in an irreducible remainder being left behind (in which case the statement is false).
More specifically, for an ideal I in the ring k [ x 1 , ..., x n ] over a field k , a (Ritt) characteristic set C of I is composed of a set of polynomials in I , which is in triangular shape: polynomials in C have distinct main variables (see the formal definition below). Given a characteristic set C of I , one can decide if a polynomial f is zero modulo I . That is, the membership test is checkable for I , provided a characteristic set of I is known.
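The membership test by successive pseudo-division can be sketched with SymPy's prem (pseudo-remainder), assuming SymPy is available. The triangular set below is a hand-picked toy example, not a computed characteristic set:

```python
from sympy import symbols, prem, expand

x1, x2 = symbols('x1 x2')

# Triangular set T: main variable of t1 is x1, of t2 is x2.
t1 = x1**2 - 2
t2 = x2**2 - x1          # so x2^4 = x1^2 = 2 on the common zero set

# Conjecture: f = 0 wherever t1 = t2 = 0. Here f = x2^4 - 2.
f = x2**4 - 2

# Successive pseudo-division, highest main variable first.
r = prem(f, t2, x2)      # eliminate x2; gives x1^2 - 2
r = prem(r, t1, x1)      # then eliminate x1
print(expand(r))         # 0 => f vanishes on the zero set of T
```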
A Ritt characteristic set is a finite set of polynomials in triangular form of an ideal. This triangular set satisfies a certain minimality condition with respect to the Ritt ordering, and it preserves many interesting geometrical properties of the ideal. However, it may not be a system of generators of the ideal.
Let R be the multivariate polynomial ring k [ x 1 , ..., x n ] over a field k .
The variables are ordered linearly according to their subscript: x 1 < ... < x n .
For a non-constant polynomial p in R, the greatest variable effectively present in p , called the main variable or class , plays a particular role: p can be naturally regarded as a univariate polynomial in its main variable x k with coefficients in k [ x 1 , ..., x k −1 ].
The degree of p as a univariate polynomial in its main variable is also called its main degree.
A set T of non-constant polynomials is called a triangular set if all polynomials in T have distinct main variables. This generalizes triangular systems of linear equations in a natural way.
For two non-constant polynomials p and q , we say p is smaller than q with respect to Ritt ordering and written as p < r q , if one of the following assertions holds:
In this way, ( k [ x 1 , ..., x n ],< r ) forms a well partial order . However, the Ritt ordering is not a total order : there exist polynomials p and q such that neither p < r q nor p > r q . In this case, we say that p and q are not comparable.
The Ritt ordering compares the ranks of p and q . The rank , denoted by rank( p ), of a non-constant polynomial p is defined to be a power of its main variable: mvar( p ) mdeg( p ) . Ranks are compared by comparing first the variables and then, in case of equality of the variables, the degrees.
A crucial generalization of the Ritt ordering is to compare triangular sets. Let T = { t 1 , ..., t u } and S = { s 1 , ..., s v } be two triangular sets such that the polynomials in T and S are sorted increasingly according to their main variables. We say T is smaller than S with respect to the Ritt ordering if one of the following assertions holds:
Also, there exist triangular sets that are incomparable with respect to the Ritt ordering.
Let I be a non-zero ideal of k[x 1 , ..., x n ]. A subset T of I is a Ritt characteristic set of I if one of the following conditions holds:
A polynomial ideal may possess (infinitely) many characteristic sets, since Ritt ordering is a partial order.
The Ritt–Wu process, first devised by Ritt and subsequently modified by Wu, computes not a Ritt characteristic set but an extended one, called a Wu characteristic set or ascending chain.
A non-empty subset T of the ideal ⟨F⟩ generated by F is a Wu characteristic set of F if one of the following conditions holds:
A Wu characteristic set is defined for the set F of polynomials, rather than for the ideal ⟨F⟩ generated by F . It can also be shown that a Ritt characteristic set T of ⟨F⟩ is a Wu characteristic set of F . Wu characteristic sets can be computed by Wu's algorithm CHRST-REM, which requires only pseudo-remainder computations; no factorizations are needed.
Wu's characteristic set method has exponential complexity; improvements in computational efficiency through weak chains, regular chains , and saturated chains have been introduced. [ 6 ]
An application is an algorithm for solving systems of algebraic equations by means of characteristic sets. More precisely, given a finite subset F of polynomials, there is an algorithm to compute characteristic sets T 1 , ..., T e such that V ( F ) = W ( T 1 ) ∪ ⋯ ∪ W ( T e ) {\displaystyle V(F)=W(T_{1})\cup \cdots \cup W(T_{e})} , where W ( T i ) is the difference of V ( T i ) and V ( h i ), and h i is the product of the initials of the polynomials in T i .
|
https://en.wikipedia.org/wiki/Wu's_method_of_characteristic_set
|
WuXi AppTec ( WuXi is pronounced Wu-shee ) is a global pharmaceutical , biopharmaceutical , and medical device company.
WuXi PharmaTech was founded in December 2000 in Shanghai by organic chemist Ge Li . [ 1 ] [ 2 ]
The company opened chemistry facilities in Tianjin in 2007. In 2008, WuXi PharmaTech acquired AppTec Laboratory Services Inc., a US-based company founded in 2001 with expertise in medical-device and biologics testing and with facilities in St. Paul, MN; Philadelphia, PA; and Atlanta, GA. In the same year the company was renamed WuXi AppTec. [ 3 ] [ 4 ] WuXi AppTec opened a toxicology facility in Suzhou in 2009. [ 5 ]
WuXi AppTec's subsidiary, WuXi STA, opened a large-scale manufacturing facility in Jinshan in 2010. The company began a biologics discovery, development, and manufacturing operation in Shanghai and Wuxi City in 2011. In the same year, WuXi AppTec acquired MedKey, a China-based clinical research company, [ 6 ] and Abgent , a San Diego company and one of the world's largest manufacturers of antibodies for biological research and drug discovery. [ 7 ] In 2012, WuXi AppTec opened a chemistry facility in Wuhan and a GMP biologics drug-substance facility in Wuxi City. That year WuXi AppTec also entered into a joint venture with MedImmune , the biologics arm of AstraZeneca , to co-develop MEDI5117, an anti-IL6 antibody for rheumatoid arthritis for the Chinese market. [ 8 ] In 2013, WuXi AppTec formed a joint venture with the global clinical contract research organization PRA International (now called PRA Health Sciences ) to build a clinical research business in China. [ 9 ]
In 2014, WuXi AppTec opened a new biologics biosafety testing facility in Suzhou . In the same year, WuXi AppTec acquired XenoBiotic Laboratories, Inc. (XBL), a contract research organization with 27 years of operation that provides bioanalytical, drug metabolism, and pharmacokinetic services to the pharmaceutical, animal health, and agrochemical industries. [ 10 ]
In 2015, the company launched the subsidiary WuXi NextCODE following the acquisition of NextCODE Health , a bioinformatics startup company which emerged from the Icelandic firm deCODE genetics in 2013. [ 11 ]
In 2015, WuXi AppTec completed its merger with WuXi Merger Limited, a wholly owned subsidiary of New WuXi Life Science Limited. As a result of the merger, New WuXi Life Science Limited acquired the company in a cash transaction valued at approximately US$3.3 billion. [ 12 ] In 2016, WuXi STA opened a new campus in Changzhou [ 13 ] and operations in San Diego. In the same year, WuXi AppTec acquired Crelux GmbH, a structure-based drug discovery provider based in Munich, Germany . [ 14 ] In 2017, WuXi AppTec acquired HD Biosciences (HDB), a biology focused preclinical drug discovery contract research organization (CRO). [ 15 ]
In January 2024, WuXi AppTec's share price fell on news that the United States Congress had introduced legislation to block any federal government contracts with the company due to national security concerns. [ 16 ] [ 17 ] [ 18 ] The concerns stem from allegations, denied by the company, that it has worked closely with the People's Liberation Army (PLA) as a part of the Chinese Communist Party (CCP)'s military-civil fusion strategy. [ 19 ] Members of the United States House Select Committee on Strategic Competition between the United States and the Chinese Communist Party subsequently called for sanctions against WuXi AppTec. [ 20 ] WuXi AppTec denied any affiliation with the CCP and PLA and launched a lobbying campaign in the US in response. [ 21 ] According to the Federal Bureau of Investigation , United States Department of State , and Director of National Intelligence , WuXi AppTec transferred US intellectual property to the PLA without consent. [ 22 ]
In October 2024, WuXi AppTec was reportedly considering selling its facilities in the U.S. and Europe due to the Biosecure Act , which has been discouraging potential clients. [ 23 ] In December 2024, WuXi AppTec announced that it would sell its Oxford Genetics and WuXi Advanced Therapies to a U.S.-based private equity firm. [ 24 ]
|
https://en.wikipedia.org/wiki/WuXi_AppTec
|
The Wulff construction is a method to determine the equilibrium shape of a droplet or crystal of fixed volume inside a separate phase (usually its saturated solution or vapor). Energy minimization arguments are used to show that certain crystal planes are preferred over others, giving the crystal its shape. It is of fundamental importance in a number of areas ranging from the shape of nanoparticles and precipitates to nucleation . It also has more applied relevance in areas such as the shapes of active particles in heterogeneous catalysis .
In 1878 Josiah Willard Gibbs proposed [ 1 ] that a droplet or crystal will arrange itself such that its surface Gibbs free energy is minimized by assuming a shape of low surface energy . He defined the quantity Δ G i = ∑ j γ j O j {\displaystyle \Delta G_{i}=\sum _{j}\gamma _{j}O_{j}} .
Here γ j {\displaystyle \gamma _{j}} represents the surface (Gibbs free) energy per unit area of the j {\displaystyle j} th crystal face and O j {\displaystyle O_{j}} is the area of said face. Δ G i {\displaystyle \Delta G_{i}} represents the difference in energy between a real crystal composed of i {\displaystyle i} molecules with a surface and a similar configuration of i {\displaystyle i} molecules located inside an infinitely large crystal. This quantity is therefore the energy associated with the surface. The equilibrium shape of the crystal will then be that which minimizes the value of Δ G i {\displaystyle \Delta G_{i}} .
In 1901 Russian scientist George Wulff stated [ 2 ] (without proof) that the length of a vector drawn normal to a crystal face h j {\displaystyle h_{j}} will be proportional to its surface energy γ j {\displaystyle \gamma _{j}} : h j = λ γ j {\displaystyle h_{j}=\lambda \gamma _{j}} . The vector h j {\displaystyle h_{j}} is the "height" of the j {\displaystyle j} th face, drawn from the center of the crystal to the face; for a spherical crystal this is simply the radius. This is known as the Gibbs-Wulff theorem.
In 1943 Laue gave a simple proof, [ 3 ] with a more complete version given shortly after by Dinghas. [ 4 ] The method was extended to include curved surfaces in 1953 by Herring with a different proof of the theorem, [ 5 ] which has been generalized with existence proofs by others such as the work of Johnson and Chakerian. [ 6 ] Herring gave a method for determining the equilibrium shape of a crystal, consisting of two main exercises. [ 5 ] To begin, a polar plot of surface energy as a function of orientation is made. This is known as the gamma plot and is usually denoted as γ ( n ^ ) {\displaystyle \gamma ({\hat {n}})} , where n ^ {\displaystyle {\hat {n}}} denotes the surface normal, e.g., a particular crystal face. The second part is the Wulff construction itself, in which the gamma plot is used to determine graphically which crystal faces will be present: lines are drawn from the origin to every point on the gamma plot, and a plane perpendicular to the normal n ^ {\displaystyle {\hat {n}}} is drawn at each point where such a line intersects the gamma plot. The inner envelope of these planes forms the equilibrium shape of the crystal.
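In two dimensions the inner-envelope step is easy to carry out numerically. The sketch below, in Python with NumPy, uses an assumed toy anisotropy γ(θ) = 1 + 0.2 cos 4θ purely for illustration:

```python
import numpy as np

thetas = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
gamma = 1.0 + 0.2 * np.cos(4.0 * thetas)   # 4-fold anisotropic gamma plot

# The Wulff shape is the set of points r with r . n(theta) <= gamma(theta)
# for every direction, i.e. the intersection of all these half-planes.
phis = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
shape_r = np.empty_like(phis)
for i, phi in enumerate(phis):
    # Distance to the boundary along direction phi: the minimum of
    # gamma(theta) / cos(phi - theta) over directions facing phi.
    c = np.cos(phi - thetas)
    mask = c > 1e-9
    shape_r[i] = np.min(gamma[mask] / c[mask])

# (phis, shape_r) now traces the equilibrium shape; plot if desired:
# import matplotlib.pyplot as plt; plt.polar(phis, shape_r); plt.show()
```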
The Wulff construction is for the equilibrium shape, but there is a corresponding form called the "kinetic Wulff construction" where the surface energy is replaced by a growth velocity. There are also variants that can be used for particles on surfaces and with twin boundaries. [ 7 ]
Various proofs of the theorem have been given by Hilton, Liebman, Laue , [ 3 ] Herring, [ 5 ] and a rather extensive treatment by Cerf. [ 8 ] The following is after the method of R. F. Strickland-Constable. [ 9 ] We begin with the surface energy for a crystal, Δ G s = ∑ j γ j O j {\displaystyle \Delta G_{s}=\sum _{j}\gamma _{j}O_{j}} ,
which is the product of the surface energy per unit area times the area of each face, summed over all faces. This is minimized for a given volume when ∑ j γ j δ ( O j ) V c = 0. {\displaystyle \sum _{j}\gamma _{j}\,\delta (O_{j})_{V_{c}}=0.}
Surface free energy, being an intensive property , does not vary with volume. We then consider a small change in shape for a constant volume. If a crystal were nucleated to a thermodynamically unstable state, then the change it would undergo afterward to approach an equilibrium shape would be under the condition of constant volume. By definition of holding a variable constant, the change must be zero, δ ( V c ) V c = 0 {\displaystyle \delta (V_{c})_{V_{c}}=0} . Then by expanding V c {\displaystyle V_{c}} in terms of the surface areas O j {\displaystyle O_{j}} and heights h j {\displaystyle h_{j}} of the crystal faces, as V c = 1 3 ∑ j h j O j {\displaystyle V_{c}={\tfrac {1}{3}}\sum _{j}h_{j}O_{j}} , one obtains δ ( V c ) V c = 1 3 δ ( ∑ j h j O j ) V c = 0 , {\displaystyle \delta (V_{c})_{V_{c}}={\tfrac {1}{3}}\,\delta {\Bigl (}\sum _{j}h_{j}O_{j}{\Bigr )}_{V_{c}}=0,}
which can be written, by applying the product rule , as 1 3 ∑ j [ h j δ ( O j ) V c + O j δ ( h j ) V c ] = 0. {\displaystyle {\tfrac {1}{3}}\sum _{j}\left[h_{j}\,\delta (O_{j})_{V_{c}}+O_{j}\,\delta (h_{j})_{V_{c}}\right]=0.}
The second term must be zero, that is,
O 1 δ ( h 1 ) V c + O 2 δ ( h 2 ) V c + … = 0 {\displaystyle O_{1}\delta (h_{1})_{V_{c}}+O_{2}\delta (h_{2})_{V_{c}}+\ldots =0}
This is because, if the volume is to remain constant, the changes in the heights of the various faces must be such that when multiplied by their surface areas the sum is zero. If there were only two surfaces with appreciable area, as in a pancake-like crystal, then O 1 / O 2 = − δ ( h 1 ) V c / δ ( h 2 ) V c {\displaystyle O_{1}/O_{2}=-\delta (h_{1})_{V_{c}}/\delta (h_{2})_{V_{c}}} . In the pancake instance, O 1 = O 2 {\displaystyle O_{1}=O_{2}} by assumption. Then by the condition, δ ( h 1 ) V c = − δ ( h 2 ) V c {\displaystyle \delta (h_{1})_{V_{c}}=-\delta (h_{2})_{V_{c}}} . This is in agreement with a simple geometric argument considering the pancake to be a cylinder with very small aspect ratio . The general result is taken here without proof. This result imposes that the remaining sum also equal 0, h 1 δ ( O 1 ) V c + h 2 δ ( O 2 ) V c + … = 0 {\displaystyle h_{1}\,\delta (O_{1})_{V_{c}}+h_{2}\,\delta (O_{2})_{V_{c}}+\ldots =0}
Again, the surface energy minimization condition is that γ 1 δ ( O 1 ) V c + γ 2 δ ( O 2 ) V c + … = 0 {\displaystyle \gamma _{1}\,\delta (O_{1})_{V_{c}}+\gamma _{2}\,\delta (O_{2})_{V_{c}}+\ldots =0}
These may be combined, employing a constant of proportionality λ {\displaystyle \lambda } for generality, to yield ∑ j ( h j − λ γ j ) δ ( O j ) V c = 0. {\displaystyle \sum _{j}\left(h_{j}-\lambda \gamma _{j}\right)\delta (O_{j})_{V_{c}}=0.}
The change in shape δ ( O j ) V c {\displaystyle \delta (O_{j})_{V_{c}}} must be allowed to be arbitrary, which then requires that h j = λ γ j {\displaystyle h_{j}=\lambda \gamma _{j}} , which then proves the Gibbs-Wulff Theorem.
|
https://en.wikipedia.org/wiki/Wulff_construction
|
The Wulff–Dötz reaction (also known as the Dötz reaction or the benzannulation reaction of the Fischer carbene complexes) is the chemical reaction of an aromatic or vinylic alkoxy pentacarbonyl chromium carbene complex with an alkyne and carbon monoxide to give a Cr(CO) 3 -coordinated substituted phenol . [ 1 ] [ 2 ] [ 3 ] Several reviews have been published. [ 4 ] [ 5 ] It is named after the German chemist Karl Heinz Dötz (b. 1943) and the American chemist William D. Wulff (b. 1949) at Michigan State University. [ 6 ] The reaction was first discovered by Karl Dötz and was extensively developed by his group and W. Wulff's group; it accordingly bears both their names.
The position of the substituents is highly predictable with the largest alkyne substituent (R L ) neighboring the phenol and the smallest alkyne substituent (R S ) neighboring the methoxy group. [ 7 ] [ 8 ] Hence, this reaction is more useful for terminal alkynes than internal alkynes.
The phenol can be liberated from the chromium complex by a mild oxidation , such as ceric ammonium nitrate or air oxidation.
Since this reaction can quickly generate complex phenolic compounds, the Wulff–Dötz reaction has been used most often in the synthesis of natural products , especially Vitamins E and K . [ 9 ] [ 10 ] It is also applicable to the synthesis of polyphenolic compounds, such as calixarenes . [ 11 ]
The mechanism is thought to begin with the loss of carbon monoxide from the Fischer carbene complex 1 to give intermediate 3 . The loss of CO is rate-limiting, making the investigation of this reaction mechanism difficult, since all subsequent steps occur rapidly. The alkyne then coordinates to the metal center, a low-energy-barrier process. The resulting alkyne complex rearranges to intermediate 4 . [ 12 ] The η 1 , η 3 -complex shown as 4 subsequently undergoes CO insertion to give the η 4 -vinylketene complex 5 , which undergoes electrocyclization to give intermediate 6 . When R 1 is hydrogen, intermediate 6 is short lived and proceeds to the metal tricarbonyl arene complex 2 . Without CO insertion, the reaction proceeds through 7 to the cyclopentadiene product 8 .
Exposing a Fischer carbene bearing an alkenyl side chain to an alkyne gives a highly substituted phenol. The phenolic carbon originates from the CO ligand. The α,β-unsaturated part can also come from an electron-rich aryl system, yielding a polycyclic aromatic system. This reaction was first discovered by Karl Dötz and was extensively developed by his group, thus giving the name Dötz reaction. It is sometimes called the Wulff–Dötz reaction because William Wulff's group at Michigan State University also contributed extensively to the development of this reaction. [ 13 ]
The half-sandwich complex in the Dötz reaction can be demetallated to give the corresponding aryl product, or it can be further employed in a nucleophilic-addition-to-arene strategy for the synthesis of fully substituted benzene rings. [ 14 ]
The Dötz reaction has been employed in the syntheses of natural products, as illustrated below. [ 15 ] [ 16 ]
In several cases, if the reagents are not sufficiently reactive or the conditions required for the Dötz mechanism to operate are not fulfilled, products derived from an interrupted Dötz reaction can be dominant.
For instance, if the substituents on the alkyne are too bulky, a cyclobutene product is observed instead. [ 17 ]
If the alkyne partner bears a ketone substituent and both R and R' are not too bulky, a favored conformation for an 8π-electron cyclization can become dominant, leading to a fused bicyclic lactone system. [ 18 ] [ 19 ] [ 20 ]
An alkene or a nucleophilic moiety on the side chain of the alkyne partner can trap the resulting ketene through a [2+2] cycloaddition or a nucleophilic addition, respectively. This strategy was applied in the syntheses of blastmycinone and antimycinone. [ 21 ] [ 22 ]
Fischer carbenes with an α-hydrogen can give a cyclopentenone product similar to that of the Pauson-Khand reaction . This is presumably because of a β-hydride elimination and reinsertion process. [ 23 ]
If an alkene moiety is present in the Fischer carbene but not in conjugation, cyclopropanation can be observed. This strategy was employed in a formal synthesis of carabrone. [ 24 ] [ 25 ]
|
https://en.wikipedia.org/wiki/Wulff–Dötz_reaction
|
In organic chemistry , the Wurtz reaction , named after Charles Adolphe Wurtz , is a coupling reaction in which two alkyl halides are treated with sodium metal to form a higher alkane.
The reaction is of little value except for intramolecular versions, such as 1,6-dibromohexane + 2 Na → cyclohexane + 2 NaBr.
A related reaction, which combines alkyl halides with aryl halides is called the Wurtz–Fittig reaction . Despite its very modest utility, the Wurtz reaction is widely cited as representative of reductive coupling . [ 1 ]
The reaction proceeds by an initial metal–halogen exchange , which is described with the following idealized stoichiometry: RX + 2 M → RM + MX
This step may involve the intermediacy of radical species R·. The conversion resembles the formation of a Grignard reagent . The RM intermediates have been isolated in several cases. The radical is susceptible to diverse reactions. The organometallic intermediate (RM) next reacts with the alkyl halide (RX) forming a new carbon–carbon covalent bond.
The process resembles an S N 2 reaction , but the mechanism is probably complex.
The reaction is intolerant of many functional groups, which would be attacked by sodium. For similar reasons, the reaction is conducted in unreactive polar aprotic solvents such as ether , dimethylformamide (DMF) or tetrahydrofuran (THF). In efforts to improve the reaction yields, other metals have also been tested to effect the Wurtz-like couplings: silver , zinc , iron , activated copper , indium , as well as a mixture of manganese and copper chloride .
Wurtz coupling is useful in closing small, especially three-membered, rings. In the cases of 1,3-, 1,4-, 1,5-, and 1,6- dihalides, Wurtz-reaction conditions lead to formation of cyclic products, although yields are variable. Under Wurtz conditions, vicinal dihalides yield alkenes, whereas geminal dihalides convert to alkynes. Bicyclobutane was prepared this way from 1-bromo-3-chlorocyclobutane in 95% yield. The reaction is conducted in refluxing dioxane, at which temperature the sodium is liquid. [ 2 ]
Although the Wurtz reaction is only of limited value in organic synthesis , analogous couplings are useful for coupling main group halides. Hexamethyldisilane arises efficiently by treatment of trimethylsilyl chloride with sodium: 2 (CH 3 ) 3 SiCl + 2 Na → (CH 3 ) 3 Si–Si(CH 3 ) 3 + 2 NaCl
Tetraphenyldiphosphine is prepared analogously: 2 (C 6 H 5 ) 2 PCl + 2 Na → (C 6 H 5 ) 2 P–P(C 6 H 5 ) 2 + 2 NaCl
Similar couplings have been applied to many main group halides. When applied to main group dihalides, rings and polymers result. Polysilanes and polystannanes are produced in this way. [ 3 ]
|
https://en.wikipedia.org/wiki/Wurtz_reaction
|
The Wurtz–Fittig reaction is the chemical reaction of an aryl halide , alkyl halides , and sodium metal to give substituted aromatic compounds. [ 1 ] Following the work of Charles Adolphe Wurtz on the sodium-induced coupling of alkyl halides (the Wurtz reaction ), Wilhelm Rudolph Fittig extended the approach to the coupling of an alkyl halide with an aryl halide. [ 2 ] [ 3 ] This modification of the Wurtz reaction is considered a separate process and is named for both scientists. [ 1 ]
The reaction works best for forming asymmetrical products if the halide reactants differ in their relative chemical reactivities . One way to accomplish this is to form the reactants with halogens of different periods . Typically the alkyl halide is made more reactive than the aryl halide, increasing the probability that the alkyl halide will form the organosodium bond first and thus act more effectively as a nucleophile toward the aryl halide. [ 4 ] Typically the reaction is used for the alkylation of aryl halides. With the use of ultrasound , sodium reacts with some aryl halides to produce biphenyl compounds. [ 5 ]
The mechanism of the Wurtz–Fittig reaction has not been the subject of modern investigations. The process was once proposed to involve the combination of alkyl and aryl radicals. [ 6 ] [ 7 ] Another mechanistic proposal invoked the generation of organosodium intermediates. [ 8 ] The reaction of sodium and chlorobenzene produces triphenylene , which supports a role for radicals. [ 8 ] A role for organosodium compounds is supported by indirect evidence. [ 7 ] [ 6 ] For example, addition of carbon dioxide to a mixture of sodium and isobutyl bromide results in the formation of 3-methylbutanoic acid after acid workup. [ 9 ] [ 10 ]
The Wurtz–Fittig reaction can be conducted using metals other than sodium. Some examples include potassium, iron, copper, and lithium. [ 11 ] When lithium is used, the reaction occurs with appreciable yield only under ultrasound. [ 12 ] Ultrasound is known to cleave halogen atoms from aryl and alkyl halides through a free-radical mechanism. [ 13 ]
The Wurtz–Fittig reaction has limited applicability, because it is plagued by side reactions including rearrangements and eliminations. [ 11 ] The reaction has been applied to the laboratory synthesis of some organosilicon compounds. [ 14 ] One example is the conversion of tetraethyl orthosilicate to the mono-tert-butoxy derivative in 40% yield as summarized in this idealized equation: [ 15 ]
Molten sodium was used.
Other organosilicon compounds synthesized using the Wurtz–Fittig reaction include silylated calixarenes [ 16 ] and vinylsilanes . [ 17 ]
|
https://en.wikipedia.org/wiki/Wurtz–Fittig_reaction
|
In topology and high energy physics , the Wu–Yang dictionary refers to the mathematical identification that allows back-and-forth translation between the concepts of gauge theory and those of differential geometry . The dictionary appeared in 1975 in an article by Tai Tsun Wu and C. N. Yang comparing electromagnetism and fiber bundle theory. [ 1 ] This dictionary has been credited as bringing mathematics and theoretical physics closer together. [ 2 ]
A crucial example of the success of the dictionary is that it allowed the understanding of monopole quantization in terms of Hopf fibrations . [ 3 ] [ 4 ]
Equivalences between fiber bundle theory and gauge theory were hinted at toward the end of the 1960s. In 1967, mathematician Andrzej Trautman started a series of lectures aimed at physicists and mathematicians at King's College London regarding these connections. [ 4 ]
Theoretical physicists Tai Tsun Wu and C. N. Yang , working at Stony Brook University , published a paper in 1975 on the mathematical framework of electromagnetism and the Aharonov–Bohm effect in terms of fiber bundles. A year later, mathematician Isadore Singer came to visit and brought a copy back to the University of Oxford . [ 2 ] [ 5 ] [ 6 ] Singer showed the paper to Michael Atiyah and other mathematicians, sparking a close collaboration between physicists and mathematicians. [ 2 ]
Yang also recounts a conversation that he had with one of the mathematicians that founded fiber bundle theory, Shiing-Shen Chern : [ 2 ]
In 1975, impressed with the fact that gauge fields are connections on fiber bundles, I drove to the house of Shiing-Shen Chern in El Cerrito , near Berkeley . (I had taken courses with him in the early 1940s when he was a young professor and I an undergraduate student at the National Southwest Associated University in Kunming , China . That was before fiber bundles had become important in differential geometry and before Chern had made history with his contributions to the generalized Gauss–Bonnet theorem and the Chern classes .) We had much to talk about: friends, relatives, China. When our conversation turned to fiber bundles, I told him that I had finally learned from Jim Simons the beauty of fiber-bundle theory and the profound Chern-Weil theorem . I said I found it amazing that gauge fields are exactly connections on fiber bundles, which the mathematicians developed without reference to the physical world. I added ‘this is both thrilling and puzzling, since you mathematicians dreamed up these concepts out of nowhere.' He immediately protested, ‘No, no. These concepts were not dreamed up. They were natural and real.'
In 1977, Trautman used these results to demonstrate an equivalence between a quantization condition for magnetic monopoles used by Paul Dirac back in 1931 and the Hopf fibration , a fibration of the 3-sphere proposed in the same year by mathematician Heinz Hopf . [ 4 ] Mathematician Jim Simons , discussing this equivalence with Yang, expressed that “Dirac had discovered trivial and nontrivial bundles before mathematicians.” [ 4 ]
In the original paper, Wu and Yang added sources (like the electric current ) to the dictionary next to a blank spot, indicating a lack of any equivalent concept on the mathematical side. During interviews, Yang recalled that Singer and Atiyah took great interest in this concept of sources, which was unknown to mathematicians but which physicists had known since the 19th century. Mathematicians started working on it, which led to the development of Donaldson theory by Simon Donaldson , a student of Atiyah. [ 7 ] [ 8 ]
The Wu-Yang dictionary relates terms in particle physics with terms in mathematics, specifically fiber bundle theory. Many versions and generalizations of the dictionary exist. Here is an example of a dictionary, which puts each physics term next to its mathematical analogue: [ 9 ]
Wu and Yang considered the description of an electron traveling around a cylinder in the presence of a magnetic field inside the cylinder (outside the cylinder the field vanishes, i.e. f μ ν = 0 {\displaystyle f_{\mu \nu }=0} ). According to the Aharonov–Bohm effect , the interference patterns shift by a factor exp ( − i Ω / Ω 0 ) {\displaystyle \exp(-i\Omega /\Omega _{0})} , where Ω {\displaystyle \Omega } is the magnetic flux and Ω 0 {\displaystyle \Omega _{0}} is the magnetic flux quantum . For two different fluxes a and b , the results are identical if Ω a − Ω b = N Ω 0 {\displaystyle \Omega _{a}-\Omega _{b}=N\Omega _{0}} , where N {\displaystyle N} is an integer. We define the operator S a b {\displaystyle S_{ab}} as the gauge transformation that brings the electron wave function from one configuration to the other ψ b = S b a ψ a {\displaystyle \psi _{b}=S_{ba}\psi _{a}} . For an electron that takes a path from point P to point Q , we define the phase factor as Φ Q P = exp ⁡ ( i e ℏ c ∫ P Q A μ d x μ ) {\displaystyle \Phi _{QP}=\exp \left({\tfrac {ie}{\hbar c}}\int _{P}^{Q}A_{\mu }\,dx^{\mu }\right)}
where A μ {\displaystyle A_{\mu }} is the electromagnetic four-potential . For the case of an SU(2) gauge field, we can make the substitution
where X k = − i σ k / 2 {\displaystyle X_{k}=-i\sigma _{k}/2} are the generators of SU(2) and σ k {\displaystyle \sigma _{k}} are the Pauli matrices . Under these concepts, Wu and Yang showed that the relation between the language of gauge theory and that of fiber bundles could be codified in the following dictionary: [ 2 ] [ 10 ] [ 11 ]
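The flux-periodicity statement above (identical interference for fluxes differing by an integer number of flux quanta) can be checked numerically. A small sketch in Python, using the convention in which the Aharonov–Bohm phase is 2πΩ/Ω0 with Ω0 = h/e; this is an assumed convention for illustration, equivalent to absorbing the factor of 2π into Ω0:

```python
import cmath

h = 6.62607015e-34      # Planck constant, J s
e = 1.602176634e-19     # elementary charge, C
Omega_0 = h / e         # magnetic flux quantum for charge e

def phase_factor(flux: float) -> complex:
    """Aharonov-Bohm interference factor for a given enclosed flux."""
    return cmath.exp(-1j * 2 * cmath.pi * flux / Omega_0)

Omega_a = 0.3 * Omega_0
Omega_b = Omega_a + 5 * Omega_0     # differs by N = 5 flux quanta
print(abs(phase_factor(Omega_a) - phase_factor(Omega_b)) < 1e-12)  # True
```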
|
https://en.wikipedia.org/wiki/Wu–Yang_dictionary
|
In crystallography , a Wyckoff position is any point in a set of points whose site symmetry groups (see below) are all conjugate subgroups one of another. [ 1 ] Crystallography tables give the Wyckoff positions for different space groups . [ 2 ]
The Wyckoff positions are named after Ralph Wyckoff , an American X-ray crystallographer who authored several books in the field. His 1922 book, The Analytical Expression of the Results of the Theory of Space Groups, [ 3 ] contained tables with the positional coordinates, both general and special, permitted by the symmetry elements. This book was the forerunner of International Tables for X-ray Crystallography, which first appeared in 1935.
For any point in a unit cell , given by fractional coordinates , one can apply a symmetry operation to the point. In some cases it will move to new coordinates, while in other cases the point will remain unaffected. For example, reflecting across a mirror plane will switch all the points left and right of the mirror plane, but points exactly on the mirror plane itself will not move. We can test every symmetry operation in the crystal's point group and keep track of whether the specified point is invariant under the operation or not. The (finite) list of all symmetry operations which leave the given point invariant taken together make up another group, which is known as the site symmetry group of that point. [ 4 ] By definition, all points with the same site symmetry group, or a conjugate site symmetry group, are assigned the same Wyckoff position.
The Wyckoff positions are designated by a letter, often preceded by the number of positions that are equivalent to a given position with that letter, in other words the number of positions in the unit cell to which the given position is moved by applying all the elements of the space group. For instance, 2a designates the positions left where they are by a certain subgroup, and indicates that other symmetry elements move the point to a second position in the unit cell. The letters are assigned in alphabetical order with earlier letters indicating positions with fewer equivalent positions, or in other words with larger site symmetry groups. [ 5 ] Some designations may apply to a finite number of points per unit cell (such as inversion points , improper rotation points, and intersections of rotation axes with mirror planes or other rotation axes), but other designations apply to infinite sets of points (such as generic points on rotation axes, screw axes , mirror planes, and glide planes , as well as general points not lying on any symmetry axis or plane).
Wyckoff positions are used in calculations of crystal properties. There are two types of positions: general and special.
General positions have a site symmetry of the trivial group and all correspond to the same Wyckoff position. Special positions have a non-trivial site symmetry group.
For a particular space group, one can check the Wyckoff positions using International Tables of Crystallography. [ 6 ] The table presents the multiplicity, Wyckoff letter and site symmetry for Wyckoff positions.
|
https://en.wikipedia.org/wiki/Wyckoff_positions
|
The Wyman-Gordon 50,000-ton forging press is a forging press located at the Wyman-Gordon Grafton Plant that was built as part of the Heavy Press Program by the United States Air Force . It was manufactured by Loewy Hydropress of Pittsburgh, Pennsylvania, and began operation in October 1955. [ 1 ]
|
https://en.wikipedia.org/wiki/Wyman-Gordon_50,000_ton_forging_press
|
The Wyss Center [ 1 ] is a not-for-profit neurotechnology research foundation in Geneva, Switzerland. [ 2 ]
The center was founded by Hansjörg Wyss , who previously created the Wyss Institute for Biologically Inspired Engineering in the United States. The founding director of the Wyss Center was neuroscientist Professor John P. Donoghue , who is best known for his work on human brain computer interfaces, [ 3 ] brain function and plasticity. The mission [ 4 ] of the Wyss Center is to advance understanding of the brain to realize therapies and improve lives.
The center is based at Campus Biotech (in the former Merck Serono building) in Geneva , Switzerland. The CEO of the Wyss Center is Erwin Böttinger, [ 5 ] who assumed responsibility on 1 April 2023.
The Wyss Center works in the areas of neurobiology, neuroimaging and neurotechnology [ 6 ] to develop clinical solutions from neuroscience research. [ 7 ] [ 8 ]
|
https://en.wikipedia.org/wiki/Wyss_Center_for_Bio_and_Neuroengineering
|
In geometry , the Wythoff symbol is a notation representing a Wythoff construction of a uniform polyhedron or plane tiling within a Schwarz triangle . It was first used by Coxeter , Longuet-Higgins and Miller in their enumeration of the uniform polyhedra. Later the Coxeter diagram was developed to mark uniform polytopes and honeycombs in n-dimensional space within a fundamental simplex.
A Wythoff symbol consists of three numbers and a vertical bar. It represents one uniform polyhedron or tiling, although the same tiling/polyhedron can have different Wythoff symbols from different symmetry generators. For example, the regular cube can be represented by 3 | 2 4 with O h symmetry , and 2 4 | 2 as a square prism with 2 colors and D 4h symmetry , as well as 2 2 2 | with 3 colors and D 2h symmetry.
With a slight extension, Wythoff's symbol can be applied to all uniform polyhedra. However, the construction methods do not lead to all uniform tilings in Euclidean or hyperbolic space.
The Wythoff construction begins by choosing a generator point on a fundamental triangle. This point must be chosen at equal distance from all edges that it does not lie on, and a perpendicular line is then dropped from it to each such edge.
The three numbers in Wythoff's symbol, p , q , and r , represent the corners of the Schwarz triangle used in the construction, which are π / p , π / q , and π / r radians respectively. The triangle is also represented with the same numbers, written ( p q r ). The vertical bar in the symbol specifies a categorical position of the generator point within the fundamental triangle according to the following:
In this notation the mirrors are labeled by the reflection-order of the opposite vertex. The p , q , r values are listed before the bar if the corresponding mirror is active.
A special use is the symbol | p q r , which designates the case where all mirrors are active but odd-numbered reflected images are ignored. The resulting figure has rotational symmetry only.
The generator point can be either on or off each mirror, making that mirror active or not. This distinction creates 8 (2³) possible forms, but the one in which the generator point is on all the mirrors is impossible; the symbol that would normally refer to that case is reused for the snub tilings.
The Wythoff symbol is functionally similar to the more general Coxeter-Dynkin diagram , in which each node represents a mirror and the arcs between them – marked with numbers – the angles between the mirrors. (An arc representing a right angle is omitted.) A node is circled if the generator point is not on the mirror.
The fundamental triangles are drawn in alternating colors as mirror images. The sequence of triangles ( p 3 2) changes from spherical ( p = 3, 4, 5), to Euclidean ( p = 6), to hyperbolic ( p ≥ 7). Hyperbolic tilings are shown as a Poincaré disk projection.
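This trichotomy follows from the angle sum of the Schwarz triangle: π/ p + π/ q + π/ r is greater than, equal to, or less than π in the spherical, Euclidean, and hyperbolic cases respectively. A minimal sketch of that test (the function name is my own):

```python
from fractions import Fraction

def schwarz_triangle_geometry(p, q, r):
    """Classify the Schwarz triangle (p q r) by its angle sum
    pi/p + pi/q + pi/r relative to pi."""
    s = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
    if s > 1:
        return "spherical"
    if s == 1:
        return "Euclidean"
    return "hyperbolic"

# The sequence (p 3 2) from the text:
for p in (3, 4, 5, 6, 7):
    print(p, schwarz_triangle_geometry(p, 3, 2))
# 3, 4, 5 -> spherical; 6 -> Euclidean; 7 -> hyperbolic
```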
|
https://en.wikipedia.org/wiki/Wythoff_symbol
|
The Wöhler synthesis is the conversion of ammonium cyanate into urea . This chemical reaction was described in 1828 by Friedrich Wöhler . [ 1 ] It is often cited as the starting point of modern organic chemistry . Although the Wöhler reaction concerns the conversion of ammonium cyanate, this salt appears only as an (unstable) intermediate. Wöhler demonstrated the reaction in his original publication with different sets of reactants: a combination of cyanic acid and ammonia , a combination of silver cyanate and ammonium chloride , a combination of lead cyanate and ammonia, and finally a combination of mercury cyanate and cyanatic ammonia (which is again cyanic acid with ammonia). [ 2 ]
The reaction can be demonstrated by starting with solutions of potassium cyanate and ammonium chloride which are mixed, heated and cooled again. An additional proof of the chemical transformation is obtained by adding a solution of oxalic acid which forms urea oxalate as a white precipitate . [ 3 ]
Alternatively the reaction can be carried out with lead cyanate and ammonia. [ 4 ] In either case the actual reaction taking place is a double displacement reaction to form ammonium cyanate; for the potassium cyanate route:

KOCN + NH4Cl → NH4OCN + KCl
Ammonium cyanate decomposes to ammonia and cyanic acid, which in turn react to produce urea:

NH4OCN ⇌ NH3 + HNCO
NH3 + HNCO → CO(NH2)2
Complexation with oxalic acid drives this chemical equilibrium to completion.
It is disputed whether Wöhler's synthesis sparked the downfall of the theory of vitalism , which held that organic matter possessed a certain vital force common to all living things. Prior to the Wöhler synthesis, the work of John Dalton and Jöns Jacob Berzelius had already convinced chemists that organic and inorganic matter obey the same chemical laws. It was not until 1845, when Kolbe reported another inorganic–organic conversion (of carbon disulfide to acetic acid ), that vitalism started to lose support. [ 5 ] [ 6 ] Wöhler also did not, as some textbooks have claimed, act as a "crusader" against vitalism. A 2000 survey by historian Peter Ramberg found that 90% of chemistry textbooks repeat some version of the Wöhler myth. [ 7 ]
|
https://en.wikipedia.org/wiki/Wöhler_synthesis
|
Wüstite (FeO, sometimes also written as Fe0.95O) is a mineral form of mostly iron(II) oxide found with meteorites and native iron . It has a grey colour with a greenish tint in reflected light . Wüstite crystallizes in the isometric-hexoctahedral crystal system in opaque to translucent metallic grains. It has a Mohs hardness of 5 to 5.5 and a specific gravity of 5.88. Wüstite is a typical example of a non-stoichiometric compound .
Wüstite was named after Fritz Wüst (1860–1938), a German metallurgist and founding director of the Kaiser-Wilhelm-Institut für Eisenforschung (presently Max Planck Institute for Iron Research GmbH ). [ 2 ]
In addition to its type locality in Germany, it has been reported from Disko Island , Greenland; the Jharia coalfield, Jharkhand , India; and as inclusions in diamonds in a number of kimberlite pipes. It also is reported from deep sea manganese nodules .
Its presence indicates a highly reducing environment .
Iron minerals on the Earth's surface are typically highly oxidized, forming hematite , with iron in the Fe3+ state, or, in somewhat less oxidizing environments, magnetite , with a mixture of Fe3+ and Fe2+. In geochemistry, wüstite defines a redox buffer : the point at which a rock is so reduced that Fe3+, and thus hematite, is absent.
As the redox state of a rock is further reduced, magnetite is converted to wüstite. This occurs by conversion of the Fe3+ ions in magnetite to Fe2+ ions. An example reaction, the wüstite–magnetite oxygen buffer, is presented below:

2 Fe3O4 ⇌ 6 FeO + O2
The formula for magnetite is more accurately written as FeO·Fe2O3 than as Fe3O4 . Magnetite is one part FeO and one part Fe2O3 , rather than a solid solution of wüstite and hematite . Magnetite is termed a redox buffer because, until all Fe3+ present in the system is converted to Fe2+, the oxide mineral assemblage of iron remains wüstite-magnetite. Furthermore, the redox state of the rock remains at the same oxygen fugacity . Buffering of the redox potential (E h ) in the Fe–O redox system in this way can be compared to buffering of the pH in the H+/OH− acid–base system of water.
Once the Fe 3+ is consumed, then oxygen must be stripped from the system to further reduce it and wüstite is converted to native iron. The oxide mineral equilibrium assemblage of the rock becomes wüstite–magnetite–iron.
In nature, systems chemically reduced enough to attain even a wüstite–magnetite assemblage are rare; they include carbonate-rich skarns , meteorites, fulgurites and lightning-affected rock, and perhaps the mantle where reduced carbon is present, as indicated by the presence of diamond or graphite .
The ratio of Fe 2+ to Fe 3+ within a rock determines, in part, the silicate mineral assemblage of the rock. Within a rock of a given chemical composition, iron enters minerals based on the bulk chemical composition and the mineral phases which are stable at that temperature and pressure. Iron may only enter minerals such as pyroxene and olivine if it is present as Fe 2+ ; Fe 3+ cannot enter the lattice of fayalite olivine and thus for every two Fe 3+ ions, one Fe 2+ is used and one molecule of magnetite is created.
In chemically reduced rocks, magnetite may be absent due to the propensity of iron to enter olivine, and wüstite may only be present if there is an excess of iron above what can be used by silica. Thus, wüstite may only be found in silica-undersaturated compositions which are also heavily chemically reduced, satisfying both the need to remove all Fe 3+ and to maintain iron outside of silicate minerals.
In nature, carbonate rocks, potentially carbonatite , kimberlites , carbonate-bearing melilitic rocks, and other rare alkaline rocks may satisfy these criteria. However, wüstite is not reported in most of these rocks in nature, potentially because the redox state necessary to drive magnetite to wüstite is so rare.
Approximately 2–3% of the world's energy budget is allocated to the Haber process for ammonia ( NH 3 ) production, which relies on wüstite-derived catalysts. The industrial catalyst is derived from finely ground iron powder, which is usually obtained by reduction of high-purity magnetite (Fe 3 O 4 ). The pulverized iron metal is burnt (oxidized) to give magnetite or wüstite of a defined particle size. The magnetite (or wüstite) particles are then partially reduced, removing some of the oxygen in the process. The resulting catalyst particles consist of a core of magnetite, encased in a shell of wüstite, which in turn is surrounded by an outer shell of iron metal. The catalyst maintains most of its bulk volume during the reduction, resulting in a highly porous high-surface-area material, which enhances its effectiveness as a catalyst. [ 3 ] [ 4 ]
According to Vagn Fabritius Buchwald, wüstite was an important component during the Iron Age to facilitate the process of forge welding . In ancient times, when blacksmithing was performed using a charcoal forge , the deep charcoal pit in which the steel or iron was placed provided a highly reducing, virtually oxygen-free environment, producing a thin wüstite layer on the metal. At the welding temperature, the iron becomes highly reactive with oxygen, and will spark and form thick layers of slag when exposed to the air, which makes welding the iron or steel nearly impossible. To solve this problem, ancient blacksmiths would toss small amounts of sand onto the white-hot metal. The silica in the sand reacts with the wüstite to form fayalite , which melts just below the welding temperature. This produced an effective flux that shielded the metal from oxygen and helped extract oxides and impurities, leaving a pure surface that can weld readily. Although the ancients had no knowledge of how this worked, the ability to weld iron contributed to the movement out of the Bronze Age and into the Iron Age. [ 5 ]
Wüstite forms a solid solution with periclase (MgO), in which iron substitutes for magnesium. Periclase, when hydrated, forms brucite (Mg(OH)2), a common product of serpentinite metamorphic reactions .
Oxidation and hydration of wüstite forms goethite and limonite .
Zinc, aluminium, and other metals may substitute for iron in wüstite.
Wüstite in dolomite skarns may be related to siderite (iron(II) carbonate), wollastonite , enstatite , diopside , and magnesite .
|
https://en.wikipedia.org/wiki/Wüstite
|
Włodzimierz Kołos (1928–1996) was a Polish chemist and physicist who was one of the founders of modern quantum chemistry , and who pioneered accurate calculations of the electronic structure of molecules. [ 1 ]
Kołos was born on September 6, 1928, in Pinsk . He received his M.Sc. in chemistry in 1950 and began his academic career as an organic chemist. However, he was soon attracted to theoretical physics. He began his graduate studies in theoretical physics in 1951 and completed his thesis in only two years. The University of Warsaw and the Polish Chemical Society award the Kołos Medal every two years to commemorate his life and career.
Kołos is best known for his work on the theory of electron correlation in molecules. In 1958 he went to the University of Chicago , at a time when powerful computers were first becoming available for scientific work. He developed a new computer program to solve the Schrödinger equation for the hydrogen molecule to unprecedented accuracy. In the early 1960s, Kołos and Wolniewicz published a number of pioneering papers on the potential energy curves of the hydrogen molecule, incorporating several corrections to the Born–Oppenheimer approximation : adiabatic, non-adiabatic, and relativistic terms. One result attracted particular attention [ citation needed ] : the calculated dissociation energy disagreed with the best experimental data then available, from Gerhard Herzberg 's group. A few years later Herzberg improved his experiment and obtained a new result that agreed with the theoretical prediction. This was the first time that quantum mechanical calculations on a molecule had proved more accurate than the best experiments [ citation needed ] . Herzberg himself emphasized the importance of this in his Nobel Prize lecture.
Kołos established a strong research group in molecular quantum chemistry in Warsaw , and made many other important contributions, particularly in the field of intermolecular forces. He made important contributions to the development of the symmetry-adapted perturbation theory of intermolecular forces and carried out pioneering studies on the nonadditivity of intermolecular forces. He was a member of the Polish Academy of Sciences , the International Academy of Quantum Molecular Science and the Academia Europaea .
|
https://en.wikipedia.org/wiki/Włodzimierz_Kołos
|
Włodzimierz Kuperberg (born January 19, 1941) is a professor of mathematics at Auburn University , with research interests in geometry and topology .
Although Kuperberg is Polish-American , he was born in what is now Belarus , where his parents and older siblings had traveled east to escape World War II . In 1946, the family returned to Poland, resettling in Szczecin , where Kuperberg grew up. He began his studies at the University of Warsaw in 1959, and received his Ph.D. from the same institution in 1969, under the supervision of Karol Borsuk . During his time at Warsaw, he published three high school textbooks in Polish. Kuperberg left Poland due to the anti-semitic aspects of the 1967–1968 Polish political crisis , and worked at Stockholm University until 1972, when he assumed a visiting position at the University of Houston . In 1974, Kuperberg took a position at Auburn, where he has remained since.
Kuperberg married mathematician Krystyna Kuperberg in 1964, and their son Greg Kuperberg is also a professional mathematician, while their daughter Anna Kuperberg is a photographer.
Although much of Kuperberg's early mathematical work is in topology , he is best known today for his work in geometry , and in particular on packing and covering problems . His first paper in this area (1982) showed that the ratio of packing density to covering density of any convex body in the plane is at least 3/4. His 1990 paper on double lattices with his son Greg provided the best lower bound then known for packing densities of arbitrary two-dimensional convex bodies; with Bezdek (1990) he calculated the exact packing density of the infinite cylinder , which, prior to Hales' 1998 solution of the Kepler conjecture , was the first nontrivial calculation of the packing density of any three-dimensional convex body.
As a high school student, Kuperberg won first prize in the 10th Polish Mathematical Olympiad, leading him to enroll in mathematics when he began his college studies. While at the University of Warsaw he received both the university's Excellence in Teaching and Research Award and the Polish Mathematical Society Award for Young Mathematicians. He was honored again at Auburn by a five-year Alumni Professor chairship in 1996, and again by an Erdős Professorship in 1999, which he used to visit the Hungarian Academy of Sciences in Budapest . In 2003, his colleagues presented him with a festschrift .
|
https://en.wikipedia.org/wiki/Włodzimierz_Kuperberg
|
X-PLOR is a computer software package for computational structural biology originally developed by Axel T. Brunger at Yale University . It was first published in 1987 as an offshoot of CHARMM , a similar program that ran on supercomputers made by Cray Inc. It is used in X-ray crystallography and in nuclear magnetic resonance (NMR) spectroscopy analysis of proteins. [ 1 ]
X-PLOR is a highly sophisticated program that provides an interface between theoretical foundations and experimental data in structural biology, with specific emphasis on X-ray crystallography and nuclear magnetic resonance spectroscopy in solution of biological macro-molecules. It is intended mainly for researchers and students in the fields of computational chemistry , structural biology , and computational molecular biology.
|
https://en.wikipedia.org/wiki/X-PLOR
|
X-Ray is a reference tool, introduced in September 2011, [ 1 ] that is incorporated in the Amazon Kindle Touch and later models, Kindle apps for mobile platforms, Amazon Fire tablets, Fire TVs and Amazon Prime Video streaming apps, and the discontinued Fire Phone . On the Kindle, general reference information is preloaded into a small file on the Kindle device or app, so that when the feature is used, there is no need to access the Internet to refer to such content as dictionary and encyclopedic information, metadata, or biographical info about actors featured in a film. [ 1 ]
X-Ray is a tool to explore the contents of books or other media in more depth. As Amazon describes X-Ray for Kindle: "X-Ray lets you explore the 'bones of a book.' You can also view more detailed information from Wikipedia and from Shelfari , Amazon's community-powered encyclopedia for book lovers." [ 2 ] After Shelfari closed in 2016, information from Goodreads was displayed in the X-Ray tool. [ 3 ]
X-Ray operates like a concordance , listing most commonly used character names, locations, themes, or ideas, which are sorted into the two main categories of "People" and "Terms". For example, readers can use it to look up the first occurrence of characters, which is often helpful in many-charactered novels. [ 4 ]
|
https://en.wikipedia.org/wiki/X-Ray_(Amazon)
|
The X-Ray Imaging and Spectroscopy Mission ( XRISM , pronounced 'crism' [ 3 ] or 'krizz-em', [ 4 ] as if the X was a chi ), is an X-ray space telescope . It is a mission of the Japan Aerospace Exploration Agency (JAXA) in partnership with NASA and ESA , intended to study galaxy clusters , outflows from galaxy nuclei , and dark matter . [ 5 ] [ 6 ]
XRISM is a next-generation X-ray astronomy spacecraft, succeeding the Chandra X-ray Observatory and XMM-Newton . [ 2 ] [ 7 ] XRISM is intended to fill a gap in observational capabilities between the anticipated retirement of those older X-ray telescopes and the future launch of the planned Advanced Telescope for High Energy Astrophysics (ATHENA). The Hitomi X-ray telescope was intended to fill that gap, but it was destroyed a few weeks after launch in 2016. [ 2 ] [ 7 ] XRISM takes over Hitomi's role of filling the expected observational gap.
During its early design phase, XRISM was known as the " ASTRO-H Successor " or " ASTRO-H2 ". After the loss of Hitomi, the name X-ray Astronomy Recovery Mission ( XARM ) was used, the R in the acronym referring to recovering Hitomi's capabilities. The name was changed to XRISM in 2018 when JAXA formally initiated the project team. [ 8 ]
With the retirement of Suzaku in September 2015, and with the detectors onboard the Chandra X-ray Observatory and XMM-Newton gradually aging after more than 15 years of operation, the failure of Hitomi meant that X-ray astronomers would face a 13-year gap in soft X-ray observation until the launch of ATHENA in 2035. [ Note 1 ] [ 2 ] [ 7 ] [ 9 ] This would have been a major setback for the international community, [ 10 ] as studies by large-scale observatories at other wavelengths, such as the James Webb Space Telescope and the Thirty Meter Telescope , were set to commence in the early 2020s, while no telescope would cover the most important part of X-ray astronomy. [ 2 ] [ 7 ] A lack of new missions could also deprive young astronomers of the chance to gain hands-on experience by participating in a project. [ 2 ] [ 7 ] Along with these reasons, the motivation to recover the science expected from Hitomi became the rationale for initiating the XRISM project. XRISM has been recommended by ISAS's Advisory Council for Research and Management, the High Energy AstroPhysics Association in Japan, the NASA Astrophysics Subcommittee, the NASA Science Committee, and the NASA Advisory Council. [ 7 ] [ 11 ]
With its successful launch in September 2023, [ 1 ] XRISM is expected to cover the science that was lost with Hitomi , such as the structure formation of the universe, feedback from galaxies/active galaxy nuclei, and the history of material circulation from stars to galaxy clusters. [ 6 ] The space telescope will also take over Hitomi 's role as a technology demonstrator for the European Advanced Telescope for High Energy Astrophysics (ATHENA) telescope. [ 9 ] [ 12 ] [ 13 ] Multiple space agencies, including NASA and the European Space Agency (ESA) are participating in the mission. [ 14 ] In Japan, the project is led by JAXA's Institute of Space and Astronautical Science (ISAS) division, and U.S. participation is led by NASA's Goddard Space Flight Center (GSFC). The U.S. contribution is expected to cost around US$80 million, which is about the same amount as the contribution to Hitomi . [ 15 ] [ 16 ]
The X-ray Imaging and Spectroscopy Mission is one of the first ISAS projects to have a separate project manager (PM) and principal investigator (PI). This measure was taken as part of ISAS's reform of project management to prevent a recurrence of the Hitomi accident. [ 7 ] In traditional ISAS missions, the PM was also responsible for tasks that would typically be allocated to PIs in a NASA mission.
While Hitomi had an array of instruments spanning from soft X-rays to soft gamma rays, XRISM will focus on the Resolve instrument (equivalent to Hitomi 's soft X-ray spectrometer), [ 17 ] as well as Xtend (SXI), which is highly complementary to Resolve. [ 18 ] The elimination of a hard X-ray telescope was justified by the 2012 launch of NASA's NuSTAR satellite, something that did not exist when Hitomi (then known as the New X-Ray Telescope, NeXT) was initially formulated. [ 19 ] [ Note 2 ] NuSTAR's spatial and energy resolution is comparable to that of Hitomi 's hard X-ray instruments. [ 19 ] Once XRISM 's operation starts, collaborative observations with NuSTAR will likely be essential. [ 6 ] Meanwhile, the scientific value of the boundary between the soft and hard X-ray bands has been noted; therefore the option of upgrading XRISM 's instruments to be partially capable of hard X-ray observation is under consideration. [ 18 ] [ 19 ] [ needs update ]
A hard X-ray telescope proposal with abilities surpassing Hitomi was proposed in 2017. [ 20 ] The FORCE (Focusing On Relativistic universe and Cosmic Evolution) space telescope is a candidate for the next ISAS competitive medium class mission. If selected, FORCE would be launched after the mid-2020s, with an eye towards conducting simultaneous observations with ATHENA. [ 20 ] [ 6 ]
Following the premature termination of the Hitomi mission, on 14 June 2016 JAXA announced its proposal to rebuild the satellite. [ 21 ] The XARM pre-project preparation team was formed in October 2016. [ 22 ] On the U.S. side, formulation began in the summer of 2017. [ 5 ] In June 2017, ESA announced that it would participate in XARM as a mission of opportunity. [ 14 ]
XRISM carries two instruments for studying the soft X-ray energy range, Resolve and Xtend. The satellite has telescopes for each of the instruments, SXT-I (Soft X-ray Telescope for Imager) and SXT-S (Soft X-ray Telescope for Spectrometer). [ 7 ] The pair of telescopes have a focal length of 5.6 m (18 ft). [ 2 ]
Resolve is an X-ray microcalorimeter developed by NASA's Goddard Space Flight Center . [ 24 ] The instrument is a duplicate of its Hitomi predecessor and reuses some space-qualified hardware left over from the manufacture of Hitomi 's SXS. [ 25 ]
Xtend is an X-ray CCD camera. Xtend improves on the energy resolution of Hitomi 's SXI. [ 26 ]
JAXA launched XRISM on 6 September 2023 at 23:42 UTC (7 September 08:42 Japan Standard Time) using an H-IIA rocket from Tanegashima Space Center . XRISM was successfully inserted into orbit on the same day, and the accompanying launch payload, SLIM , began its multi-month journey to the Moon. [ 1 ]
A protective shutter over the Resolve instrument's detector has failed to open. This does not prevent the instrument from operating, but limits it to observing X-rays of energy 1800 eV and above, as opposed to the planned 300 eV . [ 27 ] [ 28 ] A similar shutter over Xtend has opened normally.
|
https://en.wikipedia.org/wiki/X-Ray_Imaging_and_Spectroscopy_Mission
|
X-bracing is a structural engineering practice where the lateral load on a building is reduced by transferring the load into the exterior columns.
X-bracing was used in the construction of the 1908 Singer Building , then the tallest building in the world. [ 1 ]
Some skyscrapers by engineer Fazlur Khan , such as the 1969 John Hancock Center , have a distinctive X-bracing exterior, allowing for both higher performance from tall structures and the ability to open up the inside floorplan (and usable floor space) if the architect desires. [ 2 ]
|
https://en.wikipedia.org/wiki/X-bracing
|
The X-factor in astrophysics, often labeled X CO , is an empirically determined proportionality constant which converts carbon monoxide (CO) emission line brightness to molecular hydrogen (H 2 ) mass. The term X-factor was coined in a 1983 paper titled "Gamma-rays from atomic and molecular gas in the first galactic quadrant" and published in The Astrophysical Journal . [ 1 ] [ 2 ]
Calibrating X CO requires an independent method of determining the amount of molecular hydrogen in a given astrophysical region. While direct emission from molecular hydrogen is difficult to observe, there are several other ways of inferring molecular hydrogen mass. [ 3 ]
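As a sketch of how the conversion is used once calibrated, the snippet below applies the commonly quoted Milky Way disc value of X CO to an integrated CO line intensity; the numbers are illustrative and the function name is my own.

```python
# Commonly quoted Galactic disc X-factor, in cm^-2 (K km/s)^-1;
# the appropriate value varies between galaxies and environments.
X_CO = 2.0e20

def h2_column_density(w_co, x_co=X_CO):
    """Convert integrated CO(1-0) intensity W_CO [K km/s] to an
    H2 column density N(H2) [cm^-2] via N(H2) = X_CO * W_CO."""
    return x_co * w_co

# Hypothetical line of sight with W_CO = 50 K km/s:
print(f"N(H2) = {h2_column_density(50.0):.1e} cm^-2")   # 1.0e+22 cm^-2
```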
|
https://en.wikipedia.org/wiki/X-factor_(astrophysics)
|
X-inactivation (also called Lyonization , after English geneticist Mary Lyon ) is a process by which one of the copies of the X chromosome is inactivated in therian female mammals . The inactive X chromosome is silenced by being packaged into a transcriptionally inactive structure called heterochromatin . As nearly all female mammals have two X chromosomes, X-inactivation prevents them from having twice as many X chromosome gene products as males, who only possess a single copy of the X chromosome (see dosage compensation ).
The choice of which X chromosome will be inactivated in a particular embryonic cell is random in placental mammals such as humans, but once an X chromosome is inactivated it will remain inactive throughout the lifetime of the cell and its descendants in the organism (its cell line). The result is that the choice of inactivated X chromosome in all the cells of the organism is a random distribution, often with about half the cells having the paternal X chromosome inactivated and half with an inactivated maternal X chromosome; but commonly, X-inactivation is unevenly distributed across the cell lines within one organism ( skewed X-inactivation ).
Unlike the random X-inactivation in placental mammals, inactivation in marsupials applies exclusively to the paternally-derived X chromosome.
The paragraphs below describe the cycle as it occurs in rodents; they do not necessarily reflect X-inactivation in the majority of mammals.
X-inactivation is part of the activation cycle of the X chromosome throughout the female life. The egg and the fertilized zygote initially use maternal transcripts, and the whole embryonic genome is silenced until zygotic genome activation . Thereafter, all mouse cells undergo an early, imprinted inactivation of the paternally-derived X chromosome in 4–8 cell stage embryos . [ 3 ] [ 4 ] [ 5 ] [ 6 ] The extraembryonic tissues (which give rise to the placenta and other tissues supporting the embryo) retain this early imprinted inactivation, and thus only the maternal X chromosome is active in these tissues.
In the early blastocyst , this initial, imprinted X-inactivation is reversed in the cells of the inner cell mass (which give rise to the embryo), and in these cells both X chromosomes become active again. Each of these cells then independently and randomly inactivates one copy of the X chromosome. [ 5 ] This inactivation event is irreversible during the lifetime of the individual, with the exception of the germline. In the female germline before meiotic entry, X-inactivation is reversed, so that after meiosis all haploid oocytes contain a single active X chromosome.
Xi denotes the inactive and Xa the active X chromosome; X P denotes the paternal and X M the maternal X chromosome. When the egg (carrying X M ) is fertilized by a sperm (carrying a Y or an X P ), a diploid zygote forms. From the zygote, through the adult stage, to the next generation of eggs, the X chromosome thus undergoes a cycle of inactivation and reactivation.
The X activation cycle has been studied most intensively in mice, though there are multiple studies in humans; as most of the evidence comes from mice, the scheme above represents the events in mice. The completion of meiosis is simplified here for clarity. Steps 1–4 can be studied in in vitro fertilized embryos and in differentiating stem cells; X-reactivation happens in the developing embryo, and the subsequent steps (6–7) inside the female body, and are therefore much harder to study.
The timing of each process depends on the species, and in many cases the precise timing is actively debated.
The descendants of each cell which inactivated a particular X chromosome will also inactivate that same chromosome. This phenomenon can be observed in the coloration of tortoiseshell cats , in which females are heterozygous for an X-linked pigment gene. It should not be confused with mosaicism , a term that specifically refers to differences in the genotype of various cell populations in the same individual; X-inactivation is an epigenetic change that results in a different phenotype, not a change at the genotypic level. For an individual cell or lineage the inactivation is therefore skewed or ' non-random ', and this can give rise to mild symptoms in female 'carriers' of X-linked genetic disorders. [ 12 ]
Typical females possess two X chromosomes, and in any given cell one chromosome will be active (designated as Xa) and one will be inactive (Xi). However, studies of individuals with extra copies of the X chromosome show that in cells with more than two X chromosomes there is still only one Xa, and all the remaining X chromosomes are inactivated. This indicates that the default state of the X chromosome in females is inactivation, but one X chromosome is always selected to remain active. [ 13 ]
It is understood that X-chromosome inactivation is a random process, occurring at about the time of gastrulation in the epiblast (cells that will give rise to the embryo). The maternal and paternal X chromosomes have an equal probability of inactivation. This would suggest that women would be expected to suffer from X-linked disorders approximately 50% as often as men (because women have two X chromosomes, while men have only one); however, in actuality, the occurrence of these disorders in females is much lower than that. One explanation for this disparity is that 12–20% [ 14 ] of genes on the inactivated X chromosome remain expressed, thus providing women with added protection against defective genes coded by the X-chromosome. Some [ who? ] suggest that this disparity must be evidence of preferential (non-random) inactivation. Preferential inactivation of the paternal X-chromosome occurs in both marsupials and in cell lineages that form the membranes surrounding the embryo, [ 15 ] whereas in placental mammals either the maternally or the paternally derived X-chromosome may be inactivated in different cell lines. [ 16 ]
The time period for X-chromosome inactivation explains this disparity. Inactivation occurs in the epiblast during gastrulation, which gives rise to the embryo. [ 17 ] Inactivation occurs on a cellular level, resulting in a mosaic expression, in which patches of cells have an inactive maternal X-chromosome, while other patches have an inactive paternal X-chromosome. For example, a female heterozygous for haemophilia (an X-linked disease) would have about half of her liver cells functioning properly, which is typically enough to ensure normal blood clotting. [ 18 ] [ 19 ] Chance could result in significantly more dysfunctional cells; however, such statistical extremes are unlikely. Genetic differences on the chromosome may also render one X-chromosome more likely to undergo inactivation. Also, if one X-chromosome has a mutation hindering its growth or rendering it nonviable, cells which randomly inactivated that X will have a selective advantage over cells which randomly inactivated the normal allele. Thus, although inactivation is initially random, cells that inactivate a normal allele (leaving the mutated allele active) will eventually be overgrown and replaced by functionally normal cells in which nearly all have the same X-chromosome activated. [ 18 ]
It is hypothesized that there is an autosomally-encoded 'blocking factor' which binds to the X chromosome and prevents its inactivation. [ 20 ] The model postulates that there is a limiting blocking factor, so once the available blocking factor molecule binds to one X chromosome the remaining X chromosome(s) are not protected from inactivation. This model is supported by the existence of a single Xa in cells with many X chromosomes and by the existence of two active X chromosomes in cell lines with twice the normal number of autosomes. [ 21 ]
Sequences at the X inactivation center ( XIC ), present on the X chromosome, control the silencing of the X chromosome. The hypothetical blocking factor is predicted to bind to sequences within the XIC.
The effect of female X heterozygosity is apparent in some localized traits, such as the unique coat pattern of a calico cat . It can be more difficult, however, to fully understand the expression of un-localized traits in these females, such as the expression of disease.
Since males only have one copy of the X chromosome, all expressed X-chromosomal genes (or alleles , in the case of multiple variant forms for a given gene in the population) are located on that copy of the chromosome. Females, however, will primarily express the genes or alleles located on the X-chromosomal copy that remains active. Considering the situation for one gene or multiple genes causing individual differences in a particular phenotype (i.e., causing variation observed in the population for that phenotype), in homozygous females it does not particularly matter which copy of the chromosome is inactivated, as the alleles on both copies are the same. However, in females that are heterozygous at the causal genes, the inactivation of one copy of the chromosome over the other can have a direct impact on their phenotypic value. Because of this phenomenon, greater phenotypic variation is observed in females that are heterozygous at the involved gene or genes than in females that are homozygous at that gene or those genes. [ 22 ] There are many different ways in which the phenotypic variation can play out. In many cases, heterozygous females may be asymptomatic or only present minor symptoms of a given disorder, such as with X-linked adrenoleukodystrophy. [ 23 ]
The differentiation of phenotype in heterozygous females is furthered by the presence of X-inactivation skewing. Typically, each X-chromosome is silenced in half of the cells, but this process is skewed when preferential inactivation of a chromosome occurs. It is thought that skewing happens either by chance or by a physical characteristic of a chromosome that may cause it to be silenced more or less often, such as an unfavorable mutation. [ 24 ] [ 25 ]
On average, each X chromosome is inactivated in half of the cells, although 5-20% of women display X-inactivation skewing. [ 24 ] In cases where skewing is present, a broad range of symptom expression can occur, resulting in expression varying from minor to severe depending on the skewing proportion. An extreme case of this was seen where monozygotic female twins had extreme variance in expression of Menkes disease (an X-linked disorder) resulting in the death of one twin while the other remained asymptomatic. [ 26 ]
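Chance skewing is a natural consequence of inactivation happening independently in a small pool of progenitor cells. The following sketch, with purely illustrative pool sizes and threshold, simulates this and shows that smaller pools make extreme skewing by chance alone more likely.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_skewing(n_progenitors=16, n_embryos=100_000, threshold=0.80):
    """Each progenitor cell independently silences the paternal or maternal
    X with probability 1/2; all descendants keep that choice. Returns the
    fraction of simulated embryos in which >= `threshold` of cells share
    the same inactive X (a common operational definition of skewing)."""
    paternal_off = rng.binomial(n_progenitors, 0.5,
                                size=n_embryos) / n_progenitors
    skew = np.maximum(paternal_off, 1.0 - paternal_off)
    return np.mean(skew >= threshold)

# Smaller progenitor pools make extreme skewing more likely by chance alone.
for n in (8, 16, 32):
    print(n, simulate_skewing(n_progenitors=n))
```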
It is thought that X-inactivation skewing could be caused by issues in the mechanism that causes inactivation, or by issues in the chromosome itself. [ 24 ] [ 25 ] However, the link between phenotype and skewing is still being questioned, and should be examined on a case-by-case basis. A study looking at both symptomatic and asymptomatic females who were heterozygous for Duchenne and Becker muscular dystrophies (DMD) found no apparent link between transcript expression and skewed X-Inactivation. The study suggests that both mechanisms are independently regulated, and there are other unknown factors at play. [ 27 ]
The X-inactivation center (or simply XIC) on the X chromosome is necessary and sufficient to cause X-inactivation. Chromosomal translocations which place the XIC on an autosome lead to inactivation of the autosome, and X chromosomes lacking the XIC are not inactivated. [ 28 ] [ 29 ]
The XIC contains four non-translated RNA genes, Xist , Tsix , Jpx and Ftx , which are involved in X-inactivation. The XIC also contains binding sites for both known and unknown regulatory proteins . [ 30 ]
The X-inactive specific transcript ( Xist ) gene encodes a large non-coding RNA that is responsible for mediating the specific silencing of the X chromosome from which it is transcribed. [ 31 ] The inactive X chromosome is coated by Xist RNA, [ 32 ] whereas the Xa is not (See Figure to the right). X chromosomes that lack the Xist gene cannot be inactivated. [ 33 ] Artificially placing and expressing the Xist gene on another chromosome leads to silencing of that chromosome. [ 34 ] [ 28 ]
Prior to inactivation, both X chromosomes weakly express Xist RNA from the Xist gene. During the inactivation process, the future Xa ceases to express Xist, whereas the future Xi dramatically increases Xist RNA production. On the future Xi, the Xist RNA progressively coats the chromosome, spreading out from the XIC; [ 34 ] the Xist RNA does not localize to the Xa. The silencing of genes along the Xi occurs soon after coating by Xist RNA.
Like Xist, the Tsix gene encodes a large RNA which is not believed to encode a protein. The Tsix RNA is transcribed antisense to Xist, meaning that the Tsix gene overlaps the Xist gene and is transcribed on the opposite strand of DNA from the Xist gene. [ 29 ] Tsix is a negative regulator of Xist; X chromosomes lacking Tsix expression (and thus having high levels of Xist transcription) are inactivated much more frequently than normal chromosomes.
Like Xist, prior to inactivation, both X chromosomes weakly express Tsix RNA from the Tsix gene. Upon the onset of X-inactivation, the future Xi ceases to express Tsix RNA (and increases Xist expression), whereas Xa continues to express Tsix for several days.
Rep A is a long non-coding RNA that works with another long non-coding RNA, Xist, in X-inactivation. Rep A inhibits the function of Tsix, the antisense of Xist, in conjunction with eliminating the expression of Xite. It promotes methylation of the Tsix region by attracting PRC2, thus inactivating one of the X chromosomes. [ 30 ]
The inactive X chromosome does not express the majority of its genes, unlike the active X chromosome. This is due to the silencing of the Xi by repressive heterochromatin , which compacts the Xi DNA and prevents the expression of most genes.
Compared to the Xa, the Xi has high levels of DNA methylation , low levels of histone acetylation , low levels of histone H3 lysine-4 methylation , and high levels of histone H3 lysine-9 methylation and of the histone H3 lysine-27 methylation mark placed by the PRC2 complex recruited by Xist , all of which are associated with gene silencing. [ 35 ] PRC2 regulates chromatin compaction and chromatin remodeling in several processes including the DNA damage response. [ 36 ] Additionally, a histone variant called macroH2A ( H2AFY ) is exclusively found on nucleosomes along the Xi. [ 37 ] [ 38 ]
DNA packaged in heterochromatin, such as the Xi, is more condensed than DNA packaged in euchromatin , such as the Xa. The inactive X forms a discrete body within the nucleus called a Barr body . [ 39 ] The Barr body is generally located on the periphery of the nucleus , is late replicating within the cell cycle , and, as it contains the Xi, contains heterochromatin modifications and the Xist RNA.
A fraction of the genes along the X chromosome escape inactivation on the Xi. The Xist gene is expressed at high levels on the Xi and is not expressed on the Xa. [ 40 ] Many other genes escape inactivation; some are expressed equally from the Xa and Xi, and others, while expressed from both chromosomes, are still predominantly expressed from the Xa. [ 41 ] [ 42 ] [ 43 ] Up to one quarter of genes on the human Xi are capable of escape. [ 41 ] Studies in the mouse suggest that in any given cell type, 3% to 15% of genes escape inactivation, and that the identity of the escaping genes varies between tissues. [ 42 ] [ 43 ]
Many of the genes which escape inactivation are present along regions of the X chromosome which, unlike the majority of the X chromosome, contain genes also present on the Y chromosome . These regions are termed pseudoautosomal regions , as individuals of either sex receive two copies of every gene in them (as with an autosome), unlike the majority of genes along the sex chromosomes. Since no dosage compensation is needed for females in these regions, it is postulated that they have evolved mechanisms to escape X-inactivation. The genes of the pseudoautosomal regions of the Xi do not have the typical modifications of the Xi and have little Xist RNA bound.
The existence of genes along the inactive X which are not silenced explains the defects in humans with atypical numbers of the X chromosome, such as Turner syndrome (X0, caused by SHOX gene [ 44 ] ) or Klinefelter syndrome (XXY). Theoretically, X-inactivation should eliminate the differences in gene dosage between affected individuals and individuals with a typical chromosome complement. In affected individuals, however, X-inactivation is incomplete and the dosage of these non-silenced genes will differ as they escape X-inactivation, similar to an autosomal aneuploidy .
The precise mechanisms that control escape from X-inactivation are not known, but silenced and escape regions have been shown to have distinct chromatin marks. [ 42 ] [ 45 ] It has been suggested that escape from X-inactivation might be mediated by expression of long non-coding RNA (lncRNA) within the escaping chromosomal domains. [ 2 ]
Stanley Michael Gartler used X-chromosome inactivation to demonstrate the clonal origin of cancers. Examining normal tissues and tumors from females heterozygous for isoenzymes of the sex-linked G6PD gene demonstrated that tumor cells from such individuals express only one form of G6PD, whereas normal tissues are composed of a nearly equal mixture of cells expressing the two different phenotypes. This pattern suggests that a single cell, and not a population, grows into a cancer. [ 46 ] However, this pattern has since been shown not to hold for many cancer types, suggesting that some cancers may be polyclonal in origin. [ 47 ]
In addition, measuring the methylation (inactivation) status of the polymorphic human androgen receptor (HUMARA) gene, located on the X chromosome, is considered the most accurate method to assess clonality in female cancer biopsies. [ 48 ] A great variety of tumors has been tested by this method; some, such as renal cell carcinoma, [ 49 ] were found to be monoclonal, while others (e.g. mesothelioma [ 50 ] ) were reported to be polyclonal.
Researchers have also investigated using X-chromosome inactivation to silence the activity of autosomal chromosomes. For example, Jiang et al. inserted a copy of the Xist gene into one copy of chromosome 21 in stem cells derived from an individual with trisomy 21 ( Down syndrome ). [ 51 ] The inserted Xist gene induces Barr body formation, triggers stable heterochromatin modifications, and silences most of the genes on the extra copy of chromosome 21. In these modified stem cells, the Xist-mediated gene silencing seems to reverse some of the defects associated with Down syndrome.
In 1959 Susumu Ohno showed that the two X chromosomes of mammals were different: one appeared similar to the autosomes ; the other was condensed and heterochromatic. [ 52 ] This finding suggested, independently to two groups of investigators, that one of the X chromosomes underwent inactivation.
In 1961, Mary Lyon proposed the random inactivation of one female X chromosome to explain the mottled phenotype of female mice heterozygous for coat color genes . [ 53 ] The Lyon hypothesis also accounted for the findings that one copy of the X chromosome in female cells was highly condensed, and that mice with only one copy of the X chromosome developed as infertile females. This suggested [ 54 ] to Ernest Beutler , who was studying females heterozygous for glucose-6-phosphate dehydrogenase (G6PD) deficiency, that there were two populations of erythrocytes in such heterozygotes: deficient cells and normal cells, [ 55 ] depending on whether the inactivated X chromosome (in the nucleus of the red cell's precursor cell) contains the normal or defective G6PD allele.
|
https://en.wikipedia.org/wiki/X-inactivation
|
Main article: Sex linkage
X-linked recessive inheritance is a mode of inheritance in which a mutation in a gene on the X chromosome causes the phenotype to be always expressed in males (who are necessarily hemizygous for the gene mutation because they have one X and one Y chromosome ) and in females who are homozygous for the gene mutation (see zygosity ). Females with one copy of the mutated gene are carriers.
X-linked inheritance means that the gene causing the trait or the disorder is located on the X chromosome. Females have two X chromosomes while males have one X and one Y chromosome . Carrier females who have only one copy of the mutation do not usually express the phenotype, although differences in X-chromosome inactivation (known as skewed X-inactivation ) can lead to varying degrees of clinical expression in carrier females, since some cells will express one X allele and some will express the other. The current estimate of sequenced X-linked genes is 499, and the total, including vaguely defined traits, is 983. [ 1 ]
In humans, inheritance of X-linked recessive traits follows a unique pattern made up of three points: affected males cannot transmit the trait to their sons (who receive the father's Y chromosome); all daughters of an affected male are carriers; and carrier females transmit the mutated allele, on average, to half of their sons (who are affected) and half of their daughters (who are carriers).
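These transmission odds can be verified by exhaustively enumerating gamete combinations, as in the toy sketch below (the genotype encoding and function name are my own, for illustration).

```python
from collections import Counter
from itertools import product

# 'X' = normal X, 'x' = X carrying the recessive mutation, 'Y' = Y chromosome.
mother_gametes = ['X', 'x']   # carrier mother
father_gametes = ['X', 'Y']   # unaffected father

def classify(m, f):
    """Phenotypic outcome of a child receiving maternal gamete m
    and paternal gamete f."""
    if f == 'Y':
        return 'affected son' if m == 'x' else 'unaffected son'
    x_copies = (m == 'x') + (f == 'x')
    return ('unaffected daughter', 'carrier daughter',
            'affected daughter')[x_copies]

counts = Counter(classify(m, f)
                 for m, f in product(mother_gametes, father_gametes))
for outcome, n in sorted(counts.items()):
    print(f"{outcome}: {n}/4")
# -> 1/4 carrier daughter, 1/4 unaffected daughter,
#    1/4 affected son, 1/4 unaffected son
```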
A few scholars have suggested discontinuing the use of the terms dominant and recessive when referring to X-linked inheritance . [ 5 ] The possession of two X chromosomes in females leads to dosage issues which are alleviated by X-inactivation . [ 6 ] Stating that the highly variable penetrance of X-linked traits in females as a result of mechanisms such as skewed X-inactivation or somatic mosaicism is difficult to reconcile with standard definitions of dominance and recessiveness, scholars have suggested referring to traits on the X chromosome simply as X-linked. [ 5 ]
The most common X-linked recessive disorders include red–green color blindness, hemophilia A and B, Duchenne muscular dystrophy, and glucose-6-phosphate dehydrogenase deficiency. [ 7 ]
Theoretically, a mutation in any of the genes on the X chromosome may cause disease.
|
https://en.wikipedia.org/wiki/X-linked_recessive_inheritance
|
X-ray absorption fine structure (XAFS) is a specific structure observed in X-ray absorption spectroscopy (XAS). By analyzing the XAFS, information can be acquired on the local structure and on the unoccupied local electronic states.
The atomic X-ray absorption spectrum (XAS) of a core level in an absorbing atom is separated into a discrete part of the spectrum, containing "bound final states" or " Rydberg states " below the ionization potential (IP), and a " continuum " part above the ionization potential, due to excitation of the photoelectron into the vacuum. Above the IP the absorption cross section attenuates gradually with increasing X-ray energy.
Following early experimental and theoretical work in the 1930s, [ 1 ] it was established in the 1960s, using synchrotron radiation at the National Bureau of Standards , that the broad asymmetric absorption peaks are due to Fano resonances above the atomic ionization potential, where the final states are many-body quasi-bound states (i.e., a doubly excited atom) degenerate with the continuum. [ 2 ]
The XAS spectra of condensed matter are usually divided into three energy regions: the absorption edge region, the X-ray absorption near edge structure (XANES) region, and the extended X-ray absorption fine structure (EXAFS) region, described in turn below.
The edge region usually extends over a range of a few eV around the absorption edge. The spectral features in the edge region are: (i) in good metals, excitations to final delocalized states above the Fermi level ; (ii) in insulators, core excitons below the ionization potential; (iii) in molecules, electronic transitions to the first unoccupied molecular levels above the chemical potential, which are shifted into the discrete part of the core absorption spectrum by the Coulomb interaction with the core hole. Multi-electron excitations and configuration interaction between many-body final states dominate the edge region in strongly correlated metals and insulators.
For many years the edge region was referred to as the "Kossel structure", but it is now known as the "absorption edge region", since the Kossel structure refers only to unoccupied molecular final states, which is a correct description only in a few particular cases: molecules and strongly disordered systems.
The XANES energy region [ 3 ] extends between the edge region and the EXAFS region over a 50-100 eV energy range around the core level x-ray absorption threshold.
Before 1980 the XANES region was wrongly assigned to different final states: (a) the unoccupied total density of states, (b) unoccupied molecular orbitals (Kossel structure), (c) unoccupied atomic orbitals, or (d) low-energy EXAFS oscillations.
In the seventies, using synchrotron radiation at the Frascati and Stanford synchrotron sources, it was experimentally shown that the features in this energy region are due to multiple scattering resonances of the photoelectron in a nanocluster of variable size. Antonio Bianconi in 1980 coined the acronym XANES to indicate the spectral region dominated by multiple scattering resonances of the photoelectron, in the soft x-ray range [ 4 ] and in the hard X-ray range. [ 5 ] In the XANES energy range the kinetic energy of the photoelectron in the final state is between a few eV and 50–100 eV. In this regime the photoelectron is strongly scattered by neighboring atoms in molecules and condensed matter, its wavelength is larger than the interatomic distances, its mean free path can be smaller than one nanometer, and the lifetime of the excited state is on the order of femtoseconds.
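These regimes can be made quantitative with the free-electron relation between kinetic energy and de Broglie wavelength; the back-of-the-envelope sketch below (the free-electron dispersion is an approximation here) shows why the XANES photoelectron wavelength exceeds typical interatomic distances.

```python
import numpy as np
from scipy.constants import hbar, m_e, e, angstrom

def photoelectron_wavelength(E_eV):
    """de Broglie wavelength (in Angstrom) of a free photoelectron with
    kinetic energy E_eV above the absorption threshold."""
    k = np.sqrt(2.0 * m_e * E_eV * e) / hbar   # wavenumber in 1/m
    return 2.0 * np.pi / k / angstrom

for E in (5, 20, 50, 100, 400):
    print(f"{E:4d} eV -> {photoelectron_wavelength(E):5.2f} Angstrom")
# A few eV: several Angstrom, larger than typical bond lengths (XANES
# regime); above ~100 eV: ~1 Angstrom or less (EXAFS regime).
```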
The XANES spectral features are described by full multiple scattering theory proposed in the early seventies. [ 6 ] Therefore, the key step for XANES interpretation is the determination of the size of the atomic cluster of neighbor atoms, where the final states are confined, which could range from 0.2 nm to 2 nm in different systems.
This energy region has been called later (in 1982) also near-edge X-ray absorption fine structure ( NEXAFS ), which is synonymous with XANES.
For more than 20 years the interpretation of XANES was the object of discussion, but there is now agreement that the final states are "multiple scattering resonances" and that many-body final states play an important role. [ 7 ]
There is an intermediate region between the XANES and EXAFS regions where low n-body distribution functions play a key role. [ 8 ] [ 9 ] [ 10 ]
The oscillatory structure extending for hundreds of electron volts past the edges was called the "Kronig structure" after Ralph Kronig , who assigned this structure in the high-energy range (i.e., for photoelectron kinetic energies larger than 100 eV, in the weak scattering regime) to the single scattering of the excited photoelectron by neighbouring atoms in molecules and condensed matter. [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] This regime was named EXAFS in 1971 by Sayers, Stern and Lytle, [ 16 ] [ 17 ] and it developed only after the advent of intense synchrotron radiation sources.
X-ray absorption edge spectroscopy corresponds to the transition from a core level to an unoccupied orbital or band and mainly reflects the unoccupied electronic states. EXAFS, resulting from interference between the outgoing photoelectron wave and the waves singly scattered by surrounding atoms, provides information on the local structure. Information on the geometry of the local structure is provided by the analysis of the multiple scattering peaks in the XANES spectra.
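The single-scattering picture is commonly summarized by the standard EXAFS equation, χ(k) = Σ_j N_j S₀² f_j(k) exp(−2k²σ_j²) exp(−2R_j/λ) sin(2kR_j + φ_j)/(kR_j²). The sketch below evaluates a single coordination shell with made-up parameters and a constant scattering amplitude and phase; a real analysis would take f_j(k) and φ_j(k) from tabulations or ab initio codes.

```python
import numpy as np

def exafs_single_shell(k, N=6, S02=0.9, R=2.0, sigma2=0.005,
                       lam=6.0, f=0.5, phi=0.0):
    """Single-shell EXAFS oscillation chi(k), with photoelectron wavenumber
    k in 1/Angstrom, shell distance R and mean free path lam in Angstrom,
    and mean-square disorder sigma2 in Angstrom^2. The amplitude f and
    phase phi are held constant purely for illustration."""
    k = np.asarray(k, dtype=float)
    return (N * S02 * f
            * np.exp(-2.0 * k**2 * sigma2)    # Debye-Waller damping
            * np.exp(-2.0 * R / lam)          # mean-free-path damping
            / (k * R**2)
            * np.sin(2.0 * k * R + phi))      # interference term

k = np.linspace(2.0, 14.0, 500)
chi_k2 = k**2 * exafs_single_shell(k)   # k^2-weighted, as often plotted
```

Fourier transforming such oscillations with respect to k yields peaks near the neighbour distances R_j, which is how EXAFS delivers local structural information.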
The acronym XAFS was later introduced to indicate the sum of the XANES and EXAFS spectra.
|
https://en.wikipedia.org/wiki/X-ray_absorption_fine_structure
|
X-ray absorption near edge structure ( XANES ), also known as near edge X-ray absorption fine structure ( NEXAFS ), is a type of absorption spectroscopy that probes the features in the X-ray absorption spectra ( XAS ) of condensed matter due to the photoabsorption cross section for electronic transitions from an atomic core level to final states in the energy region of 50–100 eV above the selected atomic core-level ionization energy, where the wavelength of the photoelectron is larger than the interatomic distance between the absorbing atom and its first-neighbour atoms.
Both XANES and NEXAFS are acceptable names for the same technique. The name XANES was coined in 1980 by Antonio Bianconi to indicate strong absorption peaks in the X-ray absorption spectra of condensed matter due to multiple scattering resonances above the ionization energy. [ 1 ] The name NEXAFS was introduced in 1983 by Joachim Stöhr; it is synonymous with XANES but is generally used in surface and molecular science.
The fundamental phenomenon underlying XANES is the absorption of an X-ray photon by condensed matter with the formation of many-body excited states characterized by a core hole in a selected atomic core level (refer to the first Figure). In the single-particle theory approximation, the system is separated into one electron in the core levels of the selected atomic species of the system and N−1 passive electrons. In this approximation the final state is described by a core hole in the atomic core level and an excited photoelectron. The final state has a very short lifetime because of the short lifetime of the core hole and the short mean free path of the excited photoelectron, whose kinetic energy is in the range around 20–50 eV. The core hole is filled either via an Auger process or by capture of an electron from another shell followed by emission of a fluorescent photon.

The difference between NEXAFS and traditional photoemission experiments is that in photoemission the initial photoelectron itself is measured, while in NEXAFS the fluorescent photon, the Auger electron, or an inelastically scattered photoelectron may also be measured. The distinction sounds trivial but is actually significant: in photoemission the final state of the emitted electron captured in the detector must be an extended, free-electron state. By contrast, in NEXAFS the final state of the photoelectron may be a bound state such as an exciton , since the photoelectron itself need not be detected. The effect of measuring fluorescent photons, Auger electrons, and directly emitted electrons is to sum over all possible final states of the photoelectrons, meaning that NEXAFS measures the total joint density of states of the initial core level with all final states, consistent with conservation rules. The distinction is important because in spectroscopy final states are more susceptible to many-body effects than initial states, meaning that NEXAFS spectra are more easily calculable than photoemission spectra. Due to the summation over final states, various sum rules are helpful in the interpretation of NEXAFS spectra.

When the X-ray photon energy resonantly connects a core level with a narrow final state in a solid, such as an exciton, readily identifiable characteristic peaks appear in the spectrum. These narrow characteristic spectral peaks give the NEXAFS technique much of its analytical power, as illustrated by the B 1s π* exciton shown in the second Figure.
Synchrotron radiation has a natural polarization that can be utilized to great advantage in NEXAFS studies. The commonly studied molecular adsorbates have sigma and pi bonds that may have a particular orientation on a surface. The angle dependence of the x-ray absorption tracks the orientation of resonant bonds due to dipole selection rules .
Soft x-ray absorption spectra are usually measured either through the fluorescent yield, in which emitted photons are monitored, or total electron yield, in which the sample is connected to ground through an ammeter and the neutralization current is monitored. Because NEXAFS measurements require an intense tunable source of soft x-rays, they are performed at synchrotrons . Because soft x-rays are absorbed by air, the synchrotron radiation travels from the ring in an evacuated beam-line to the end-station where the specimen to be studied is mounted. Specialized beam-lines intended for NEXAFS studies often have additional capabilities such as heating a sample or exposing it to a dose of reactive gas.
In the absorption edge region of metals, the photoelectron is excited to the first unoccupied level above the Fermi level . Therefore, its mean free path in a pure single crystal at zero temperature is effectively infinite, and it remains very large as the energy of the final state increases up to about 5 eV above the Fermi level. Beyond the role of the unoccupied density of states and matrix elements in single-electron excitations, many-body effects appear as an "infrared singularity" at the absorption threshold in metals.
In the absorption edge region of insulators the photoelectron is excited to the first unoccupied level above the chemical potential, but the unscreened core hole forms a localized bound state called a core exciton .
The fine structure in the X-ray absorption spectra in the high-energy range, extending from about 150 eV beyond the ionization potential, is a powerful tool for determining the atomic pair distribution (i.e., interatomic distances) on a time scale of about 10⁻¹⁵ s.
Indeed, the final state of the excited photoelectron in the high kinetic-energy range (150–2000 eV ) is determined only by single backscattering events, because of the low photoelectron scattering amplitude.
In the NEXAFS region, starting about 5 eV beyond the absorption threshold, the photoelectron kinetic energy is low (5–150 eV) and its backscattering amplitude by neighbor atoms is very large, so that multiple scattering events become dominant in the NEXAFS spectra.
The different energy ranges of NEXAFS and EXAFS can also be explained in a very simple manner by comparing the photoelectron wavelength λ with the interatomic distance of the photoabsorber-backscatterer pair. The photoelectron kinetic energy E is connected with the wavelength λ by the de Broglie relation:

λ = h / √(2mE),
which means that at high energy the wavelength is shorter than interatomic distances, so the EXAFS region corresponds to a single scattering regime, while at lower E the wavelength λ is larger than interatomic distances and the XANES region is associated with a multiple scattering regime.
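This comparison can be made concrete with a short Python sketch (illustrative, not from the article; the ~0.2 nm interatomic distance mentioned in the comments is an assumed typical value):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
EV = 1.602176634e-19    # J per eV

def photoelectron_wavelength_nm(e_kin_ev: float) -> float:
    """de Broglie wavelength lambda = h / sqrt(2 * m_e * E)."""
    return H / math.sqrt(2.0 * M_E * e_kin_ev * EV) * 1e9

# At 5 eV the wavelength (~0.55 nm) exceeds typical interatomic distances
# (~0.2 nm), favoring multiple scattering; at 1000 eV it is ~0.04 nm,
# much shorter, favoring the single-scattering (EXAFS) picture.
for e_kin in (5, 50, 150, 1000):  # eV
    print(f"{e_kin:5d} eV -> lambda = {photoelectron_wavelength_nm(e_kin):.3f} nm")
```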
The absorption peaks of NEXAFS spectra are determined by multiple scattering resonances of the photoelectron excited at the atomic absorption site and scattered by neighbor atoms.
The local character of the final states is determined by the short photoelectron mean free path , that is strongly reduced (down to about 0.3 nm at 50 eV) in this energy range because of inelastic scattering of the photoelectron by electron-hole excitations ( excitons ) and collective electronic oscillations of the valence electrons called plasmons .
The great power of NEXAFS derives from its elemental specificity. Because the various elements have different core level energies, NEXAFS permits extraction of the signal from a surface monolayer or even a single buried layer in the presence of a huge background signal. Buried layers are very important in engineering applications, such as magnetic recording media buried beneath a surface lubricant or dopants below an electrode in an integrated circuit . Because NEXAFS can also determine the chemical state of elements which are present in bulk in minute quantities, it has found widespread use in environmental chemistry and geochemistry . The ability of NEXAFS to study buried atoms is due to its integration over all final states including inelastically scattered electrons, as opposed to photoemission and Auger spectroscopy, which study atoms only within a layer or two of the surface.
Much chemical information can be extracted from the NEXAFS region: formal valence (very difficult to determine nondestructively by experiment); coordination environment (e.g., octahedral or tetrahedral coordination); and subtle geometrical distortions of that environment.
Transitions to bound vacant states just above the Fermi level can be seen. Thus NEXAFS spectra can be used as a probe of the unoccupied band structure of a material.
The near-edge structure is characteristic of a particular environment and valence state, hence one of its more common uses is fingerprinting: if a sample contains a mixture of sites or compounds, the measured spectra can be fitted with a linear combination of NEXAFS spectra of known species to determine the proportion of each site or compound in the sample. One example of such a use is the determination of the oxidation state of the plutonium in the soil at Rocky Flats .
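A minimal sketch of such a linear-combination fit, using synthetic Gaussian curves as stand-ins for reference spectra (all energies, shapes, and fractions below are illustrative assumptions, not data from the article):

```python
import numpy as np

# Hypothetical reference NEXAFS spectra of two known species on a common
# energy grid, plus a "measured" spectrum of an unknown mixture.
energy = np.linspace(530.0, 560.0, 300)                 # eV (illustrative)
ref_a = np.exp(-((energy - 538.0) / 1.5) ** 2)          # stand-in species A
ref_b = np.exp(-((energy - 542.0) / 2.0) ** 2)          # stand-in species B
noise = 0.01 * np.random.default_rng(0).normal(size=energy.size)
measured = 0.7 * ref_a + 0.3 * ref_b + noise

# Least-squares fit of the measured spectrum as a linear combination of
# the references; the coefficients estimate each species' proportion.
design = np.column_stack([ref_a, ref_b])
coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
fractions = coeffs / coeffs.sum()
print(f"fraction A = {fractions[0]:.2f}, fraction B = {fractions[1]:.2f}")
```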
The acronym XANES was first used in 1980 during the interpretation of multiple-scattering resonance spectra measured at the Stanford Synchrotron Radiation Laboratory (SSRL) by A. Bianconi. In 1982 the first paper on the application of XANES to the determination of local structural geometrical distortions using multiple scattering theory was published by A. Bianconi, P. J. Durham and J. B. Pendry . In 1983 the first NEXAFS paper, examining molecules adsorbed on surfaces, appeared. The first XAFS paper, describing the intermediate region between EXAFS and XANES, appeared in 1987.
|
https://en.wikipedia.org/wiki/X-ray_absorption_near_edge_structure
|
X-ray absorption spectroscopy (XAS) is a set of advanced techniques for probing the local environment of matter at the atomic level and its electronic structure. [ 1 ] The experiments require access to synchrotron radiation facilities for their intense and tunable X-ray beams. Samples can be in the gas phase, solutions, or solids. [ 2 ]
XAS data are obtained by tuning the photon energy, [ 3 ] using a crystalline monochromator, to a range where core electrons can be excited (0.1–100 keV). The edges are, in part, named by which core electron is excited: the principal quantum numbers n = 1, 2, and 3 correspond to the K-, L-, and M-edges, respectively. [ 4 ] For instance, excitation of a 1s electron occurs at the K-edge , while excitation of a 2s or 2p electron occurs at an L-edge (Figure 1).
There are three main regions found in a spectrum generated by XAS data, which are often treated as separate spectroscopic techniques (Figure 2): the absorption threshold region, XANES, and EXAFS.
XAS is a type of absorption spectroscopy from a core initial state with a well-defined symmetry; therefore, the quantum mechanical selection rules select the symmetry of the final states in the continuum, which are usually a mixture of multiple components. The most intense features are due to electric-dipole allowed transitions (i.e. Δℓ = ± 1) to unoccupied final states. For example, the most intense features of a K-edge are due to core transitions from 1s → p-like final states, while the most intense features of the L 3 -edge are due to 2p → d-like final states.
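A toy sketch of the Δℓ = ±1 dipole selection rule described above (a deliberate simplification that ignores matrix elements and spin-orbit structure):

```python
# Electric-dipole selection rule: the orbital angular momentum of the
# final state must differ from that of the core state by exactly one.
L_OF = {"s": 0, "p": 1, "d": 2, "f": 3}

def dipole_allowed(initial: str, final: str) -> bool:
    """True if delta-l = +/-1 between the two orbital characters."""
    return abs(L_OF[initial] - L_OF[final]) == 1

# 1s -> p dominates the K-edge; 2p -> d dominates the L3-edge;
# s -> d is dipole-forbidden.
for core, final in [("s", "p"), ("p", "d"), ("p", "s"), ("s", "d")]:
    tag = "allowed" if dipole_allowed(core, final) else "forbidden"
    print(f"{core} -> {final}: {tag}")
```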
XAS methodology can be broadly divided into four experimental categories that can yield complementary results: metal K-edge , metal L-edge , ligand K-edge , and EXAFS.
The most obvious means of mapping heterogeneous samples beyond x-ray absorption contrast is through elemental analysis by x-ray fluorescence, similar to EDX methods in electron microscopy. [ 5 ]
XAS is a technique used in various scientific fields, including molecular and condensed matter physics , [ 6 ] [ 7 ] [ 8 ] materials science and engineering , chemistry , earth science , and biology . In particular, its unique sensitivity to the local structure, as compared to x-ray diffraction , has been exploited for studying:
|
https://en.wikipedia.org/wiki/X-ray_absorption_spectroscopy
|
X-ray astronomy is an observational branch of astronomy which deals with the observation and detection of X-rays from astronomical objects . X-radiation is absorbed by the Earth's atmosphere , so instruments to detect X-rays must be taken to high altitude by balloons , sounding rockets , and satellites . X-ray astronomy uses a type of space telescope that can see X-ray radiation, which standard optical telescopes , such as the Mauna Kea Observatories , cannot.
X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied.
The existence of solar X-rays was confirmed in the mid-twentieth century by V-2s converted to sounding rockets , and the detection of extraterrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. [ 1 ] The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1) (the first X-ray source found in the constellation Scorpius ), its X-ray emission is 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, the energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths .
Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies.
In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard 's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes". [ 2 ]
In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. [ 3 ] The Sun has been known to be surrounded by a hot tenuous corona. [ 4 ] In the mid-1940s radio observations revealed a radio corona around the Sun. [ 3 ]
The search for X-ray sources from above the Earth's atmosphere began on August 5, 1948, at 12:07 GMT, when a US Army (formerly German) V-2 rocket was launched from White Sands Proving Grounds as part of Project Hermes . The first solar X-rays were recorded by T. Burnight. [ 5 ]
Through the 1960s, 70s, 80s, and 90s, the sensitivity of detectors increased greatly. In addition, the ability to focus X-rays developed enormously, allowing the production of high-quality images of many fascinating celestial objects.
The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board. [ 6 ]
An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside the Solar System [ 7 ] [ 8 ] (Scorpius X-1). [ 9 ] It is now known that such X-ray sources as Sco X-1 are compact stars , such as neutron stars or black holes . Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity . Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. [ 10 ] Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. [ 11 ]
The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view . A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky.
In astronomy, the interstellar medium (or ISM ) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium . The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions , atoms , molecules , larger dust grains, cosmic rays , and (galactic) magnetic fields. [ 12 ] The energy that occupies the same volume, in the form of electromagnetic radiation , is the interstellar radiation field .
Of interest is the hot ionized medium (HIM), consisting of coronal cloud ejecta from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds , typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud , a denser region in the low-density Local Bubble .
To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. [ 13 ] The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison .
Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas , United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source. [ 14 ]
The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. [ 15 ] Its maiden flight, a 25-hour balloon flight, took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope , HEFT makes use of novel tungsten -silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. The instrument performed within specification and observed Tau X-1 , the Crab Nebula.
A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-rays emissions from the Sun and other astronomical objects. [ 16 ] [ 17 ] It was launched from McMurdo Station , Antarctica in December 1991 and 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time. [ 18 ]
The rockoon , a blend of rocket and balloon , was a solid fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower thicker air layers that would have required much more chemical fuel.
The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during the Aerobee rocket firing cruise of the USS Norton Sound on March 1, 1949. [ 6 ]
From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) shipboard launched eight Deacon rockoons for solar ultraviolet and X-ray observations at ~30° N ~121.6° W, southwest of San Clemente Island , apogee: 120 km. [ 19 ]
Satellites are needed because X-rays are absorbed by the Earth's atmosphere; they carry X-ray instruments above it for long-duration observations. X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing-angle reflection rather than refraction or large-deviation reflection. [ 20 ] [ 21 ] This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. [ 22 ]
The first X-ray telescope in astronomy was used to observe the Sun. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket. [ 23 ] The first X-ray picture of the Sun taken with a grazing incidence telescope followed in 1963, from a rocket-borne instrument.
The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires:
X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time.
X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and record the number of photons collected (intensity), their energy (0.12 to 120 keV), their wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to learn about the object that is emitting them.
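The quoted energy and wavelength ranges are related by λ = hc/E. A small sketch of the conversion (hc ≈ 1.2398 keV·nm), which reproduces the order of magnitude of the quoted wavelength range:

```python
H_C_KEV_NM = 1.23984193  # h*c in keV*nm

def wavelength_nm(energy_kev: float) -> float:
    """Photon wavelength from photon energy via lambda = h*c / E."""
    return H_C_KEV_NM / energy_kev

# 0.12 keV -> ~10 nm and 120 keV -> ~0.01 nm.
for e in (0.12, 1.0, 10.0, 120.0):  # keV
    print(f"{e:7.2f} keV -> {wavelength_nm(e):8.4f} nm")
```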
Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters , through black holes in active galactic nuclei (AGN) to galactic objects such as supernova remnants , stars, and binary stars containing a white dwarf ( cataclysmic variable stars and super soft X-ray sources ), neutron star or black hole ( X-ray binaries ). Some Solar System bodies emit X-rays, the most notable being the Moon , although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background . The X-ray continuum can arise from bremsstrahlung , black-body radiation , synchrotron radiation , or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. [ 24 ]
An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star. [ 25 ]
Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis) probably due to Roche lobe overflow. X-1 is the prototype for the massive X-ray binaries although it falls on the borderline, ~2 M ☉ , between high- and low-mass X-ray binaries. [ 26 ]
In July 2020, astronomers reported the observation of a " hard tidal disruption event candidate " associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the "very few tidal disruption events with hard powerlaw X-ray spectra ". [ 27 ] [ 28 ]
The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky, each of which contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars . As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth . Constellations are an astronomical device for organizing observations with precision, independently of physical theory or interpretation, which change with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on classification, order of discovery, variability, resolvability, and relationships with nearby sources in other constellations.
Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble , the Eridanus Soft X-ray Enhancement , or simply the Eridanus Bubble , a 25° area of interlocking arcs of Hα-emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS . Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1200 light-years across which is observed in the visual (Hα) and X-ray portions of the spectrum.
Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy ). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. [ 29 ] Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, [ 30 ] usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth.
For a satellite or space probe to qualify as a deep-space X-ray astronomer/explorer or "astronobot"/explorer, all it needs is to carry an XRT or X-ray detector aboard and to leave Earth orbit.
Ulysses was launched October 6, 1990, and reached Jupiter for its " gravitational slingshot " in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had three main objectives: study and monitor solar flares, detect and localize cosmic gamma-ray bursts , and detect Jovian aurorae in situ. Ulysses was the first satellite carrying a gamma-burst detector to travel outside the orbit of Mars.

The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) the measured count rate, (2) ground command, or (3) a change in the spacecraft telemetry mode. The trigger level was generally set for 8 sigma above background and the sensitivity was 10⁻⁶ erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data to a 32-kbit memory for a slow telemetry read-out. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the two detectors. There were also 16-channel energy spectra from the sum of the two detectors (taken in either 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken in either 0.25 or 0.5 s integrations and 4 energy channels (with the shortest integration time being 8 s). Again, the outputs of the two detectors were summed.
The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm² area Si surface-barrier detectors. A 100 mg/cm² beryllium foil front window rejected the low-energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operated in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV.
Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation , emission, and detection as applied to astronomical objects .
Like theoretical astrophysics , theoretical X-ray astronomy uses a wide variety of tools which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Most of the topics in astrophysics , astrochemistry , astrometry , and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied.
Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field . This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate. [ 31 ]
From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. [ 24 ] In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. [ 24 ] The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. [ 24 ]
In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is in light-years (ly)s, not astronomical units (AU), and its radio and optical synchrotron emission are strong. [ 24 ] Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. [ 24 ]
The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. [ 32 ] To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed:
High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d), in circular (or slightly eccentric) orbits. SGXBs show the typical hard X-ray spectra of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity ( L x ) increases up to 10³⁶ erg·s⁻¹ (10²⁹ watts). [ citation needed ]
The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXT)s is still debated. [ 33 ]
Stellar X-rays were first detected on April 5, 1974, from Capella . [ 34 ] A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. [ 34 ] The X-ray luminosity of L x = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity. [ 34 ]
Coronal stars, or stars within a coronal cloud , are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram . [ 3 ] Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. [ 35 ] The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. [ 35 ] The high temperature of Capella's corona as obtained from the first coronal X-ray spectrum of Capella using HEAO 1 required magnetic confinement unless it was a free-flowing coronal wind. [ 3 ]
In 1977 Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. [ 36 ] With the operation of the Einstein observatory , X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. [ 36 ] The Einstein initial survey led to significant insights:
To fit the medium-resolution spectrum of UX Arietis , subsolar abundances were required. [ 3 ]
Stellar X-ray astronomy is contributing toward a deeper understanding of
Current wisdom has it that the massive coronal main-sequence stars are late-A or early-F stars, a conjecture that is supported both by observation and by theory. [ 3 ]
Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main-sequence . Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than for main-sequence stars of similar masses. [ 37 ]
X-ray emission for pre–main-sequence stars was discovered by the Einstein Observatory . [ 38 ] [ 39 ] This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. [ 40 ] Pre–main sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots [ 41 ] and collimated outflows. [ 42 ]
X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-Plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members. [ 43 ]
Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. [ 3 ] In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays. [ 3 ]
Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. [ 3 ] As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should change systematically across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. [ 3 ] However, observations do not seem to support this picture: the lowest-mass X-ray detection of long standing, VB 8 (M7e V), has shown steady emission at an X-ray luminosity ( L X ) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. [ 3 ] Comparison with other late M dwarfs shows a rather continuous trend. [ 3 ]
Herbig Ae/Be stars are pre-main sequence stars. As to their X-ray emission properties, some are
The nature of these strong emissions has remained controversial with models including
The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous ( L X ≥ 10³² erg·s⁻¹ or 10²⁵ W) and the hottest known with dominant temperatures up to 40 MK. [ 3 ] However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary. [ 3 ]
Pollux is the brightest star in the constellation Gemini , despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter. [ 44 ]
New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota . [ 45 ] Davidson is principal investigator for the Eta Carina observations by the Hubble Space Telescope . "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet." [ 45 ]
Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is home to the US's only undergraduate satellite program, and has developed and continues to develop the FalconLaunch sounding rockets. [ 46 ] In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. [ 47 ]
There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough and the cost of appropriate parts to build a suitable X-ray detector.
As X-ray astronomy uses a major spectral probe to peer into the source, it is a valuable tool in efforts to understand many puzzles.
Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma physical mechanisms that act in stellar environments. [ 3 ] Some stars, for example, seem to have magnetic fields, fossil stellar magnetic fields left over from their period of formation, while others seem to generate the field anew frequently.
With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths such as visible or radio for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance .
There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidents, especially with handicaps in making identifications, such as the large uncertainties in positional determinants made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature. [ 48 ]
X-ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value, to identify as many real matches as possible, and a smaller value, to minimize the probability of spurious matches. "An adopted matching criterion of 40" finds nearly all possible X-ray source matches while keeping the probability of any spurious matches in the sample to 3%." [ 49 ]
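A sketch of this matching step: compute the angular separation between an X-ray source centroid and a candidate star, then apply the 40″ criterion (the coordinates below are hypothetical):

```python
import math

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation between two sky positions
    (RA/Dec in degrees in, arcseconds out)."""
    ra1, dec1, ra2, dec2 = (math.radians(v) for v in (ra1, dec1, ra2, dec2))
    # Haversine formula: numerically stable for small separations.
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600

MAX_SEP = 40.0  # arcsec, the matching criterion quoted above

# Hypothetical X-ray source centroid vs. candidate star position:
sep = angular_separation_arcsec(83.6331, 22.0145, 83.6403, 22.0190)
print(f"separation = {sep:.1f} arcsec -> "
      f"{'match' if sep <= MAX_SEP else 'no match'}")
```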
All of the detected X-ray sources at, around, or near the Sun appear to be associated with processes in the corona , which is its outer atmosphere.
In the area of solar X-ray astronomy, there is the coronal heating problem . The photosphere of the Sun has an effective temperature of 5,570 K [ 50 ] yet its corona has an average temperature of 1–2 × 10 6 K. [ 51 ] However, the hottest regions are 8–20 × 10 6 K. [ 51 ] The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. [ 52 ]
It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. [ 51 ] The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. [ 51 ] These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. [ 53 ] The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events— nanoflares . [ 54 ]
Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. [ 55 ] In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. [ 51 ]
A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. [ 56 ] Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs.
The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. [ 56 ] "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs." [ 56 ]
The first detection of a coronal mass ejection (CME) as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7 . [ 57 ] Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses, are now understood as essentially the same thing.
The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens . The same instrument recorded a crotchet , an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen ) and the recognition of the ionosphere (by Kennelly and Heaviside ).
A microquasar is a smaller cousin of a quasar that is a radio emitting X-ray binary , with an often resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source, CG135+01.
Observations are revealing a growing number of recurrent X-ray transients , characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs).
Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87 . A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays .
During the solar cycle, as shown in the sequence of images at right, at times the Sun is almost X-ray dark, almost an X-ray variable. Betelgeuse , on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late A and early F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources; most Bp/Ap stars remain undetected, and of those reported early on as producing X-rays, only a few can be identified as probably single stars.
X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area." [ 3 ]
As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays. [ citation needed ]
NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to 63 Gm of Earth. For the first time, astronomers can see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet. [ 58 ]
|
https://en.wikipedia.org/wiki/X-ray_astronomy
|
X‑ray birefringence imaging [ 1 ] (XBI) can be considered the X‑ray analogue of the polarizing optical microscope . XBI uses linearly polarized X-rays with an energy tuned to an elemental absorption edge . The tuned X-rays interact solely with the absorbing element, thus allowing the local anisotropy of the bonding environment of the X‑ray absorbing element to be studied. [ 2 ] [ 3 ] Due to the requirement of linearly polarized, tunable X-rays, a synchrotron source is necessary. Interaction with the bonding environment of the selected element in the sample changes the incident X-ray polarization plane. A polarization analyzer is used to diffract the rotated component of the polarization plane to an area detector. The greater the vertical component of the polarization plane, the greater the intensity observed on the detector. In this way, it is possible to study the distribution of bond environments containing the X-ray absorbing element in a spatially resolved manner.
The XBI technique has been shown to be a sensitive method for spatially resolved mapping of the local orientational properties of anisotropic materials. In the case of organic materials, the technique may be applied to study the orientational properties of individual molecules and/or bonds (most applications of the technique so far have focused on studies of orientational ordering of C–Br bonds, from XBI measurements carried out using incident linearly polarized X-rays tuned to the bromine K-edge). Applications of the technique have included the study of changes in molecular orientations associated with order-disorder phase transitions in solids [ 1 ] and characterization of phase transitions in liquid crystalline materials. [ 4 ] XBI can also be exploited for spatially resolved analysis of orientationally distinct domains in materials, giving information on the sizes of domains, the orientational relationships between domains, and the nature of domain boundaries.
|
https://en.wikipedia.org/wiki/X-ray_birefringence_imaging
|
X-ray crystal truncation rod scattering is a powerful method in surface science , based on analysis of surface X-ray diffraction (SXRD) patterns from a crystalline surface.
For an infinite crystal , the diffracted pattern is concentrated in Dirac delta function like Bragg peaks . The presence of crystalline surfaces results in additional structure along so-called truncation rods (linear regions in momentum space normal to the surface). Crystal Truncation Rod (CTR) measurements allow detailed determination of atomic structure at the surface, especially useful in cases of oxidation , epitaxial growth, and adsorption studies on crystalline surfaces.
A particle incident on a crystalline surface with momentum K₀ will undergo scattering through a momentum change of Q . If x and y represent directions in the plane of the surface and z is perpendicular to the surface, then the scattered intensity as a function of all possible values of Q is given by

I(Q) ∝ |F|² / [(1 − α)² + 4α sin²(Q_z c/2)] × Σ_{h,k} δ(Q_x − 2πh/a_x) δ(Q_y − 2πk/a_y),
where α is the penetration coefficient, defined as the ratio of the X-ray amplitudes scattered from successive planes of atoms in the crystal; F is the structure factor of the unit cell; and a_x , a_y , and c are the lattice spacings in the x, y, and z directions, respectively. [ 1 ]
In the case of perfect absorption, α = 0, the intensity becomes independent of Q_z , with a maximum for any Q_∥ (the component of Q parallel to the crystal surface) that satisfies the 2D Laue condition in reciprocal space

Q_x = 2πh/a_x , Q_y = 2πk/a_y
for integers h and k . This condition results in rods of intensity in reciprocal space , oriented perpendicular to the surface and passing through the reciprocal lattice points of the surface, as in Fig. 1. These rods are known as diffraction rods, or crystal truncation rods.
When α is allowed to vary from 0, the intensity along the rods varies according to Fig. 2. Note that in the limit as α approaches unity, the X-rays are fully penetrating, and the scattered intensity approaches a periodic delta function, as in bulk diffraction.
This calculation has been done according to the kinematic (single-scattering) approximation. This has been shown to be accurate to within a factor of 10⁻⁷ of the peak intensity. Adding dynamical (multiple-scattering) considerations to the model can result in even more accurate predictions of CTR intensity. [ 2 ]
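The Q_z dependence above follows from summing the attenuated layer amplitudes as a geometric series; a brief numerical sketch under the same kinematic approximation (with |F|² set to 1 for simplicity) shows how the rods sharpen toward bulk-like Bragg peaks as α approaches unity:

```python
import numpy as np

def rod_intensity(qz_c: np.ndarray, alpha: float) -> np.ndarray:
    """Q_z profile of a truncation rod,
    I ~ 1 / ((1-a)^2 + 4a sin^2(Qz*c/2)),
    from the geometric series of layer amplitudes attenuated by alpha."""
    return 1.0 / ((1.0 - alpha) ** 2 + 4.0 * alpha * np.sin(qz_c / 2.0) ** 2)

qz_c = np.linspace(0.0, 4.0 * np.pi, 1001)  # Q_z * c spanning two Bragg points
for alpha in (0.0, 0.5, 0.9):
    profile = rod_intensity(qz_c, alpha)
    # alpha = 0: flat rod (independent of Q_z); alpha -> 1: sharp peaks
    # at Q_z * c = 2*pi*n, recovering bulk-like Bragg diffraction.
    print(f"alpha = {alpha:3.1f}: I_min = {profile.min():7.3f}, "
          f"I_max = {profile.max():8.3f}")
```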
To obtain high-quality data in X-ray CTR measurements, it is desirable that the detected intensity be on the order of at least 10⁹ photons/(mm²·s) [ citation needed ] . To achieve this level of output, the X-ray source must typically be a synchrotron source . More traditional, inexpensive sources such as rotating anode sources provide 2–3 orders of magnitude less X-ray flux and are only suitable for studying high-atomic-number materials, which return a higher diffracted intensity. The maximum diffracted intensity is roughly proportional to the square of the atomic number, Z . [ 3 ] Anode X-ray sources have been successfully used to study gold ( Z = 79), for example. [ 4 ]
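The trade-off in this paragraph is simple arithmetic; a quick sketch (using silicon as the low-Z comparison is an assumption for illustration):

```python
# Peak diffracted intensity scales roughly as Z^2, so a high-Z target
# partly compensates for the 10^2-10^3 times lower flux of a
# rotating-anode source relative to a synchrotron.
Z_GOLD, Z_SILICON = 79, 14
gain = (Z_GOLD / Z_SILICON) ** 2
print(f"Au vs. Si diffracted-intensity gain ~ {gain:.0f}x")  # about 32x
```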
When doing X-ray measurements of a surface, the sample is held in ultra-high vacuum (UHV) and the X-rays pass into and out of the UHV chamber through beryllium windows. There are two approaches to chamber and diffractometer design in use. In the first method, the sample is fixed relative to the vacuum chamber, which is kept as small and light as possible and mounted on the diffractometer. In the second method, the sample is rotated within the chamber by bellows coupled to the outside. This approach avoids putting a large mechanical load on the diffractometer goniometer, making it easier to maintain fine angular resolution. One drawback of many configurations is that the sample must be moved in order to use other surface analysis methods such as LEED or AES , and after moving the sample back into the X-ray diffraction position, it must be realigned. In some setups, the sample chamber can be detached from the diffractometer without breaking vacuum, allowing other users access. For examples of X-ray CTR diffractometer apparatus, see refs. 15–17 in [ 3 ]
For a given incidence angle of X-rays onto a surface, only the intersections of the crystal truncation rods with the Ewald sphere can be observed. To measure the intensity along a CTR, the sample must be rotated in the X-ray beam so that the origin of the Ewald sphere is translated and the sphere intersects the rod at a different location in reciprocal space. Performing a rodscan in this way requires accurate coordinated motion of the sample and the detector along different axes. To achieve this motion, the sample and detector are mounted in an apparatus called a four-circle diffractometer. The sample is rotated in the plane bisecting the incoming and diffracted beam and the detector is moved into the position necessary to capture the diffracted CTR intensity.
Surface features in a material produce variations in the CTR intensity, which can be measured and used to evaluate what surface structures may be present. Two examples of this are shown in Fig. 3. In the case of a miscut at an angle α {\displaystyle \alpha } , a second set of rods is produced in reciprocal space called superlattice rods, tilted from the regular lattice rods by the same angle, α {\displaystyle \alpha } . The X-ray intensity is strongest in the region of intersection between the lattice rods (grey bars) and superlattice rods (black lines). In the case of ordered alternating steps, the CTR intensity is chopped into segments, as shown. In real materials, the occurrence of surface features will rarely be so regular, but these two examples show the way in which surface miscuts and roughness are manifested in the obtained diffraction patterns.
|
https://en.wikipedia.org/wiki/X-ray_crystal_truncation_rod
|
X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal , in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction , a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds , crystallographic disorder , and other information.
X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms , the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys . The method has also revealed the structure and function of many biological molecules, including vitamins , drugs, proteins and nucleic acids such as DNA . X-ray crystallography is still the primary method for characterizing the atomic structure of materials and in differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases .
Modern work involves a number of steps, all of which are important. The preliminary steps include preparing good-quality samples, careful recording of the diffracted intensities, and processing of the data to remove artifacts. A variety of methods, generically called direct methods, are then used to obtain an initial estimate of the atomic structure. With an initial estimate, further computational techniques, such as those involving difference maps, are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases, new structures are deposited in databases available to the international community.
Crystals, though long admired for their regularity and symmetry , were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. [ 1 ] The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal ( law of constancy of interfacial angles ). [ 2 ] René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size ( law of decrements ). Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice ) of atoms and molecules ; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel , [ 3 ] Auguste Bravais , [ 4 ] Evgraf Fedorov , [ 5 ] Arthur Schönflies [ 6 ] and (belatedly) William Barlow (1894). Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; [ 7 ] however, the available data were too scarce in the 1880s to accept his models as conclusive.
Wilhelm Röntgen discovered X-rays in 1895. [ 8 ] Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation . The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the x-ray notation for sharp spectral lines, noting in 1909 two separate energies: at first he named them "A" and "B", then, supposing that there may be lines prior to "A", he started an alphabetical numbering beginning with "K." [ 9 ] [ 10 ] Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom . [ 11 ] X-rays have not only wave but also particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombarded a material. [ 10 ] Albert Einstein introduced the photon concept in 1905, [ 12 ] but it was not broadly accepted until 1922, [ 13 ] [ 14 ] when Arthur Compton confirmed it by the scattering of X-rays from electrons. [ 15 ] The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. [ 16 ] [ 17 ] [ 18 ] [ 19 ] Bragg's view proved unpopular, and the observation of X-ray diffraction by Max von Laue in 1912 [ 20 ] confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light , since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate . After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). [ 20 ] [ 21 ] Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. [ 22 ]
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg . In 1912–1913, the younger Bragg developed Bragg's law , which connects the scattering with evenly spaced planes within a crystal. [ 8 ] [ 23 ] [ 24 ] [ 25 ] The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms.
The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt . [ 26 ] [ 27 ] [ 28 ] The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds . [ 29 ] The structure of diamond was solved in the same year, [ 30 ] [ 31 ] proving the tetrahedral arrangement of its chemical bonds and showing that the length of C–C single bond was about 1.52 angstroms. Other early structures included copper, [ 32 ] calcium fluoride (CaF 2 , also known as fluorite ), calcite (CaCO 3 ) and pyrite (FeS 2 ) [ 33 ] in 1914; spinel (MgAl 2 O 4 ) in 1915; [ 34 ] [ 35 ] the rutile and anatase forms of titanium dioxide (TiO 2 ) in 1916; [ 36 ] pyrochroite (Mn(OH) 2 ) and, by extension, brucite (Mg(OH) 2 ) in 1919. [ 37 ] [ 38 ] Also in 1919, sodium nitrate (NaNO 3 ) and caesium dichloroiodide (CsICl 2 ) were determined by Ralph Walter Graystone Wyckoff , and the wurtzite (hexagonal ZnS) structure was determined in 1920. [ 39 ]
The structure of graphite was solved in 1916 [ 40 ] by the related method of powder diffraction , [ 41 ] which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. [ 42 ] The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. [ 43 ] [ 44 ] Hull also used the powder method to determine the structures of various metals, such as iron [ 45 ] and magnesium. [ 46 ]
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions . The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, [ 30 ] the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), [ 47 ] and the resonance observed in the planar carbonate group [ 33 ] and in aromatic molecules. [ 48 ] Kathleen Lonsdale 's 1928 structure of hexamethylbenzene [ 49 ] established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. [ 50 ] Her conclusions were anticipated by William Henry Bragg , who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement . [ 48 ] [ 51 ]
The first structure of an organic compound, hexamethylenetetramine , was solved in 1923. [ 52 ] This was rapidly followed by several studies of different long-chain fatty acids , which are an important component of biological membranes . [ 53 ] [ 54 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] [ 59 ] [ 60 ] [ 61 ] In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine , [ 62 ] a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme , corrin and chlorophyll .
In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile , brookite and anatase forms of titanium dioxide .
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order ; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry , such as metal-metal double bonds, [ 63 ] [ 64 ] [ 65 ] metal-metal quadruple bonds, [ 66 ] [ 67 ] [ 68 ] and three-center, two-electron bonds . [ 69 ] X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds . [ 70 ] In the field of organometallic chemistry , the X-ray structure of ferrocene initiated scientific studies of sandwich compounds , [ 71 ] [ 72 ] while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. [ 73 ] [ 74 ] [ 75 ] [ 76 ] Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry , particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry . [ citation needed ]
The application of X-ray crystallography to mineralogy began with the structure of garnet , which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si / O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ] Most notably, Linus Pauling 's structure of the alloy Mg 2 Sn [ 84 ] led to his theory of the stability and structure of complex ionic crystals. [ 85 ] Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes , metalloporphyrins , and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry . The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography. [ 86 ]
On October 17, 2012, the Curiosity rover on the planet Mars at " Rocknest " performed the first X-ray diffraction analysis of Martian soil . The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar , pyroxenes and olivine , and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils " of Hawaiian volcanoes . [ 77 ]
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin , who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B 12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin , on which she worked for over thirty years. [ 87 ]
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew , [ 88 ] for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. [ 89 ] Since that success, 190,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. [ 90 ] The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy , which has resolved less than one tenth as many. [ 91 ] Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 k Da ). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. [ 92 ] However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome , and include many proteins of great physiological importance, such as ion channels and receptors . [ 93 ] [ 94 ] Helium cryogenics are used to prevent radiation damage in protein crystals. [ 95 ]
Two limiting cases of X-ray crystallography—"small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography—are often used. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit ; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology. [ 96 ]
The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning . [ 97 ]
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength ( monochromatic X-rays ), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. [ 98 ] As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections. [ 99 ]
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure —is usually stored in a public database. [ 100 ]
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing [ 101 ] [ 102 ] [ 103 ] and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. [ 104 ]
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization . By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded ; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. [ citation needed ]
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of its component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. [ 105 ] [ 106 ] The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, therefore yielding different rates of concentration change for a given precipitant/protein mixture. [ 107 ]
It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. [ 108 ] In practice, favorable conditions are identified by screening ; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. [ 109 ] Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities ( milligrams ) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops on the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment compared to crystallization trials set up by hand (on the order of 1 microliter ). [ 110 ]
Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. [ citation needed ] Crystals can be marred by twinning , which can occur when a unit cell can pack equally favorably in multiple orientations, although recent advances in computational methods may allow solving the structures of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior. [ citation needed ]
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor ). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen . [ 111 ] This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. [ 112 ] This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. [ citation needed ]
The capillary or loop is mounted on a goniometer , which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer. [ citation needed ]
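As a sketch of how the three rotations compose, the total orientation can be built up from rotations about the zero-position axes. The axis conventions below are illustrative assumptions under an idealized geometry, not the definition used by any particular instrument.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

alpha = np.radians(50)                                      # kappa-axis tilt, ~50 deg
omega_axis = np.array([0.0, 0.0, 1.0])                      # perpendicular to the beam
kappa_axis = np.array([0.0, np.sin(alpha), np.cos(alpha)])  # tilted from omega by alpha
phi_axis = omega_axis               # at kappa = 0 the phi and omega axes coincide

def orientation(omega_deg, kappa_deg, phi_deg):
    # Serial rotation stages compose as products of rotations about the
    # zero-position axes, outermost (omega) first.
    return (R.from_rotvec(np.radians(omega_deg) * omega_axis)
            * R.from_rotvec(np.radians(kappa_deg) * kappa_axis)
            * R.from_rotvec(np.radians(phi_deg) * phi_axis))

# With kappa = 0, omega and phi rotate about the same axis, as stated above:
print(orientation(10, 0, 20).as_rotvec())   # one 30-degree rotation about z
```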
The relative intensities of the reflections provide the information needed to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film , an area detector (such as a pixel detector ) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. [ citation needed ]
One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three-dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; in fact, slightly more than 180° is required to cover reciprocal space , due to the curvature of the Ewald sphere . However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space. [ citation needed ]
Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires recording the scattering at three or more (usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken. [ 113 ]
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group . Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 of the 230 possible space groups are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. [ 114 ] Having assigned symmetry, the data are then integrated . This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image).
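Schematically, each record in such an integrated file could look like the following; the field names are illustrative assumptions, not those of a specific file format such as MTZ.

```python
from dataclasses import dataclass

# One integrated reflection, mirroring the fields described above:
# Miller index, intensity, an error estimate, and a measure of partiality.
@dataclass
class Reflection:
    h: int
    k: int
    l: int
    intensity: float         # integrated intensity of the spot
    sigma: float             # error estimate on the intensity
    partiality: float = 1.0  # fraction of the reflection recorded on this image

data = [
    Reflection(1, 0, 0, 4520.0, 32.1),
    Reflection(1, 0, 0, 4488.5, 30.9, partiality=0.85),
    Reflection(2, 1, 3, 112.7, 8.4),
]
```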
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be merged and scaled: peaks that appear in two or more images are identified ( merging ), and the images are scaled so that they share a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor , a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
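A minimal sketch of such a symmetry-related reliability index, in the common R_merge form, assuming the Miller indices have already been mapped onto symmetry-equivalent form:

```python
import numpy as np
from collections import defaultdict

# R_merge = sum_hkl sum_i |I_i - <I>| / sum_hkl sum_i I_i, where the inner
# sums run over repeated measurements of symmetry-equivalent reflections.
def r_merge(measurements):
    groups = defaultdict(list)
    for hkl, intensity in measurements:
        groups[hkl].append(intensity)       # group equivalent measurements
    num = den = 0.0
    for intensities in groups.values():
        mean = np.mean(intensities)
        num += sum(abs(i - mean) for i in intensities)
        den += sum(intensities)
    return num / den

print(r_merge([((1, 0, 0), 4520.0), ((1, 0, 0), 4488.5), ((2, 1, 3), 112.7)]))
```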
The intensity of each diffraction 'spot' is proportional to the modulus squared of the structure factor . The structure factor is a complex number containing information relating to both the amplitude and phase of a wave . In order to obtain an interpretable electron density map , both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem . Initial phase estimates can be obtained in a variety of ways, including direct methods, molecular replacement from a structurally related molecule, anomalous X-ray scattering (MAD or SAD phasing), and heavy-atom methods such as multiple isomorphous replacement.
Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B -factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R -factor defined as

R = ∑ | | F obs | − | F calc | | / ∑ | F obs | {\displaystyle R={\frac {\sum \left||F_{\text{obs}}|-|F_{\text{calc}}|\right|}{\sum |F_{\text{obs}}|}}}
where F obs and F calc are the observed and calculated structure factor amplitudes. A similar quality criterion is R free , which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, R free should be approximately the resolution in angstroms divided by 10; thus, a data-set with 2 Å resolution should yield a final R free ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and the distribution of bond lengths and angles are complementary measures of the model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimations come from the model, each calculated map tends to show density wherever the model has density, regardless of whether density is truly there. This problem can be mitigated by maximum-likelihood weighting and checking using omit maps . [ 121 ]
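A small sketch of computing R and R_free from observed and calculated structure-factor amplitudes; the synthetic amplitudes, the ~5% model error and the 10% free-set fraction are illustrative assumptions.

```python
import numpy as np

def r_factor(f_obs, f_calc):
    """R = sum | |F_obs| - |F_calc| | / sum |F_obs|."""
    f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(0)
f_obs = rng.uniform(10.0, 100.0, size=5000)              # toy observed amplitudes
f_calc = f_obs * (1 + 0.05 * rng.standard_normal(5000))  # toy model, ~5% error

free = rng.random(5000) < 0.10    # ~10% of reflections held out of refinement
r_work = r_factor(f_obs[~free], f_calc[~free])
r_free = r_factor(f_obs[free], f_calc[free])
print(r_work, r_free)
```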
It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism . [ 122 ] Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered.
The use of computational methods for powder X-ray diffraction data analysis is now widespread. Such analysis typically compares the experimental data to a simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares minimization algorithm. Most available tools for phase identification and structural refinement are based on the Rietveld method ; [ 123 ] [ 124 ] some of them are open and free software, such as FullProf Suite, [ 125 ] [ 126 ] Jana2006, [ 127 ] MAUD, [ 128 ] [ 129 ] [ 130 ] Rietan [ 131 ] and GSAS, [ 132 ] while others are available under commercial licenses, such as Diffrac.Suite TOPAS [ 133 ] and Match!. [ 134 ] Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking the crystallographic structure itself into account. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, [ 135 ] which allows the refinement of structures with planar defects (e.g. stacking faults, twinning, intergrowths).
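A toy sketch of the idea behind such least-squares pattern refinement, fitting a simulated cubic-phase diffractogram; the Gaussian peak shape, the handful of reflections and the starting values are illustrative assumptions, far simpler than a real Rietveld model.

```python
import numpy as np
from scipy.optimize import least_squares

wavelength = 1.5406                      # Cu K-alpha, angstroms
hkl = [(1, 1, 1), (2, 0, 0), (2, 2, 0)]  # a few reflections of a cubic phase

def pattern(two_theta, a, width, scale):
    # Model diffractogram: Gaussian peaks at the Bragg positions of a cubic
    # cell with lattice parameter a.
    y = np.zeros_like(two_theta)
    for h, k, l in hkl:
        d = a / np.sqrt(h * h + k * k + l * l)
        peak = 2 * np.degrees(np.arcsin(wavelength / (2 * d)))  # Bragg's law
        y += scale * np.exp(-0.5 * ((two_theta - peak) / width) ** 2)
    return y

two_theta = np.linspace(20, 70, 2500)
observed = pattern(two_theta, a=4.05, width=0.12, scale=100.0)  # synthetic "data"

fit = least_squares(lambda p: pattern(two_theta, *p) - observed,  # residuals
                    x0=[4.00, 0.15, 90.0])                        # a, width, scale
print(fit.x)   # refines to ~[4.05, 0.12, 100.0]
```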
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science. [ 136 ]
Kathleen Lonsdale was a research student of William Henry Bragg , who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of Crystallography at University College London . [ 137 ] Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men. ... Is it utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?". [ 138 ] During this period, Lonsdale began a collaboration with William T. Astbury on a set of 230 space group tables, which was published in 1924 and became an essential tool for crystallographers.
In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, who was a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography . She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942 during the war and on vitamin B12 in 1948. While her group slowly grew, their predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject.
Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick 's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. Watson revealed in his autobiographic account of the discovery of the structure of DNA, The Double Helix , [ 139 ] that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses.
Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have showered Isabella with awards in her own right.
Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography , which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge , founded and ran the Cambridge Crystallographic Data Centre , an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker , a British scientist, co-authored Crystal Structure Analysis: A Primer , [ 140 ] first published in 1971 and as of 2010 in its third edition. Eleanor Dodson , an Australian-born biologist, who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4 , the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide.
|
https://en.wikipedia.org/wiki/X-ray_crystallography
|
X-ray diffraction is a generic term for phenomena associated with changes in the direction of X-ray beams due to interactions with the electrons around atoms. It occurs due to elastic scattering , when there is no change in the energy of the waves. The resulting map of the directions of the X-rays far from the sample is called a diffraction pattern. It is distinct from X-ray crystallography , which exploits X-ray diffraction to determine the arrangement of atoms in materials and also includes other components, such as ways to map from experimental diffraction measurements to the positions of atoms.
This article provides an overview of X-ray diffraction, starting with the early history of X-rays and the discovery that they have the right wavelengths to be diffracted by crystals. In many cases these diffraction patterns can be interpreted using a single-scattering or kinematical theory with conservation of energy ( wave vector ). Many different types of X-ray sources exist, ranging from ones used in laboratories to higher-brightness synchrotron light sources . Similar diffraction patterns can be produced by related scattering techniques such as electron diffraction or neutron diffraction . If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction , powder diffraction and (if the sample is not crystallized) small-angle X-ray scattering (SAXS).
When Wilhelm Röntgen discovered X-rays in 1895, [ 1 ] physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation . The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the x-ray notation for sharp spectral lines, noting in 1909 two separate energies: at first he named them "A" and "B", then, supposing that there may be lines prior to "A", he started an alphabetical numbering beginning with "K." [ 2 ] [ 3 ] Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom . [ 4 ] X-rays have not only wave but also particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombarded a material. [ 3 ] Albert Einstein introduced the photon concept in 1905, [ 5 ] but it was not broadly accepted until 1922, [ 6 ] [ 7 ] when Arthur Compton confirmed it by the scattering of X-rays from electrons. [ 8 ] The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. [ 9 ] [ 10 ] [ 11 ] [ 12 ] Bragg's view proved unpopular, and the observation of X-ray diffraction by Max von Laue in 1912 [ 13 ] confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light , since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction pattern on a photographic plate . [ 14 ] : 43 After being developed, the plate showed rings of fuzzy spots of roughly elliptical shape. Despite being crude and unclear, the image confirmed the concept of diffraction. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). [ 13 ] [ 15 ]
After seeing the initial results, Laue was walking home and suddenly conceived of the physical laws describing the effect. [ 14 ] : 44 Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. [ 16 ]
After Von Laue's pioneering research the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg . In 1912–1913, the younger Bragg developed Bragg's law , which connects the scattering with evenly spaced planes within a crystal. [ 1 ] [ 17 ] [ 18 ] [ 19 ] The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms; see X-ray crystallography for more details.
Crystals are regular arrays of atoms, and X-rays are electromagnetic waves. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering , and the electron (or lighthouse) is known as the scatterer . A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference , they add constructively in a few specific directions. [ 20 ] [ 21 ] [ 22 ]
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction . In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices ( h , k , l ), and their spacing by d . William Lawrence Bragg proposed a model where the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively ( constructive interference ) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.
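Bragg's law, nλ = 2d sin θ, can be evaluated directly; the spacing and wavelength below are illustrative values, roughly those of a silicon (111) plane and Cu Kα radiation.

```python
import numpy as np

def bragg_angles(d, wavelength, n_max=4):
    """Angles theta (degrees) satisfying n*lambda = 2*d*sin(theta)."""
    angles = []
    for n in range(1, n_max + 1):
        s = n * wavelength / (2 * d)
        if s <= 1.0:          # a reflection exists only while sin(theta) <= 1
            angles.append((n, np.degrees(np.arcsin(s))))
    return angles

print(bragg_angles(d=3.14, wavelength=1.54))
# roughly [(1, 14.2), (2, 29.4), (3, 47.4), (4, 78.8)]
```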
A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters , the lengths and angles of the unit-cell, as well as its space group . [ 20 ]
Each X-ray diffraction pattern represents a spherical slice of reciprocal space, as may be seen by the Ewald sphere construction. For a given incident wavevector k 0 , the only wavevectors with the same energy lie on the surface of a sphere. In the diagram, the wavevector k 1 lies on the Ewald sphere and ends at a reciprocal lattice vector g 1 , so it satisfies Bragg's law. In contrast, the wavevector k 2 misses the reciprocal lattice point g 2 by the vector s , which is called the excitation error. For the large single crystals primarily used in crystallography, only the Bragg's law case matters; for electron diffraction and some other types of X-ray diffraction, non-zero values of the excitation error also matter. [ 22 ]
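A minimal numerical sketch of this bookkeeping, using the convention that the excitation error is the radial distance of the reciprocal lattice point from the Ewald sphere; the vectors are illustrative assumptions.

```python
import numpy as np

# For incident wavevector k0, a reciprocal lattice vector g diffracts when
# |k0 + g| = |k0| (the point lies on the Ewald sphere).  The excitation
# error s measures the miss distance; s = 0 recovers Bragg's law.
k0 = np.array([0.0, 0.0, 4.08])      # |k0| = 2*pi/lambda for lambda ~ 1.54 A
g = np.array([0.5, 0.0, -0.03])      # reciprocal lattice vector (1/angstrom)

s = np.linalg.norm(k0) - np.linalg.norm(k0 + g)   # radial distance from sphere
print(s)   # near zero: this g is very close to the diffracting condition
```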
X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering , the elastic interaction of an electromagnetic ray with a charged particle.
The intensity of Thomson scattering for one particle with mass m and elementary charge q is: [ 21 ]

I = I 0 ( q 4 / m 2 c 4 R 2 ) ( 1 + cos 2 2 θ ) / 2 {\displaystyle I=I_{0}\,{\frac {q^{4}}{m^{2}c^{4}R^{2}}}\,{\frac {1+\cos ^{2}2\theta }{2}}}

where I 0 is the intensity of the incident beam, R is the distance to the observation point, 2 θ is the scattering angle and c is the speed of light (Gaussian units).
Hence the atomic nuclei, which are much heavier than an electron, contribute negligibly to the scattered X-rays. Consequently, the coherent scattering detected from an atom can be accurately approximated by analyzing the collective scattering from the electrons in the system. [ 20 ]
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, it will be represented here as a scalar wave. We will ignore the time dependence of the wave and just concentrate on the wave's spatial dependence. Plane waves can be represented by a wave vector k in , and so the incoming wave at time t = 0 is given by

A e i k in ⋅ r {\displaystyle Ae^{i\,\mathbf {k} _{\text{in}}\cdot \mathbf {r} }}

where A is its amplitude.
At a position r within the sample, consider a density of scatterers f ( r ); these scatterers produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r

amplitude of scattered wave = A e i k in ⋅ r S f ( r ) d V {\displaystyle {\text{amplitude of scattered wave}}=Ae^{i\,\mathbf {k} _{\text{in}}\cdot \mathbf {r} }\,Sf(\mathbf {r} )\,dV}
where S is the proportionality constant.
Consider the fraction of scattered waves that leave with an outgoing wave-vector of k out and strike a screen (detector) at r screen . Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors | k in | = | k out |. From the time that the photon is scattered at r until it is absorbed at r screen , the photon undergoes a change in phase

e i k out ⋅ ( r screen − r ) {\displaystyle e^{i\,\mathbf {k} _{\text{out}}\cdot \left(\mathbf {r} _{\text{screen}}-\mathbf {r} \right)}}
The net radiation arriving at r screen is the sum of all the scattered waves throughout the crystal

A S ∫ d r f ( r ) e i k in ⋅ r e i k out ⋅ ( r screen − r ) = A S e i k out ⋅ r screen ∫ d r f ( r ) e i ( k in − k out ) ⋅ r {\displaystyle AS\int d\mathbf {r} \,f(\mathbf {r} )\,e^{i\,\mathbf {k} _{\text{in}}\cdot \mathbf {r} }\,e^{i\,\mathbf {k} _{\text{out}}\cdot \left(\mathbf {r} _{\text{screen}}-\mathbf {r} \right)}=ASe^{i\,\mathbf {k} _{\text{out}}\cdot \mathbf {r} _{\text{screen}}}\int d\mathbf {r} \,f(\mathbf {r} )\,e^{i\left(\mathbf {k} _{\text{in}}-\mathbf {k} _{\text{out}}\right)\cdot \mathbf {r} }}
which may be written as a Fourier transform

A S e i k out ⋅ r screen ∫ d r f ( r ) e − i g ⋅ r = A S e i k out ⋅ r screen F ( g ) {\displaystyle ASe^{i\,\mathbf {k} _{\text{out}}\cdot \mathbf {r} _{\text{screen}}}\int d\mathbf {r} \,f(\mathbf {r} )\,e^{-i\,\mathbf {g} \cdot \mathbf {r} }=ASe^{i\,\mathbf {k} _{\text{out}}\cdot \mathbf {r} _{\text{screen}}}F(\mathbf {g} )}
where g = k out − k in is a reciprocal lattice vector that satisfies Bragg's law and the Ewald construction mentioned above. The measured intensity of the reflection will be the square of this amplitude [ 20 ] [ 21 ]

I ( g ) ∝ | F ( g ) | 2 {\displaystyle I(\mathbf {g} )\propto \left|F(\mathbf {g} )\right|^{2}}
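This Fourier-transform relationship can be checked numerically on a toy one-dimensional "crystal" of equally spaced Gaussian atoms; every parameter below is an illustrative assumption.

```python
import numpy as np

a, n_cells, n_pts = 5.0, 64, 4096      # lattice constant (A), cells, samples
x = np.linspace(0, a * n_cells, n_pts, endpoint=False)
# Density f(x): one Gaussian "atom" per unit cell.
f = sum(np.exp(-0.5 * ((x - (i + 0.5) * a) / 0.3) ** 2) for i in range(n_cells))

amplitude = np.fft.rfft(f)                       # F(g), the transform of f
g = 2 * np.pi * np.fft.rfftfreq(n_pts, d=x[1] - x[0])
intensity = np.abs(amplitude) ** 2               # measured intensity ~ |F(g)|^2
# intensity shows sharp Bragg peaks at g = 2*pi*m/a for m = 1, 2, 3, ...
```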
This derivation assumes that the crystalline regions are somewhat large, for instance microns across, but also not so large that the X-rays are scattered more than once. If either condition does not hold, the diffracted intensities will be more complicated. [ 22 ] [ 23 ]
Small scale diffraction experiments can be done with a local X-ray tube source, typically coupled with an image plate detector. These have the advantage of being relatively inexpensive and easy to maintain, and allow for quick screening and collection of samples. However, the wavelength of the X-rays produced is limited by the availability of different anode materials. Furthermore, the intensity is limited by the power applied and cooling capacity available to avoid melting the anode. In such systems, electrons are boiled off of a cathode and accelerated through a strong electric potential of ~50 kV ; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily due to its high thermal conductivity , and which produces strong K α and K β lines. The K β line is sometimes suppressed with a thin (~10 μm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube ) and runs with ~2 kW of electron beam power. The more expensive variety has a rotating-anode type source that runs with ~14 kW of e-beam power.
X-rays are generally filtered (by use of X-ray filters ) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with an arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å).
A more recent development is the microfocus tube , which can deliver at least as high a beam flux (after collimation) as rotating-anode sources but only requires a beam power of a few tens or hundreds of watts rather than several kilowatts.
Synchrotron radiation sources are some of the brightest light sources on earth and are some of the most powerful tools available for X-ray diffraction and crystallography. X-ray beams are generated in synchrotrons which accelerate electrically charged particles, often electrons, to nearly the speed of light and confine them in a (roughly) circular loop using magnetic fields.
Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected without interruption. Synchrotrons were originally designed for use by high-energy physicists studying subatomic particles and cosmic phenomena. The largest component of each synchrotron is its electron storage ring. This ring is not a perfect circle, but a many-sided polygon. At each corner of the polygon, or sector, precisely aligned magnets bend the electron stream. As the electrons' path is bent, they emit bursts of energy in the form of X-rays.
The intense ionizing radiation can cause radiation damage to samples, particularly macromolecular crystals. Cryocrystallography can protect the sample from radiation damage by freezing the crystal at liquid-nitrogen temperatures (~100 K ). [ 24 ] Cryocrystallography methods are applied to home-source rotating anodes as well. [ 25 ] However, synchrotron radiation frequently has the advantage of user-selectable wavelengths, allowing for anomalous scattering experiments that maximize the anomalous signal. This is critical in experiments such as single-wavelength anomalous dispersion (SAD) and multi-wavelength anomalous dispersion (MAD).
Free-electron lasers have been developed for use in X-ray diffraction and crystallography. [ 26 ] These are the brightest X-ray sources currently available, with the X-rays coming in femtosecond bursts. The intensity of the source is such that atomic-resolution diffraction patterns can be resolved for crystals otherwise too small for data collection. However, the intense light source also destroys the sample, [ 27 ] requiring multiple crystals to be shot. As each crystal is randomly oriented in the beam, hundreds of thousands of individual diffraction images must be collected to obtain a complete data set. This method, serial femtosecond crystallography , has been used in solving the structures of a number of protein crystals, sometimes noting differences with equivalent structures collected from synchrotron sources. [ 28 ]
Other forms of elastic X-ray scattering besides single-crystal diffraction include powder diffraction , small-angle X-ray scattering ( SAXS ) and several types of X-ray fiber diffraction , which was used by Rosalind Franklin in determining the double-helix structure of DNA . In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.
These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events ( time resolved crystallography ). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.
The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used [ 29 ] to interpret the back reflection Laue photograph.
Because electrons interact with matter via the Coulomb force, their scattering is 1,000 or more times stronger than that of X-rays. Hence electron beams produce strong multiple or dynamical scattering even for relatively thin crystals (>10 nm). While there are similarities between the diffraction of X-rays and of electrons, as can be found in the book by John M. Cowley , [ 22 ] the approach is different: it is based upon the original approach of Hans Bethe [ 30 ] and on solving the Schrödinger equation for relativistic electrons, rather than on a kinematical or Bragg's law approach. Information about very small regions, down to single atoms, is possible. The range of applications for electron diffraction , transmission electron microscopy and transmission electron crystallography with high-energy electrons is extensive; see the relevant links for more information and citations. In addition to transmission methods, low-energy electron diffraction [ 31 ] is a technique where electrons are back-scattered off surfaces; it has been extensively used to determine surface structures at the atomic scale, while reflection high-energy electron diffraction is extensively used to monitor thin-film growth. [ 32 ]
Neutron diffraction is used for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although sources producing neutrons by spallation are becoming increasingly available. Being uncharged, neutrons scatter from the atomic nuclei rather than from the electrons. Therefore, neutron scattering is useful for observing the positions of light atoms with few electrons, especially hydrogen , which is essentially invisible in X-ray diffraction. Neutron scattering also has the property that the solvent can be made invisible by adjusting the ratio of normal water, H 2 O, and heavy water , D 2 O.
|
https://en.wikipedia.org/wiki/X-ray_diffraction
|
X-ray diffraction computed tomography is an experimental technique that combines X-ray diffraction with the computed tomography data acquisition approach. X-ray diffraction (XRD) computed tomography (CT) was first introduced in 1987 by Harding et al. [ 1 ] using a laboratory diffractometer and a monochromatic X-ray pencil beam . The first implementation of the technique at synchrotron facilities was performed in 1998 by Kleuker et al. [ 2 ]
X-ray diffraction computed tomography can be divided into two main categories depending on how the XRD data are treated: the data can be handled either as powder diffraction or as single-crystal diffraction data, depending on the sample properties. If the sample contains small and randomly oriented crystals, it generates smooth powder diffraction "rings" when using a 2D area detector. If the sample contains large crystals, it generates "spotty" 2D diffraction patterns. The latter case can also be handled using a letterbox, cone or parallel X-ray beam, and yields 2D or 3D images corresponding to maps of the crystallites or "grains" present in the sample and their properties, such as stress or strain . [ 3 ] Several variations of this approach exist, including 3DXRD , [ 4 ] X-ray diffraction contrast tomography (DCT) [ 5 ] and high energy X-ray diffraction microscopy (HEDM). [ 6 ]
X-ray diffraction computed tomography, often abbreviated as XRD-CT, typically refers to the technique invented by Harding et al. [ 1 ] which assumes that the acquired data are powder diffraction data. For this reason, it has also been referred to as powder diffraction computed tomography [ 7 ] and diffraction scattering computed tomography (DSCT); [ 8 ] both names refer to the same method.
XRD-CT employs a monochromatic pencil beam scanning approach and captures the diffraction signal in transmission geometry, producing a diffraction projection dataset. In this setup, the sample moves along an axis perpendicular to the beam's direction. It is illuminated with a monochromatic, finely collimated or focused "pencil" X-ray beam. A 2D area detector then records the scattered X-rays, optimizing for best counting statistics and speed. Typically, the translational scan's size surpasses the sample's diameter, ensuring its full coverage at all assessed angles. The size of the translation step is commonly aligned with the X-ray beam's horizontal size. In a perfect scenario for any pencil-beam scanning tomographic method, the number of measured angles should equal the number of translation steps multiplied by π/2, adhering to the Nyquist sampling theorem . However, in practice this number can often be reduced to be equal to the number of translation steps without substantially compromising the quality of the reconstructed images. The usual angular range spans from 0 to π.
In most studies, the predominant data reconstruction approach is the 'reverse analysis' introduced by Bleuet et al. [ 9 ] where each sinogram is treated independently yielding a new CT image. Most often the filtered back projection reconstruction algorithm [ 10 ] is employed to reconstruct the XRD-CT images. The outcome is an image in which every pixel, or more accurately voxel , equates to a local diffraction pattern. The reconstructed data can also be seen as a stack of 2D square images, where each image corresponds to an X-ray scattering angle.
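The 'reverse analysis' route can be sketched in a few lines of code. The following Python snippet is a minimal illustration under assumed conditions, not a published implementation: the array name sinograms, its ordering (scattering channel, rotation angle, translation step), and all sizes are hypothetical, and scikit-image's iradon routine stands in for the filtered back projection step.

```python
import numpy as np
from skimage.transform import iradon  # filtered back projection

# Hypothetical XRD-CT projection data: one sinogram per detector
# (scattering-angle) channel, shaped (channel, rotation angle, translation).
n_channels, n_angles, n_translations = 50, 120, 120
rng = np.random.default_rng(0)
sinograms = rng.random((n_channels, n_angles, n_translations))
theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)  # 0 to pi

# 'Reverse analysis': reconstruct each scattering channel independently.
# Stacking the resulting CT images gives a volume in which each voxel,
# read across the channel axis, is a local diffraction pattern.
xrdct = np.stack([
    iradon(s.T, theta=theta, filter_name="ramp", circle=True)
    for s in sinograms
])
print(xrdct.shape)  # (n_channels, n_translations, n_translations)
```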
XRD-CT makes the following assumptions:
In practice, one or more of these assumptions may not be valid, and the data then suffer from artefacts. There are strategies to remove, or significantly reduce, all of these artefacts:
Analyzing the local diffraction patterns can range from basic single-peak sequential batch fitting to a comprehensive one-step full-profile analysis, known as ' Rietveld -CT' (Wragg et al., 2015 [ 15 ] ). The latter method stands out for its efficiency over the typical sequential method since it shares global parameters across all local models. Examples of these parameters include zero error and instrumental broadening, which enhance the refinement process's stability. To elaborate, each voxel in the reconstructed images is made up of a local model (like multi-phase scale factors, lattice parameters, and crystallite sizes) tailored to match the corresponding local diffraction pattern. This implies that only the overarching parameters are consistent across local models. However, the application of Rietveld-CT has been limited to small images, specifically those of 60 × 60 voxels, with the feasibility for larger images hinging on the computer memory available. Most often, though, full-profile analysis of the local diffraction patterns is performed on a pixel-by-pixel or line-by-line basis using conventional XRD data analysis methods, such as LeBail , Pawley and Rietveld . All these methods employ fitting based on the reconstructed diffraction patterns. Another approach, which is also computationally expensive, is DLSR, which performs the tomographic data reconstruction and peak fitting in a single step. [ 11 ] Regardless of the chosen analytical method, the final output comprises images filled with localized physico-chemical information. Each physico-chemical image corresponds to the refined parameters present in the local models, which might include maps that correspond to scale factors, lattice parameters , and crystallite sizes.
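As an illustration of the simplest route mentioned above, sequential single-peak batch fitting, the sketch below fits a Gaussian to the local pattern of every voxel of a hypothetical reconstructed XRD-CT dataset and maps the refined peak position. All array names, sizes and peak parameters are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, cen, wid, bkg):
    return amp * np.exp(-((x - cen) ** 2) / (2 * wid ** 2)) + bkg

# Hypothetical reconstructed XRD-CT data: a (ny, nx, n_channels) array in
# which each voxel holds a local diffraction pattern (see above).
ny, nx, n_ch = 8, 8, 200
two_theta = np.linspace(5.0, 25.0, n_ch)
pattern = 1.0 + 50.0 * np.exp(-((two_theta - 12.3) ** 2) / 0.02)  # synthetic peak
data = np.tile(pattern, (ny, nx, 1))
data += np.random.default_rng(0).normal(0.0, 0.5, (ny, nx, n_ch))

# Sequential single-peak batch fitting: fit each voxel's pattern
# independently and map the refined peak position.
peak_position = np.zeros((ny, nx))
for iy in range(ny):
    for ix in range(nx):
        p0 = [data[iy, ix].max(), 12.0, 0.1, 1.0]  # initial guess
        popt, _ = curve_fit(gaussian, two_theta, data[iy, ix], p0=p0)
        peak_position[iy, ix] = popt[1]
print(peak_position.round(3))
```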
|
https://en.wikipedia.org/wiki/X-ray_diffraction_computed_tomography
|
In astronomy , an X-ray flash is a transient emission of X-rays originating in a distant galaxy , probably caused by a hypernova . They have been observed to last 90 to 200 seconds. [ 1 ]
Nearly all hypernovae are detected via (higher-energy) gamma-ray photons, at distances too great for any associated X-ray emissions from them to be observed; nevertheless, the two main theories of the nature of an X-ray flash each assume that a hypernova is involved: [ 1 ]
|
https://en.wikipedia.org/wiki/X-ray_flash_(astronomy)
|
X-ray fluorescence ( XRF ) is the emission of characteristic "secondary" (or fluorescent) X-rays from a material that has been excited by being bombarded with high-energy X-rays or gamma rays . The phenomenon is widely used for elemental analysis and chemical analysis , particularly in the investigation of metals , glass , ceramics and building materials, and for research in geochemistry , forensic science , archaeology and art objects [ 1 ] such as paintings . [ 2 ] [ 3 ]
When materials are exposed to short- wavelength X-rays or to gamma rays, ionization of their component atoms may take place. Ionization consists of the ejection of one or more electrons from the atom, and may occur if the atom is exposed to radiation with an energy greater than its ionization energy . X-rays and gamma rays can be energetic enough to expel tightly held electrons from the inner orbitals of the atom. The removal of an electron in this way makes the electronic structure of the atom unstable, and electrons in higher orbitals "fall" into the lower orbital to fill the hole left behind. In falling, energy is released in the form of a photon, the energy of which is equal to the energy difference of the two orbitals involved. Thus, the material emits radiation, which has energy characteristic of the atoms present. The term fluorescence is applied to phenomena in which the absorption of radiation of a specific energy results in the re-emission of radiation of a different energy (generally lower).
Each element has electronic orbitals of characteristic energy. Following removal of an inner electron by an energetic photon provided by a primary radiation source, an electron from an outer shell drops into its place. There are a limited number of ways in which this can happen, as shown in Figure 1. The main transitions are given names: an L→K transition is traditionally called K α , an M→K transition is called K β , an M→L transition is called L α , and so on. Each of these transitions yields a fluorescent photon with a characteristic energy equal to the difference in energy of the initial and final orbital. The wavelength of this fluorescent radiation can be calculated from Planck's law : λ = hc / E . For example, a copper K α photon of energy 8.05 keV has a wavelength of about 0.154 nm.
The fluorescent radiation can be analysed either by sorting the energies of the photons ( energy-dispersive analysis) or by separating the wavelengths of the radiation ( wavelength-dispersive analysis). Once sorted, the intensity of each characteristic radiation is directly related to the amount of each element in the material. This is the basis of a powerful technique in analytical chemistry . Figure 2 shows the typical form of the sharp fluorescent spectral lines obtained in the wavelength-dispersive method (see Moseley's law ).
In order to excite the atoms, a source of radiation is required, with sufficient energy to expel tightly held inner electrons. Conventional X-ray generators , based on electron bombardment of a heavy metal (e.g. tungsten or rhodium ) target, are most commonly used, because their output can readily be "tuned" for the application, and because higher power can be deployed relative to other techniques. X-ray generators in the range 20–60 kV are used, which allow excitation of a broad range of atoms. The continuous spectrum consists of " bremsstrahlung " radiation: radiation produced when high-energy electrons passing through the tube are progressively decelerated by the material of the tube anode (the "target"). A typical tube output spectrum is shown in Figure 3.
For portable XRF spectrometers, a copper target is usually bombarded with high-energy electrons that are produced either by laser impact or by pyroelectric crystals. [ 4 ] [ 5 ]
Alternatively, gamma ray sources, based on radioactive isotopes (such as 109 Cd, 57 Co, 55 Fe, 238 Pu and 241 Am) can be used without the need for an elaborate power supply, allowing for easier use in small, portable instruments. [ 6 ]
When the energy source is a synchrotron or the X-rays are focused by an optic like a polycapillary , the X-ray beam can be very small and very intense. As a result, atomic information on the sub-micrometer scale can be obtained.
In energy-dispersive analysis, the fluorescent X-rays emitted by the material sample are directed into a solid-state detector which produces a "continuous" distribution of pulses, the voltages of which are proportional to the incoming photon energies. This signal is processed by a multichannel analyzer (MCA) which produces an accumulating digital spectrum that can be processed to obtain analytical data.
In wavelength-dispersive analysis, the fluorescent X-rays emitted by the sample are directed into a diffraction grating -based monochromator . The diffraction grating used is usually a single crystal. By varying the angle of incidence and take-off on the crystal, a small X-ray wavelength range can be selected. The wavelength obtained is given by Bragg's law :
n λ = 2 d sin θ,
where d is the spacing of atomic layers parallel to the crystal surface and n is an integer (the order of reflection).
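A short numeric sketch of this relation follows: it computes the crystal angle needed to select a given wavelength. The LiF(200) d-spacing of about 0.2014 nm and the Cu K α wavelength are standard literature values; the function name is of course invented.

```python
import math

def bragg_angle_deg(wavelength_nm, d_nm, n=1):
    """Angle theta (degrees) satisfying n * lambda = 2 * d * sin(theta)."""
    s = n * wavelength_nm / (2.0 * d_nm)
    if s > 1.0:
        raise ValueError("n * lambda exceeds 2d: reflection not accessible")
    return math.degrees(math.asin(s))

# Example: LiF(200) crystal (d ~ 0.2014 nm) selecting Cu K-alpha (0.15406 nm)
print(round(bragg_angle_deg(0.15406, 0.2014), 2))  # ~22.5 degrees
```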
In energy-dispersive analysis, dispersion and detection are a single operation, as already mentioned above. Proportional counters or various types of solid-state detectors ( PIN diode , Si(Li), Ge(Li), silicon drift detector SDD) are used. They all share the same detection principle: An incoming X-ray photon ionizes a large number of detector atoms with the amount of charge produced being proportional to the energy of the incoming photon. The charge is then collected and the process repeats itself for the next photon. Detector speed is obviously critical, as all charge carriers measured have to come from the same photon to measure the photon energy correctly (pulse-length discrimination is used to eliminate events that seem to have been produced by two X-ray photons arriving almost simultaneously). The spectrum is then built up by dividing the energy spectrum into discrete bins and counting the number of pulses registered within each energy bin. EDXRF detector types vary in resolution, speed and the means of cooling (a low number of free charge carriers is critical in the solid state detectors): proportional counters with resolutions of several hundred eV cover the low end of the performance spectrum, followed by PIN diode detectors, while the Si(Li), Ge(Li) and SDDs occupy the high end of the performance scale.
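The binning step described above amounts to accumulating a histogram of pulse energies, as a multichannel analyzer does. A minimal sketch with simulated pulses follows; the line energy, widths and counts are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pulse energies (eV), one per detected photon: here a
# simulated fluorescence line (~1740 eV) on a flat scattered background.
pulses = np.concatenate([
    rng.normal(1740.0, 80.0, 5000),     # line photons, detector-broadened
    rng.uniform(500.0, 10000.0, 2000),  # background events
])

# Multichannel analyzer: divide the energy range into discrete bins and
# count the pulses registered within each bin to build the spectrum.
counts, edges = np.histogram(pulses, bins=1024, range=(0.0, 10240.0))
peak_bin = counts.argmax()
print(f"peak at ~{edges[peak_bin]:.0f} eV with {counts[peak_bin]} counts")
```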
In wavelength-dispersive analysis, the single-wavelength radiation produced by the monochromator is passed into a chamber containing a gas that is ionized by the X-ray photons. A central electrode is charged at (typically) +1700 V with respect to the conducting chamber walls, and each photon triggers a pulse-like cascade of current across this field. The signal is amplified and transformed into an accumulating digital count. These counts are then processed to obtain analytical data.
The fluorescence process is inefficient, and the secondary radiation is much weaker than the primary beam. Furthermore, the secondary radiation from lighter elements is of relatively low energy (long wavelength) and has low penetrating power, and is severely attenuated if the beam passes through air for any distance. Because of this, for high-performance analysis, the path from tube to sample to detector is maintained under vacuum (around 10 Pa residual pressure). This means in practice that most of the working parts of the instrument have to be located in a large vacuum chamber. The problems of maintaining moving parts in vacuum, and of rapidly introducing and withdrawing the sample without losing vacuum, pose major challenges for the design of the instrument. For less demanding applications, or when the sample is damaged by a vacuum (e.g. a volatile sample), a helium-swept X-ray chamber can be substituted, with some loss of low-Z (Z = atomic number ) intensities.
The use of a primary X-ray beam to excite fluorescent radiation from the sample was first proposed by Glocker and Schreiber in 1928. [ 7 ] Today, the method is used as a non-destructive analytical technique, and as a process control tool in many extractive and processing industries. In principle, the lightest element that can be analysed is beryllium (Z = 4), but due to instrumental limitations and low X-ray yields for the light elements, it is often difficult to quantify elements lighter than sodium (Z = 11), unless background corrections and very comprehensive inter-element corrections are made.
In energy-dispersive spectrometers (EDX or EDS), the detector allows the determination of the energy of the photon when it is detected. Detectors historically have been based on silicon semiconductors, in the form of lithium-drifted silicon crystals, or high-purity silicon wafers.
These consist essentially of a 3–5 mm thick silicon junction type p-i-n diode (same as PIN diode) with a bias of −1000 V across it. The lithium-drifted centre part forms the non-conducting i-layer, where Li compensates the residual acceptors which would otherwise make the layer p-type. When an X-ray photon passes through, it causes a swarm of electron-hole pairs to form, and this causes a voltage pulse. To obtain sufficiently low conductivity, the detector must be maintained at low temperature, and liquid-nitrogen cooling must be used for the best resolution. With some loss of resolution, the much more convenient Peltier cooling can be employed. [ 8 ]
More recently, high-purity silicon wafers with low conductivity have become routinely available. Cooled by the Peltier effect , this provides a cheap and convenient detector, although the liquid nitrogen cooled Si(Li) detector still has the best resolution (i.e. ability to distinguish different photon energies).
The pulses generated by the detector are processed by pulse-shaping amplifiers. It takes time for the amplifier to shape the pulse for optimum resolution, and there is therefore a trade-off between resolution and count-rate: long processing time for good resolution results in "pulse pile-up" in which the pulses from successive photons overlap. Multi-photon events are, however, typically more drawn out in time (photons did not arrive exactly at the same time) than single photon events and pulse-length discrimination can thus be used to filter most of these out. Even so, a small number of pile-up peaks will remain and pile-up correction should be built into the software in applications that require trace analysis. To make the most efficient use of the detector, the tube current should be reduced to keep multi-photon events (before discrimination) at a reasonable level, e.g. 5–20%.
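The count-rate versus pile-up trade-off can be estimated by assuming Poisson photon arrivals: the probability that a second photon arrives within the pulse-processing time τ of a first is 1 − exp(−rate·τ). The sketch below uses an illustrative τ; real processing times depend on the detector and shaping settings.

```python
import math

tau = 2e-6  # s, hypothetical pulse-shaping/processing time

# Fraction of events spoiled by a second photon within tau, for Poisson
# arrivals at the given incoming rate (counts per second).
for rate in (1e4, 5e4, 1e5, 2e5):
    p_pileup = 1.0 - math.exp(-rate * tau)
    print(f"{rate:8.0f} cps -> ~{100 * p_pileup:4.1f}% multi-photon events")
# Keeping pile-up in the 5-20% range quoted above caps the usable tube current.
```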
Considerable computer power is dedicated to correcting for pulse-pile up and for extraction of data from poorly resolved spectra. These elaborate correction processes tend to be based on empirical relationships that may change with time, so that continuous vigilance is required in order to obtain chemical data of adequate precision.
Digital pulse processors are widely used in high performance nuclear instrumentation. They are able to effectively reduce pile-up and baseline shifts, allowing for easier processing. A low-pass filter is integrated, improving the signal-to-noise ratio. The digital pulse processor requires a significant amount of power to run, but it provides precise results.
EDX spectrometers are different from WDX spectrometers in that they are smaller, simpler in design and have fewer engineered parts; however, the accuracy and resolution of EDX spectrometers are lower than for WDX. EDX spectrometers can also use miniature X-ray tubes or gamma sources, which makes them cheaper and allows miniaturization and portability. This type of instrument is commonly used for portable quality control screening applications, such as testing toys for lead (Pb) content, sorting scrap metals, and measuring the lead content of residential paint. On the other hand, the low resolution and problems with low count rate and long dead-time make them inferior for high-precision analysis. They are, however, very effective for high-speed, multi-elemental analysis. Field-portable XRF analysers currently on the market weigh less than 2 kg, and have limits of detection on the order of 2 parts per million of lead (Pb) in pure sand. Using EDX in a scanning electron microscope , studies have been broadened to organic-based samples such as biological samples and polymers.
In wavelength dispersive spectrometers ( WDX or WDS ), the photons are separated by diffraction on a single crystal before being detected. Although wavelength dispersive spectrometers are occasionally used to scan a wide range of wavelengths, producing a spectrum plot as in EDS, they are usually set up to make measurements only at the wavelength of the emission lines of the elements of interest. This is achieved in two different ways:
In order to keep the geometry of the tube-sample-detector assembly constant, the sample is normally prepared as a flat disc, typically of diameter 20–50 mm. This is located at a standardized, small distance from the tube window. Because the X-ray intensity follows an inverse-square law, the tolerances for this placement and for the flatness of the surface must be very tight in order to maintain a repeatable X-ray flux. Ways of obtaining sample discs vary: metals may be machined to shape, minerals may be finely ground and pressed into a tablet, and glasses may be cast to the required shape. A further reason for obtaining a flat and representative sample surface is that the secondary X-rays from lighter elements often only emit from the top few micrometres of the sample. In order to further reduce the effect of surface irregularities, the sample is usually spun at 5–20 rpm. It is necessary to ensure that the sample is sufficiently thick to absorb the entire primary beam. For higher-Z materials, a few millimetres thickness is adequate, but for a light-element matrix such as coal, a thickness of 30–40 mm is needed.
The common feature of monochromators is the maintenance of a symmetrical geometry between the sample, the crystal and the detector. In this geometry the Bragg diffraction condition is obtained.
The X-ray emission lines are very narrow (see figure 2), so the angles must be defined with considerable precision. This is achieved in two ways:
A Söller collimator is a stack of parallel metal plates, spaced a few tenths of a millimeter apart. To improve angular resolution, one must lengthen the collimator, and/or reduce the plate spacing. This arrangement has the advantage of simplicity and relatively low cost, but the collimators reduce intensity and increase scattering, and reduce the area of sample and crystal that can be "seen". The simplicity of the geometry is especially useful for variable-geometry monochromators.
The Rowland circle geometry ensures that the slits are both in focus, but in order for the Bragg condition to be met at all points, the crystal must first be bent to a radius of 2R (where R is the radius of the Rowland circle), then ground to a radius of R. This arrangement allows higher intensities (typically 8-fold) with higher resolution (typically 4-fold) and lower background. However, the mechanics of keeping Rowland circle geometry in a variable-angle monochromator is extremely difficult. In the case of fixed-angle monochromators (for use in simultaneous spectrometers), crystals bent to a logarithmic spiral shape give the best focusing performance. The manufacture of curved crystals to acceptable tolerances increases their price considerably.
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction . In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice.
The orientation of a particular set of sheets is identified by its three Miller indices ( h , k , l ), and their spacing is denoted by d .
William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes combine constructively ( constructive interference ) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ (Fig. 7).
The desirable characteristics of a diffraction crystal are: [ citation needed ]
Crystals with simple structures tend to give the best diffraction performance. Crystals containing heavy atoms can diffract well, but also fluoresce more in the higher energy region, causing interference. Crystals that are water-soluble, volatile or organic tend to give poor stability.
Commonly used crystal materials include LiF ( lithium fluoride ), ADP ( ammonium dihydrogen phosphate ), Ge ( germanium ), Si ( silicon ), graphite , InSb ( indium antimonide ), PE ( tetrakis -(hydroxymethyl)-methane, also known as pentaerythritol ), KAP ( potassium hydrogen phthalate ), RbAP (rubidium hydrogen phthalate) and TlAP (thallium(I) hydrogen phthalate). In addition, there is an increasing use of "layered synthetic microstructures" (LSMs), which are "sandwich" structured materials comprising successive thick layers of low atomic number matrix, and monatomic layers of a heavy element. These can in principle be custom-manufactured to diffract any desired long wavelength, and are used extensively for elements in the range Li to Mg.
In scientific methods that use X-ray, neutron or electron diffraction, the aforementioned diffraction planes can be doubled to display higher-order reflections. The given planes, resulting from the Miller indices, can be calculated for a single crystal; the resulting values for h , k and l are then called Laue indices . A single crystal can thus be used in many reflection configurations, each selecting a different energy range. The germanium (Ge111) crystal, for example, can also be used as Ge333, Ge444 and more. For that reason, the corresponding indices used for a particular experimental setup are always noted after the crystal material (e.g. Ge111, Ge444).
Note that the Ge222 configuration is forbidden by the diffraction selection rules, which state that allowed reflections must have Miller indices that are either all odd, or all even with a sum h + k + l equal to 4 n {\displaystyle 4n} , where n {\displaystyle n} is an integer.
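A short sketch of this selection rule for diamond-structure crystals such as Ge; the function simply encodes the all-odd / all-even-with-sum-4n condition stated above, and the examples reproduce the Ge configurations just discussed.

```python
def allowed_diamond_reflection(h, k, l):
    """Selection rule for diamond-structure crystals such as Ge or Si:
    (h, k, l) all odd, or all even with h + k + l divisible by 4."""
    parities = {h % 2, k % 2, l % 2}
    if parities == {1}:          # all odd
        return True
    if parities == {0}:          # all even
        return (h + k + l) % 4 == 0
    return False                 # mixed parity: forbidden

for hkl in [(1, 1, 1), (2, 2, 2), (3, 3, 3), (4, 4, 4)]:
    print(hkl, allowed_diamond_reflection(*hkl))
# (1,1,1) True, (2,2,2) False, (3,3,3) True, (4,4,4) True
```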
The spectral lines used for elemental analysis of chemicals are selected on the basis of intensity, accessibility by the instrument, and lack of line overlaps. Typical lines used, and their wavelengths, are as follows:
Other lines are often used, depending on the type of sample and equipment available.
X-ray diffraction (XRD) is still the most widely used method for structural analysis of chemical compounds. Yet, as the relationship between K β {\displaystyle K_{\beta }} -line spectra and the local chemical environment of the ionized metal atom is understood in increasing detail, measurements of the so-called valence-to-core (V2C) energy region become increasingly viable.
Scientists have noted that after ionization of a 3d transition-metal atom, the K β {\displaystyle K_{\beta }} -line intensities and energies shift with the oxidation state of the metal and with the species of ligand(s). The spin states in a compound also tend to affect this kind of measurement. [ 9 ]
This means that, by careful study of these spectral lines, one can obtain several crucial pieces of information from a sample, especially if there are reference compounds that have been studied in detail and can be used to identify differences. The information collected from this kind of measurement includes:
These measurements are mostly done at synchrotron facilities, although a number of so-called "in-lab"-spectrometers have been developed and used for pre-beamtime (time at a synchrotron) measurements. [ 10 ] [ 11 ]
Detectors used for wavelength dispersive spectrometry need to have high pulse processing speeds in order to cope with the very high photon count rates that can be obtained. In addition, they need sufficient energy resolution to allow filtering-out of background noise and spurious photons from the primary beam or from crystal fluorescence. There are four common types of detector:
Gas flow proportional counters are used mainly for detection of longer wavelengths. The gas flows through the counter continuously. Where there are multiple detectors, the gas is passed through them in series, then led to waste. The gas is usually 90% argon, 10% methane ("P10"), although the argon may be replaced with neon or helium where very long wavelengths (over 5 nm) are to be detected. The argon is ionised by incoming X-ray photons, and the electric field multiplies this charge into a measurable pulse. The methane suppresses the formation of fluorescent photons caused by recombination of the argon ions with stray electrons. The anode wire is typically tungsten or nichrome of 20–60 μm diameter. Since the pulse strength obtained is essentially proportional to the ratio of the detector chamber diameter to the wire diameter, a fine wire is needed, but it must also be strong enough to be maintained under tension so that it remains precisely straight and concentric with the detector. The window needs to be conductive, thin enough to transmit the X-rays effectively, but thick and strong enough to minimize diffusion of the detector gas into the high vacuum of the monochromator chamber. Materials often used are beryllium metal, aluminised PET film and aluminised polypropylene . Ultra-thin windows (down to 1 μm) for use with low-penetration long wavelengths are very expensive. The pulses are sorted electronically by "pulse height selection" in order to isolate those pulses deriving from the secondary X-ray photons being counted.
Sealed gas detectors are similar to the gas flow proportional counter, except that the gas does not flow through it. The gas is usually krypton or xenon at a few atmospheres pressure. They are applied usually to wavelengths in the 0.15–0.6 nm range. They are applicable in principle to longer wavelengths, but are limited by the problem of manufacturing a thin window capable of withstanding the high pressure difference.
Scintillation counters consist of a scintillating crystal (typically of sodium iodide doped with thallium) attached to a photomultiplier. The crystal produces a group of scintillations for each photon absorbed, the number being proportional to the photon energy. This translates into a pulse from the photomultiplier of voltage proportional to the photon energy. The crystal must be protected with a relatively thick aluminium/beryllium foil window, which limits the use of the detector to wavelengths below 0.25 nm. Scintillation counters are often connected in series with a gas flow proportional counter: the latter is provided with an outlet window opposite the inlet, to which the scintillation counter is attached. This arrangement is particularly used in sequential spectrometers.
Semiconductor detectors can be used in theory, and their applications are increasing as their technology improves, but historically their use for WDX has been restricted by their slow response (see EDX).
At first sight, the translation of X-ray photon count-rates into elemental concentrations would appear to be straightforward: WDX separates the X-ray lines efficiently, and the rate of generation of secondary photons is proportional to the element concentration. However, the number of photons leaving the sample is also affected by the physical properties of the sample: so-called " matrix effects ". These fall broadly into three categories:
All elements absorb X-rays to some extent. Each element has a characteristic absorption spectrum which consists of a "saw-tooth" succession of fringes, each step-change of which has wavelength close to an emission line of the element. Absorption attenuates the secondary X-rays leaving the sample. For example, the mass absorption coefficient of silicon at the wavelength of the aluminium Kα line is 50 m 2 /kg, whereas that of iron is 377 m 2 /kg. This means that fluorescent X-rays generated by a given concentration of aluminium in a matrix of iron are absorbed about seven times more (that is 377/50) compared with the fluorescent X-rays generated by the same concentration of aluminium, but in a silicon matrix. That would lead to about one seventh of the count rate, once the X-rays are detected. Fortunately, mass absorption coefficients are well known and can be calculated. However, to calculate the absorption for a multi-element sample, the composition must be known. For analysis of an unknown sample, an iterative procedure is therefore used. To derive the mass absorption accurately, data for the concentration of elements not measured by XRF may be needed, and various strategies are employed to estimate these. As an example, in cement analysis, the concentration of oxygen (which is not measured) is calculated by assuming that all other elements are present as standard oxides.
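A heavily simplified sketch of such an iterative correction follows: assume each element's corrected count rate is proportional to its concentration divided by the matrix absorption at that element's own emission line, then iterate to a self-consistent composition. All coefficients, intensities and the two-element model are invented for illustration, not tabulated values.

```python
# Hypothetical mass absorption of each analyte line by each matrix element.
mu_at_line = {
    "Al": {"Al": 385.0, "Fe": 3500.0},   # absorption of the Al K-alpha line
    "Fe": {"Al": 90.0,  "Fe": 71.0},     # absorption of the Fe K-alpha line
}
intensity = {"Al": 1.0, "Fe": 4.0}       # hypothetical measured count rates

conc = {el: 0.5 for el in intensity}     # initial guess: equal parts
for _ in range(50):                      # fixed-point iteration
    corrected = {
        el: intensity[el] * sum(conc[m] * mu_at_line[el][m] for m in conc)
        for el in intensity
    }
    total = sum(corrected.values())
    conc = {el: v / total for el, v in corrected.items()}

print({el: round(v, 3) for el, v in conc.items()})
```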
Enhancement occurs where the secondary X-rays emitted by a heavier element are sufficiently energetic to stimulate additional secondary emission from a lighter element. This phenomenon can also be modelled, and corrections can be made provided that the full matrix composition can be deduced.
Sample macroscopic effects consist of effects of inhomogeneities of the sample, and unrepresentative conditions at its surface. Samples are ideally homogeneous and isotropic, but they often deviate from this ideal. Mixtures of multiple crystalline components in mineral powders can result in absorption effects that deviate from those calculable from theory. When a powder is pressed into a tablet, the finer minerals concentrate at the surface. Spherical grains tend to migrate to the surface more than do angular grains. In machined metals, the softer components of an alloy tend to smear across the surface. Considerable care and ingenuity are required to minimize these effects. Because they are artifacts of the method of sample preparation, these effects can not be compensated by theoretical corrections, and must be "calibrated in". This means that the calibration materials and the unknowns must be compositionally and mechanically similar, and a given calibration is applicable only to a limited range of materials. Glasses most closely approach the ideal of homogeneity and isotropy, and for accurate work, minerals are usually prepared by dissolving them in a borate glass, and casting them into a flat disc or "bead". Prepared in this form, a virtually universal calibration is applicable.
Further corrections that are often employed include background correction and line overlap correction. The background signal in an XRF spectrum derives primarily from scattering of primary beam photons by the sample surface. Scattering varies with the sample mass absorption, being greatest when mean atomic number is low. When measuring trace amounts of an element, or when measuring on a variable light matrix, background correction becomes necessary. This is really only feasible on a sequential spectrometer. Line overlap is a common problem, bearing in mind that the spectrum of a complex mineral can contain several hundred measurable lines. Sometimes it can be overcome by measuring a less-intense, but overlap-free line, but in certain instances a correction is inevitable. For instance, the Kα is the only usable line for measuring sodium, and it overlaps the zinc Lβ (L 2 -M 4 ) line. Thus zinc, if present, must be analysed in order to properly correct the sodium value.
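A minimal sketch of such an overlap correction, assuming the zinc contribution under the sodium K α position scales with the intensity of an overlap-free zinc line; in practice the overlap factor would be determined from zinc-only standards, and all numbers here are illustrative.

```python
# Hypothetical overlap factor: fraction of the clean Zn reference line's
# intensity that appears under the Na K-alpha position as Zn L-beta.
k_zn_on_na = 0.031
na_ka_measured = 1520.0   # counts measured at the Na K-alpha position
zn_reference = 8400.0     # counts of an overlap-free Zn line

na_ka_corrected = na_ka_measured - k_zn_on_na * zn_reference
print(f"Na K-alpha corrected: {na_ka_corrected:.0f} counts")
```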
It is also possible to create a characteristic secondary X-ray emission using other incident radiation to excite the sample:
When radiated by an X-ray beam, the sample also emits other radiations that can be used for analysis:
The de-excitation also ejects Auger electrons , but Auger electron spectroscopy (AES) normally uses an electron beam as the probe.
Confocal microscopy X-ray fluorescence imaging is a newer technique that allows control over depth, in addition to horizontal and vertical aiming, for example, when analysing buried layers in a painting. [ 12 ]
A 2001 review [ 13 ] addresses the application of portable instrumentation from QA / QC perspectives. It provides a guide to developing a set of SOPs when regulatory compliance guidelines are not available.
|
https://en.wikipedia.org/wiki/X-ray_fluorescence
|
X-ray fluorescence holography ( XFH ) is a holography method with atomic resolution based on atomic fluorescence . [ 1 ] It is a relatively new technique that benefits greatly from the coherent high-power X-rays available from synchrotron sources, such as the Japanese SPring-8 facility.
Fluorescent X-rays are scattered by atoms in a sample and provide the object wave, which is referenced to non-scattered X-rays. A holographic pattern is recorded by scanning a detector around the sample, which allows researchers to investigate the local 3D structure around a specific element in a sample. [ 2 ] [ 3 ]
It is useful for investigating the effects of irradiation on high temperature superconductors . [ citation needed ]
One of the criticisms of this method is that it suffers from twin images, as in the inline holography originally proposed by D. Gabor. Barton proposed that reconstructing phased images of the holograms suppresses twin-image effects. [ 4 ]
|
https://en.wikipedia.org/wiki/X-ray_fluorescence_holography
|
In radiography , X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model ( 3D model ) without destroying the original object. It is similar to tomography and X-ray computed tomography . The prefix micro- (symbol: μ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. [ 2 ] These pixel sizes have also resulted in creation of its synonyms high-resolution X-ray tomography , micro-computed tomography ( micro-CT or μCT ), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated, [ 3 ] but in other cases the term high-resolution micro-CT is used. [ 4 ] Virtually all tomography today is computed tomography.
Micro-CT has applications both in medical imaging and in industrial computed tomography . In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals ( in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired.
The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers. [ 5 ]
The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. This geometry is typically used in human computed tomography systems.
The cone-beam system is based on a 2D X-ray detector ( camera ) and an electronic X-ray source, creating projection images that later will be used to reconstruct the image cross-sections.
In an open system, X-rays may escape or leak out, so the operator must stay behind a shield, wear special protective clothing, or operate the scanner from a distance or from a different room. Typical examples of these scanners are the human-sized versions, or scanners designed for large objects.
In a closed system, X-ray shielding is put around the scanner so the operator can put the scanner on a desk or a special table. Although the scanner is shielded, care must be taken and the operator usually carries a dosimeter, since X-rays have a tendency to be absorbed by metal and then re-emitted like an antenna. Although a typical scanner will produce a relatively harmless volume of X-rays, repeated scans in a short timeframe could pose a danger. Digital detectors with small pixel pitches and micro-focus X-ray tubes are usually employed to yield high-resolution images. [ 6 ]
Closed systems tend to become very heavy because lead is used to shield the X-rays. Therefore, the smaller scanners only have a small space for samples.
Because microtomography scanners offer isotropic , or near isotropic, resolution, display of images does not need to be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by 'stacking' the individual slices one on top of the other. The program may then display the volume in an alternative manner. [ 7 ]
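A minimal numpy sketch of this stacking follows, assuming the slices have already been reconstructed (here they are placeholder arrays); because the voxels are near isotropic, orthogonal planes can be read directly out of the stacked volume.

```python
import numpy as np

# Hypothetical reconstructed micro-CT slices: 100 axial images of
# 128x128 pixels (in practice loaded from the scanner's output files).
slices = [np.zeros((128, 128)) for _ in range(100)]

# Build the volume by 'stacking' the slices along a new axial axis.
volume = np.stack(slices, axis=0)   # shape (100, 128, 128)

# Near-isotropic voxels allow direct re-slicing in orthogonal planes:
axial    = volume[50, :, :]         # conventional axial slice
coronal  = volume[:, 64, :]         # re-sliced through the stack
sagittal = volume[:, :, 64]
print(axial.shape, coronal.shape, sagittal.shape)
```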
For X-ray microtomography, powerful open source software is available, such as the ASTRA toolbox. [ 8 ] [ 9 ] The ASTRA Toolbox is a MATLAB and Python toolbox of high-performance GPU primitives for 2D and 3D tomography, developed from 2009 to 2014 by iMinds-Vision Lab , University of Antwerp, and since 2014 jointly by iMinds-VisionLab, UAntwerpen and CWI, Amsterdam. The toolbox supports parallel-, fan-, and cone-beam geometries, with highly flexible source/detector positioning. A large number of reconstruction algorithms are available, including FBP, ART, SIRT, SART and CGLS. [ 10 ]
For 3D visualization, tomviz is a popular open-source tool for tomography. [ citation needed ]
Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set, as produced by a microtomography scanner. Usually these are acquired in a regular pattern, e.g., one slice every millimeter, and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
Where different structures have similar threshold density, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation , a manual or automatic procedure that can remove the unwanted structures from the image. [ 11 ] [ 12 ]
Developmental biology
In geology , X-ray microtomography is used to analyze micropores in reservoir rocks, [ 32 ] [ 33 ] and it can be used in microfacies analysis for sequence stratigraphy. In petroleum exploration it is used to model petroleum flow at the scale of micropores and nanoparticles.
It can give a resolution up to 1 nm.
|
https://en.wikipedia.org/wiki/X-ray_microtomography
|
X-ray photoelectron spectroscopy ( XPS ) is a surface-sensitive quantitative spectroscopic technique that probes the topmost 50–60 atomic layers (5–10 nm) of a surface. It belongs to the family of photoemission spectroscopies in which electron population spectra are obtained by irradiating a material with a beam of X-rays . XPS is based on the photoelectric effect and can identify the elements that exist within a material (elemental composition) or are covering its surface, as well as their chemical state , and the overall electronic structure and density of the electronic states in the material. XPS is a powerful measurement technique because it not only shows what elements are present, but also what other elements they are bonded to. The technique can be used in line profiling of the elemental composition across the surface, or in depth profiling when paired with ion-beam etching . It is often applied to study chemical processes in materials in their as-received state or after cleavage, scraping, exposure to heat, reactive gases or solutions, ultraviolet light, or during ion implantation .
Chemical states are inferred from the measurement of the kinetic energy and the number of the ejected electrons . XPS requires high vacuum (residual gas pressure p ~ 10 −6 Pa) or ultra-high vacuum (p < 10 −7 Pa) conditions, although a current area of development is ambient-pressure XPS, in which samples are analyzed at pressures of a few tens of millibar.
When laboratory X-ray sources are used, XPS easily detects all elements except hydrogen and helium . The detection limit is in the parts per thousand range, but parts per million (ppm) are achievable with long collection times and concentration of the analyte at the top surface.
XPS is routinely used to analyze inorganic compounds , metal alloys , polymers , elements , catalysts , glasses , ceramics , paints , papers , inks , woods , plant parts, make-up , teeth , bones , medical implants , bio-materials, [ 1 ] coatings , viscous oils , glues , ion-modified materials and many others. Somewhat less routinely XPS is used to analyze the hydrated forms of materials such as hydrogels and biological samples by freezing them in their hydrated state in an ultrapure environment, and allowing multilayers of ice to sublime away prior to analysis.
Because the energy of an X-ray with particular wavelength is known (for Al K α X-rays, E photon = 1486.7 eV), and because the emitted electrons' kinetic energies are measured, the electron binding energy of each of the emitted electrons can be determined by using the photoelectric effect equation
E binding = E photon − E kinetic − ϕ ,
where E binding is the binding energy of the electron measured relative to the chemical potential, E photon is the energy of the X-ray photons being used, E kinetic is the kinetic energy of the electron as measured by the instrument and ϕ {\displaystyle \phi } is a work function -like term for the specific surface of the material, which in real measurements includes a small correction by the instrument's work function because of the contact potential . This equation is essentially a conservation of energy equation. The work function-like term ϕ {\displaystyle \phi } can be thought of as an adjustable instrumental correction factor that accounts for the few eV of kinetic energy given up by the photoelectron as it gets emitted from the bulk and absorbed by the detector. It is a constant that rarely needs to be adjusted in practice.
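A numeric sketch of this equation with illustrative values follows: only the Al K α photon energy is a standard constant, while the measured kinetic energy and the work-function-like term are assumed for the example.

```python
# E_binding = E_photon - E_kinetic - phi, for Al K-alpha excitation.
E_photon = 1486.7    # eV, Al K-alpha (standard value)
E_kinetic = 1199.0   # eV, hypothetical measured kinetic energy
phi = 4.5            # eV, hypothetical work-function-like correction

E_binding = E_photon - E_kinetic - phi
print(f"binding energy: {E_binding:.1f} eV")  # ~283.2 eV, near the C 1s region
```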
In 1887, Heinrich Rudolf Hertz discovered but could not explain the photoelectric effect , which was later explained in 1905 by Albert Einstein ( Nobel Prize in Physics 1921). Two years after Einstein's publication, in 1907, P.D. Innes experimented with a Röntgen tube, Helmholtz coils , a magnetic field hemisphere (an electron kinetic energy analyzer), and photographic plates, to record broad bands of emitted electrons as a function of velocity, in effect recording the first XPS spectrum. Other researchers, including Henry Moseley , Rawlinson and Robinson, independently performed various experiments to sort out the details in the broad bands. [ citation needed ] After WWII , Kai Siegbahn and his research group in Uppsala ( Sweden ) developed several significant improvements in the equipment, and in 1954 recorded the first high-energy-resolution XPS spectrum of cleaved sodium chloride (NaCl), revealing the potential of XPS. [ 2 ] A few years later in 1967, Siegbahn published a comprehensive study of XPS, bringing instant recognition of the utility of XPS and also the first hard X-ray photoemission experiments, which he referred to as Electron Spectroscopy for Chemical Analysis (ESCA). [ 3 ] In cooperation with Siegbahn, a small group of engineers (Mike Kelly, Charles Bryson, Lavier Faye, Robert Chaney) at Hewlett-Packard in the US, produced the first commercial monochromatic XPS instrument in 1969. Siegbahn received the Nobel Prize for Physics in 1981, to acknowledge his extensive efforts to develop XPS into a useful analytical tool. [ 4 ] In parallel with Siegbahn's work, David Turner at Imperial College London (and later at Oxford University ) developed ultraviolet photoelectron spectroscopy (UPS) for molecular species using helium lamps. [ 5 ]
A typical XPS spectrum is a plot of the number of electrons detected at a specific binding energy . Each element produces a set of characteristic XPS peaks. These peaks correspond to the electron configuration of the electrons within the atoms, e.g., 1 s , 2 s , 2 p , 3 s , etc. The number of detected electrons in each peak is directly related to the amount of element within the XPS sampling volume. To generate atomic percentage values, each raw XPS signal is corrected by dividing the intensity by a relative sensitivity factor (RSF), and normalized over all of the elements detected. Since hydrogen is not detected, these atomic percentages exclude hydrogen.
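A minimal sketch of this normalization follows; the peak areas are invented, and the relative sensitivity factors are placeholders of the kind found in instrument RSF tables rather than values to rely on.

```python
# Atomic-percentage quantification: divide each raw peak area by its
# relative sensitivity factor (RSF), then normalize over all elements.
raw_area = {"C 1s": 12000.0, "O 1s": 26000.0, "Si 2p": 4000.0}
rsf      = {"C 1s": 1.00,    "O 1s": 2.93,    "Si 2p": 0.82}

corrected = {el: raw_area[el] / rsf[el] for el in raw_area}
total = sum(corrected.values())
atomic_pct = {el: 100.0 * v / total for el, v in corrected.items()}
print({el: round(p, 1) for el, p in atomic_pct.items()})  # hydrogen excluded
```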
XPS is widely used to generate an empirical formula because it readily yields excellent quantitative accuracy from homogeneous solid-state materials. Absolute quantification requires the use of certified (or independently verified) standard samples, and is generally more challenging, and less common. Relative quantification involves comparisons between several samples in a set for which one or more analytes are varied while all other components (the sample matrix) are held constant. Quantitative accuracy depends on several parameters such as: signal-to-noise ratio , peak intensity, accuracy of relative sensitivity factors, correction for electron transmission function, surface volume homogeneity, correction for energy dependence of electron mean free path, and degree of sample degradation due to analysis. Under optimal conditions, the quantitative accuracy of the atomic percent (at%) values calculated from the major XPS peaks is 90-95% for each peak. The quantitative accuracy for the weaker XPS signals, that have peak intensities 10-20% of the strongest signal, are 60-80% of the true value, and depend upon the amount of effort used to improve the signal-to-noise ratio (for example by signal averaging). Quantitative precision (the ability to repeat a measurement and obtain the same result) is an essential consideration for proper reporting of quantitative results.
Detection limits may vary greatly with the cross section of the core state of interest and the background signal level. In general, photoelectron cross sections increase with atomic number. The background increases with the atomic number of the matrix constituents as well as the binding energy, because of secondary emitted electrons. For example, in the case of gold on silicon, where the high cross section Au4f peak is at a higher kinetic energy than the major silicon peaks, it sits on a very low background and detection limits of 1 ppm or better may be achieved with reasonable acquisition times. Conversely, for silicon on gold, where the modest cross section Si2p line sits on the large background below the Au4f lines, detection limits would be much worse for the same acquisition time. Detection limits are often quoted as 0.1–1.0 % atomic percent (0.1% = 1 part per thousand = 1000 ppm ) for practical analyses, but lower limits may be achieved in many circumstances.
Degradation depends on the sensitivity of the material to the wavelength of X-rays used, the total dose of the X-rays, the temperature of the surface and the level of the vacuum. Metals, alloys, ceramics and most glasses are not measurably degraded by either non-monochromatic or monochromatic X-rays. Some, but not all, polymers, catalysts, certain highly oxygenated compounds, various inorganic compounds and fine organics are. Non-monochromatic X-ray sources produce a significant amount of high energy Bremsstrahlung X-rays (1–15 keV of energy) which directly degrade the surface chemistry of various materials. Non-monochromatic X-ray sources also produce a significant amount of heat (100 to 200 °C) on the surface of the sample because the anode that produces the X-rays is typically only 1 to 5 cm (2 in) away from the sample. This level of heat, when combined with the Bremsstrahlung X-rays, acts to increase the amount and rate of degradation for certain materials. Monochromatised X-ray sources, because they are farther away (50–100 cm) from the sample, do not produce noticeable heat effects. In those, a quartz monochromator system diffracts the Bremsstrahlung X-rays out of the X-ray beam, which means the sample is only exposed to one narrow band of X-ray energy. For example, if aluminum K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.43 eV, centered on 1,486.7 eV ( E /Δ E = 3,457). If magnesium K-alpha X-rays are used, the intrinsic energy band has a FWHM of 0.36 eV, centered on 1,253.7 eV ( E /Δ E = 3,483). These are the intrinsic X-ray line widths; the range of energies to which the sample is exposed depends on the quality and optimization of the X-ray monochromator. Because the vacuum removes various gases (e.g., O 2 , CO) and liquids (e.g., water, alcohol, solvents, etc.) that were initially trapped within or on the surface of the sample, the chemistry and morphology of the surface will continue to change until the surface achieves a steady state. This type of degradation is sometimes difficult to detect.
Measured area depends on instrument design. The minimum analysis area ranges from 10 to 200 micrometres. The largest size for a monochromatic beam of X-rays is 1–5 mm. Non-monochromatic beams are 10–50 mm in diameter. Spectroscopic image resolution levels of 200 nm or below have been achieved on the latest imaging XPS instruments using synchrotron radiation as the X-ray source.
Instruments accept small (mm range) and large samples (cm range), e.g. wafers. The limiting factors are the design of the sample holder, the sample transfer, and the size of the vacuum chamber. Large samples are moved laterally in the x and y directions to analyze a wider area. [ citation needed ]
Measurement times typically range from 1–20 minutes for a broad survey scan that measures the amount of all detectable elements, 1–15 minutes for a high-resolution scan that reveals chemical state differences (obtaining a high signal-to-noise ratio for a count-area result often requires multiple sweeps of the region of interest), and 1–4 hours for a depth profile that measures 4–5 elements as a function of etched depth (this process time can vary the most, as many factors play a role). The time to complete a measurement generally depends on the brilliance of the X-ray source. [ 6 ]
XPS detects only electrons that have actually escaped from the sample into the vacuum of the instrument. In order to escape from the sample, a photoelectron must travel through the sample. Photo-emitted electrons can undergo inelastic collisions, recombination, excitation of the sample, recapture or trapping in various excited states within the material, all of which can reduce the number of escaping photoelectrons. These effects appear as an exponential attenuation function as the depth increases, making the signals detected from analytes at the surface much stronger than the signals detected from analytes deeper below the sample surface. Thus, the signal measured by XPS is an exponentially surface-weighted signal, and this fact can be used to estimate analyte depths in layered materials.
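A short sketch of this exponential depth weighting follows: the signal from depth d falls off as exp(−d / (λ cos θ)), with λ the inelastic mean free path and θ the emission angle from the surface normal. The λ value and angle below are illustrative assumptions, not material constants to rely on.

```python
import math

imfp_nm = 2.6     # hypothetical inelastic mean free path, lambda
theta_deg = 0.0   # emission along the surface normal

def relative_signal(depth_nm):
    """Fraction of signal surviving from depth d: exp(-d / (lambda*cos(theta)))."""
    return math.exp(-depth_nm / (imfp_nm * math.cos(math.radians(theta_deg))))

for d in (0, 1, 3, 5, 10):
    print(f"depth {d:>2} nm -> {100 * relative_signal(d):5.1f}% of surface signal")
# ~95% of the detected signal originates within the topmost ~3*lambda.
```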
The ability to produce chemical state information, i.e. the local bonding environment of an atomic species in question from the topmost few nanometers of the sample makes XPS a unique and valuable tool for understanding the chemistry of the surface. The local bonding environment is affected by the formal oxidation state, the identity of its nearest-neighbor atoms, and its bonding hybridization to the nearest-neighbor or next-nearest-neighbor atoms. For example, while the nominal binding energy of the C 1 s electron is 284.6 eV, subtle but reproducible shifts in the actual binding energy, the so-called chemical shift (analogous to NMR spectroscopy ), provide the chemical state information. [ citation needed ]
Chemical-state analysis is widely used for carbon. It reveals the presence or absence of the chemical states of carbon, in approximate order of increasing binding energy, as: carbide (- C 2− ), silane (-Si- C H 3 ), methylene/methyl/hydrocarbon (- C H 2 - C H 2 -, C H 3 -CH 2 -, and - C H= C H-), amine (- C H 2 -NH 2 ), alcohol (- C -OH), ketone (- C =O), organic ester (- C OOR), carbonate (- C O 3 2− ), monofluoro-hydrocarbon (- C FH-CH 2 -), difluoro-hydrocarbon (- C F 2 -CH 2 -), and trifluorocarbon (-CH 2 - C F 3 ), to name but a few. [ citation needed ]
Chemical state analysis of the surface of a silicon wafer reveals chemical shifts due to different formal oxidation states, such as: n-doped silicon and p-doped silicon (metallic silicon), silicon suboxide (Si 2 O), silicon monoxide (SiO), Si 2 O 3 , and silicon dioxide (SiO 2 ). An example of this is seen in the figure "High-resolution spectrum of an oxidized silicon wafer in the energy range of the Si 2 p signal".
The main components of an XPS system are the source of X-rays, an ultra-high vacuum (UHV) chamber with mu-metal magnetic shielding, an electron collection lens, an electron energy analyzer, an electron detector system, a sample introduction chamber, sample mounts, a sample stage with the ability to heat or cool the sample, and a set of stage manipulators.
The most prevalent electron spectrometer for XPS is the hemispherical electron analyzer . It offers high energy resolution and spatial selection of the emitted electrons. Sometimes, however, much simpler electron energy filters, such as cylindrical mirror analyzers, are used, most often for checking the elemental composition of the surface. They represent a trade-off between the need for high count rates and high angular/energy resolution. This type consists of two co-axial cylinders placed in front of the sample, the inner one being held at a positive potential, while the outer cylinder is held at a negative potential. Only the electrons with the right energy can pass through this setup and are detected at the end. The count rates are high but the resolution (both in energy and angle) is poor.
Electrons are detected using electron multipliers : a single channeltron for single energy detection, or arrays of channeltrons and microchannel plates for parallel acquisition. These devices consist of a glass channel with a resistive coating on the inside. A high voltage is applied between the front and the end. An incoming electron is accelerated into the wall, where it removes more electrons, in such a way that an electron avalanche is created, until a measurable current pulse is obtained. [ citation needed ]
In laboratory systems, either 10–30 mm beam diameter non-monochromatic Al K α or Mg K α anode radiation is used, or a focused 20-500 micrometer diameter beam single wavelength Al K α monochromatised radiation. Monochromatic Al K α X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off of a thin disc of natural, crystalline quartz with a <1010> orientation . The resulting wavelength is 8.3386 angstroms (0.83386 nm) corresponding to a 1486.7 eV photon energy. Aluminum K α X-rays have an intrinsic full width at half maximum (FWHM) of 0.43 eV, centered at 1486.7 eV ( E /Δ E = 3457). [ citation needed ] For a well–optimized monochromator, the energy width of the monochromated aluminum K α X-rays is 0.16 eV, but energy broadening in common electron energy analyzers (spectrometers) produces an ultimate energy resolution on the order of FWHM=0.25 eV which is the ultimate energy resolution of most commercial systems. Under practical conditions, high energy-resolution settings produce peak widths (FWHM) between 0.4 and 0.6 eV for various elements and some compounds. For example, in a spectrum obtained for one minute at 20 eV pass energy using monochromated aluminum K α X-rays, the Ag 3 d 5/2 peak for a clean silver film or foil will typically have a FWHM of 0.45 eV. [ citation needed ] Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm) which corresponds to a photon energy of 1253 eV. The energy width of the non-monochromated X-ray is roughly 0.70 eV, which is the ultimate energy resolution of a system using non-monochromatic X-rays. [ citation needed ] Non-monochromatic X-ray sources do not use any crystal to diffract the X-rays allowing all primary X-rays lines and the full range of high-energy Bremsstrahlung X-rays (1–12 keV) to reach the surface. The ultimate energy resolution (FWHM) when using a non-monochromatic Mg K α source is 0.9–1.0 eV, which includes some contribution from spectrometer-induced broadening. [ citation needed ]
In recent decades, a breakthrough has been brought about by the development of large-scale synchrotron radiation facilities. Here, bunches of relativistic electrons kept in orbit inside a storage ring are accelerated through bending magnets or insertion devices like wigglers and undulators to produce a high-brilliance, high-flux photon beam. The beam is orders of magnitude more intense and better collimated than that typically produced by anode-based sources. Synchrotron radiation is also tunable over a wide wavelength range, and can be made polarized in several distinct ways. In this way, the photon energy can be selected to yield optimum photoionization cross-sections for probing a particular core level. The high photon flux, in addition, makes it possible to perform XPS experiments also on low-density atomic species, such as molecular and atomic adsorbates.
One of the synchrotron facilities that allows XPS measurements is the MAX IV synchrotron in Lund, Sweden. The HIPPIE beamline of this facility also makes it possible to perform in operando ambient-pressure X-ray photoelectron spectroscopy (AP-XPS). This technique allows samples to be measured under ambient conditions rather than in vacuum. [ 7 ]
The number of peaks produced by a single element varies from 1 to more than 20. Tables of binding energies that identify the shell and spin-orbit of each peak produced by a given element are included with modern XPS instruments, and can be found in various handbooks and websites. [ 8 ] [ 9 ] Because these experimentally determined energies are characteristic of specific elements, they can be directly used to identify experimentally measured peaks of a material with unknown elemental composition.
Before beginning the process of peak identification, the analyst must determine if the binding energies of the unprocessed survey spectrum (0-1400 eV) have or have not been shifted due to a positive or negative surface charge. This is most often done by looking for two peaks that are due to the presence of carbon and oxygen.
Charge referencing is needed when a sample suffers a charge-induced shift of experimental binding energies, in order to obtain meaningful binding energies from both wide-scan, high-sensitivity (low energy resolution) survey spectra (0-1100 eV) and narrow-scan, chemical-state (high energy resolution) spectra. Charge-induced shifting is normally due to a modest excess of low-voltage (-1 to -20 eV) electrons attached to the surface, or a modest shortage of electrons (+1 to +15 eV) within the top 1-12 nm of the sample caused by the loss of photo-emitted electrons. If, by chance, the charging of the surface is excessively positive, then the spectrum might appear as a series of rolling hills rather than sharp peaks.
Charge referencing is performed by adding a Charge Correction to each of the experimentally measured peaks. Since various hydrocarbon species appear on all air-exposed surfaces, the binding energy of the hydrocarbon C (1s) XPS peak is used for the charge correction of all energies obtained from non-conductive samples or conductors that have been deliberately insulated from the sample mount. The peak is normally found between 284.5 eV and 285.5 eV. The 284.8 eV binding energy is routinely used as the reference energy for charge referencing insulators, so that the charge correction is the difference between 284.8 eV and the experimentally measured C (1s) peak position.
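The charge-correction arithmetic described above is simple enough to state as a short sketch; the peak names and energies below are illustrative values, not reference data:

C1S_REFERENCE_EV = 284.8  # adventitious carbon C (1s) reference energy

def charge_correct(peaks_ev, measured_c1s_ev):
    # Shift every measured binding energy by (284.8 eV - measured C 1s position).
    correction = C1S_REFERENCE_EV - measured_c1s_ev
    return {name: energy + correction for name, energy in peaks_ev.items()}

# An insulating sample whose C 1s appeared at 286.1 eV: surface charging made
# every peak read 1.3 eV too high, so the correction subtracts 1.3 eV.
measured = {"C 1s": 286.1, "O 1s": 533.3, "Si 2p": 104.8}
print(charge_correct(measured, measured["C 1s"]))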
Conductive materials and most native oxides of conductors should never need charge referencing; conductive materials should only be charge referenced if the topmost layer of the sample has a thick non-conductive film. If needed, the charging effect can also be compensated by supplying suitable low-energy charges to the surface, for example using a low-voltage (1-20 eV) electron beam from an electron flood gun, UV light, a low-voltage argon ion beam combined with a low-voltage electron beam (1-10 eV), aperture masks, or a mesh screen with low-voltage electron beams.
The process of peak-fitting high energy resolution XPS spectra is a mixture of scientific knowledge and experience. The process is affected by instrument design, instrument components, experimental settings and sample variables. Before starting any peak-fit effort, the analyst performing the peak-fit needs to know if the topmost 15 nm of the sample is expected to be a homogeneous material or a mixture of materials. If the top 15 nm is a homogeneous material with only very minor amounts of adventitious carbon and adsorbed gases, then the analyst can use theoretical peak area ratios to enhance the peak-fitting process. Peak-fitting results are affected by overall peak widths (at half maximum, FWHM), possible chemical shifts, peak shapes, instrument design factors and experimental settings, as well as sample properties.
When a photoemission event takes place, the following energy conservation rule holds:
E k i n = h ν − | E b v | {\displaystyle E_{kin}=h\nu -|E_{b}^{v}|}
where h ν {\displaystyle h\nu } is the photon energy, | E b v | {\displaystyle |E_{b}^{v}|} is the electron binding energy (with respect to the vacuum level) prior to ionization, and E k i n {\displaystyle E_{kin}} is the kinetic energy of the photoelectron. If reference is taken with respect to the Fermi level (as it is typically done in photoelectron spectroscopy ) | E b v | {\displaystyle |E_{b}^{v}|} must be replaced by the sum of the binding energy relative to the Fermi level, | E b F | {\displaystyle |E_{b}^{F}|} , and the sample work function, Φ 0 {\displaystyle \Phi _{0}} .
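To make the bookkeeping concrete, the Fermi-level-referenced form E_kin = hν − |E_b^F| − Φ_0 can be evaluated numerically; the binding energy and work function below are round example values, not measurements:

def kinetic_energy_ev(photon_ev, binding_fermi_ev, work_function_ev):
    # E_kin = h*nu - |E_b^F| - Phi_0 (all quantities in eV)
    return photon_ev - binding_fermi_ev - work_function_ev

# Al K-alpha photons (1486.7 eV) ejecting electrons from a core level bound
# at ~368 eV below the Fermi level, with a ~4.5 eV work function (examples):
print(kinetic_energy_ev(1486.7, 368.3, 4.5))  # ~1113.9 eV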
From the theoretical point of view, the photoemission process from a solid can be described with a semiclassical approach, where the electromagnetic field is still treated classically, while a quantum-mechanical description is used for matter.
The one-particle Hamiltonian for an electron subjected to an electromagnetic field is given by (in SI units ):
i ℏ ∂ ψ ∂ t = [ ( p ^ − e A ) 2 2 m + V ] ψ {\displaystyle i\hbar {\frac {\partial \psi }{\partial t}}=\left[{\frac {({\hat {\mathbf {p} }}-e\mathbf {A} )^{2}}{2m}}+V\right]\psi }
where ψ {\displaystyle \psi } is the electron wave function, e , m {\displaystyle e,m} are the electron charge and mass, A {\displaystyle \mathbf {A} } is the vector potential of the electromagnetic field, and V {\displaystyle V} is the unperturbed potential of the solid.
In the Coulomb gauge ( ∇ ⋅ A = 0 {\displaystyle \nabla \cdot \mathbf {A} =0} ), the vector potential commutes with the momentum operator
( [ p ^ , A ^ ] = 0 {\displaystyle [\mathbf {\hat {p}} ,\mathbf {\hat {A}} ]=0} ), so that the expression in brackets in the Hamiltonian simplifies to:
( p ^ − e A ) 2 = p ^ 2 − 2 e A ⋅ p ^ + e 2 A 2 {\displaystyle ({\hat {\mathbf {p} }}-e\mathbf {A} )^{2}={\hat {\mathbf {p} }}^{2}-2e\mathbf {A} \cdot {\hat {\mathbf {p} }}+e^{2}\mathbf {A} ^{2}}
By neglecting the ∇ ⋅ A {\displaystyle \nabla \cdot \mathbf {A} } term in the Hamiltonian, we are disregarding possible photocurrent contributions. [ 10 ] Such effects are generally negligible in the bulk, but may become important at the surface.
The quadratic term in A {\displaystyle \mathbf {A} } can instead be safely neglected, since its contribution in a typical photoemission experiment is about one order of magnitude smaller than that of the first term.
In the first-order perturbation approach, the one-electron Hamiltonian can be split into two terms, an unperturbed Hamiltonian H ^ 0 {\displaystyle {\hat {H}}_{0}} plus an interaction Hamiltonian H ^ ′ {\displaystyle {\hat {H}}'} , which describes the effects of the electromagnetic field:
H ^ 0 = p ^ 2 2 m + V , H ^ ′ = − e m A ⋅ p ^ {\displaystyle {\hat {H}}_{0}={\frac {{\hat {\mathbf {p} }}^{2}}{2m}}+V,\qquad {\hat {H}}'=-{\frac {e}{m}}\mathbf {A} \cdot {\hat {\mathbf {p} }}}
In time-dependent perturbation theory, for a harmonic or constant perturbation, the transition rate between the initial state ψ i {\displaystyle \psi _{i}} and the final state ψ f {\displaystyle \psi _{f}} is expressed by Fermi's Golden Rule :
Γ i → f = 2 π ℏ | ⟨ ψ f | H ^ ′ | ψ i ⟩ | 2 δ ( E f − E i − h ν ) {\displaystyle \Gamma _{i\to f}={\frac {2\pi }{\hbar }}|\langle \psi _{f}|{\hat {H}}'|\psi _{i}\rangle |^{2}\delta (E_{f}-E_{i}-h\nu )}
where E i {\displaystyle E_{i}} and E f {\displaystyle E_{f}} are the eigenvalues of the unperturbed Hamiltonian in the initial and final state, respectively, and h ν {\displaystyle h\nu } is the photon energy. Fermi's Golden Rule uses the approximation that the perturbation acts on the system for an infinite time. This approximation is valid when the time that the perturbation acts on the system is much larger than the time needed for the transition. It should be understood that this equation needs to be integrated with the density of states ρ ( E ) {\displaystyle \rho (E)} , which gives: [ 11 ]
Γ i → f = 2 π ℏ | ⟨ ψ f | H ^ ′ | ψ i ⟩ | 2 ρ ( E f ) {\displaystyle \Gamma _{i\to f}={\frac {2\pi }{\hbar }}|\langle \psi _{f}|{\hat {H}}'|\psi _{i}\rangle |^{2}\rho (E_{f})}
In a real photoemission experiment the ground state core electron binding energy cannot be directly probed, because the measured one incorporates both initial state and final state effects, and the spectral linewidth is broadened owing to the finite core-hole lifetime ( τ {\displaystyle \tau } ).
Assuming an exponential decay probability for the core hole in the time domain ( ∝ exp − t / τ {\displaystyle \propto \exp {-t/\tau }} ), the spectral function will have a Lorentzian shape, with a FWHM (Full Width at Half Maximum) Γ {\displaystyle \Gamma } given by:
Γ = ℏ τ {\displaystyle \Gamma ={\frac {\hbar }{\tau }}}
From the theory of Fourier transforms, Γ {\displaystyle \Gamma } and τ {\displaystyle \tau } are linked by the indeterminacy relation:
Γ τ ≥ ℏ {\displaystyle \Gamma \tau \geq \hbar }
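Taking the equality Γ = ℏ/τ from the Lorentzian case above, the line width implied by a given core-hole lifetime is easy to tabulate; the lifetimes below are illustrative orders of magnitude only:

HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def lorentzian_fwhm_ev(lifetime_s):
    # FWHM of the Lorentzian line shape for an exponentially decaying core hole.
    return HBAR_EV_S / lifetime_s

for tau in (1e-14, 1e-15, 1e-16):  # 10 fs, 1 fs, 0.1 fs
    print(tau, lorentzian_fwhm_ev(tau))  # ~0.066 eV, ~0.66 eV, ~6.6 eV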
The photoemission event leaves the atom in a highly excited core ionized state, from which it can decay radiatively (fluorescence) or non-radiatively (typically by Auger decay).
Besides Lorentzian broadening, photoemission spectra are also affected by a Gaussian broadening, whose contribution can be expressed by
G ( E ) = 1 σ 2 π exp ( − E 2 2 σ 2 ) {\displaystyle G(E)={\frac {1}{\sigma {\sqrt {2\pi }}}}\exp \left(-{\frac {E^{2}}{2\sigma ^{2}}}\right)}
The Gaussian linewidth σ {\displaystyle \sigma } of the spectra depends on the experimental energy resolution, vibrational and inhomogeneous broadening.
The first effect is caused by the imperfect monochromaticity of the photon beam, which results in a finite bandwidth, and by the limited resolving power of the analyzer. The vibrational component is produced by the excitation of low energy vibrational modes both in the initial and in the final state. Finally, inhomogeneous broadening can originate from the presence of unresolved core level components in the spectrum.
In a solid, inelastic scattering events also contribute to the photoemission process, generating electron-hole pairs which show up as a tail on the high energy side of the main photoemission peak. Due to scattering, the electron intensity can be written in the Beer–Lambert form
I ( z ) = I 0 e − z / λ {\displaystyle I(z)=I_{0}e^{-z/\lambda }}
where λ {\displaystyle \lambda } is the electronic inelastic mean free path ( IMFP ) and z {\displaystyle z} is the distance to the sample surface. The IMFP generally depends rather weakly on the material, but strongly on the photoelectron kinetic energy E kin {\displaystyle E_{\text{kin}}} . Quantitatively, the IMFP can be fitted by [ 12 ] [ 13 ] [ 14 ]
λ = 538 a E kin 2 + 0.41 a 3 / 2 E kin 1 / 2 {\displaystyle \lambda ={\frac {538\,a}{E_{\text{kin}}^{2}}}+0.41\,a^{3/2}E_{\text{kin}}^{1/2}}
where a {\displaystyle a} is the thickness of one monolayer in nanometers ( λ {\displaystyle \lambda } is in nanometers and E kin {\displaystyle E_{\text{kin}}} in eV), as given by the number density ρ {\displaystyle \rho } as a = ρ − 1 / 3 {\displaystyle a=\rho ^{-1/3}} . The above formula is a fit to a compilation of experimental data for pure elements. For inorganic and organic compounds, its numerical factors are different; see the paper by Seah and Dench (1979).
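Under the assumption that the fit takes the widely quoted Seah and Dench elemental form given above, the IMFP and the resulting Beer–Lambert attenuation can be sketched as follows; the monolayer thickness and sampling depth are illustrative values:

import math

def imfp_nm(e_kin_ev, monolayer_nm):
    # Seah and Dench (1979) elemental fit: lambda = 538*a/E^2 + 0.41*a^(3/2)*sqrt(E),
    # with lambda and a in nm and E in eV (assumed form of the fit cited above).
    a = monolayer_nm
    return 538.0 * a / e_kin_ev**2 + 0.41 * a**1.5 * math.sqrt(e_kin_ev)

def attenuation(depth_nm, e_kin_ev, monolayer_nm=0.25):
    # Beer-Lambert intensity ratio I(z)/I_0 = exp(-z/lambda).
    return math.exp(-depth_nm / imfp_nm(e_kin_ev, monolayer_nm))

print(imfp_nm(1000.0, 0.25))     # ~1.6 nm at 1 keV kinetic energy
print(attenuation(5.0, 1000.0))  # ~5% of the signal survives from 5 nm depth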
In some cases, energy loss features due to plasmon excitations are also observed. This can either be a final state effect caused by core hole decay, which generates quantized electron wave excitations in the solid ( intrinsic plasmons ), or it can be due to excitations induced by photoelectrons travelling from the emitter to the surface ( extrinsic plasmons ).
Due to the reduced coordination number of first-layer atoms, the plasma frequencies of bulk and surface atoms are related by the following equation:
ω surface = ω bulk 2 {\displaystyle \omega _{\text{surface}}={\frac {\omega _{\text{bulk}}}{\sqrt {2}}}}
so that surface and bulk plasmons can be easily distinguished from each other.
Plasmon states in a solid are typically localized at the surface, and can strongly affect IMFP.
Temperature-dependent atomic lattice vibrations, or phonons , can broaden the core level components and attenuate the interference patterns in an X-ray photoelectron diffraction ( XPD ) experiment. The simplest way to account for vibrational effects is by multiplying the scattered single-photoelectron wave function ϕ j {\displaystyle \phi _{j}} by the Debye–Waller factor:
W j = exp ( − Δ k j 2 U j 2 ¯ ) {\displaystyle W_{j}=\exp(-\Delta k_{j}^{2}{\bar {U_{j}^{2}}})}
where Δ k j 2 {\displaystyle \Delta k_{j}^{2}} is the squared magnitude of the wave vector variation caused by scattering,
and U j 2 ¯ {\displaystyle {\bar {U_{j}^{2}}}} is the temperature-dependent one-dimensional vibrational mean squared displacement of the j t h {\displaystyle j^{th}} emitter. In the Debye model, the mean squared displacement is calculated in terms of the Debye temperature, Θ D {\displaystyle \Theta _{D}} , as:
|
https://en.wikipedia.org/wiki/X-ray_photoelectron_spectroscopy
|
X-ray scattering techniques are a family of analytical techniques which reveal information about the crystal structure , chemical composition, and physical properties of materials and thin films. These techniques are based on observing the scattered intensity of an X-ray beam hitting a sample as a function of incident and scattered angle, polarization, and wavelength or energy.
Note that X-ray diffraction is sometimes considered a sub-set of X-ray scattering, where the scattering is elastic and the scattering object is crystalline, so that the resulting pattern contains sharp spots analyzed by X-ray crystallography . However, scattering and diffraction are related general phenomena and the distinction has not always existed. Thus, Guinier 's classic text [ 1 ] from 1963 is titled "X-ray Diffraction in Crystals, Imperfect Crystals and Amorphous Bodies", so 'diffraction' was clearly not restricted to crystals at that time.
In inelastic X-ray scattering (IXS), the energy and angle of inelastically scattered X-rays are monitored, giving the dynamic structure factor S ( q , ω ) {\displaystyle S(\mathbf {q} ,\omega )} . From this, many properties of materials can be obtained, the specific property depending on the scale of the energy transfer. The table below, listing techniques, is adapted from [ 2 ]. Inelastically scattered X-rays have intermediate phases and so in principle are not useful for X-ray crystallography . In practice, X-rays with small energy transfers are included with the diffraction spots due to elastic scattering, and X-rays with large energy transfers contribute to the background noise in the diffraction pattern.
|
https://en.wikipedia.org/wiki/X-ray_scattering_techniques
|
The X-ray standing wave (XSW) technique can be used to study the structure of surfaces and interfaces with high spatial resolution and chemical selectivity. Pioneered by B.W. Batterman in the 1960s, [ 1 ] the availability of synchrotron light has stimulated the application of this interferometric technique to a wide range of problems in surface science. [ 2 ] [ 3 ]
An X-ray standing wave (XSW) field is created by interference between an X-ray beam impinging on a sample and a reflected beam. The reflection may be generated at the Bragg condition for a crystal lattice or an engineered multilayer superlattice ; in these cases, the period of the XSW equals the periodicity of the reflecting planes. X-ray reflectivity from a mirror surface at small incidence angles may also be used to generate long-period XSWs. [ 4 ]
The spatial modulation of the XSW field, described by the dynamical theory of X-ray diffraction , undergoes a pronounced change when the sample is scanned through the Bragg condition. Due to a relative phase variation between the incoming and reflected beams, the nodal planes of the XSW field shift by half the XSW period. [ 5 ] Depending on the position of the atoms within this wave field, the measured element-specific absorption of X-rays varies in a characteristic way. Therefore, measurement of the absorption (via X-ray fluorescence or photoelectron yield) can reveal the position of the atoms relative to the reflecting planes. The absorbing atoms can be thought of as "detecting" the phase of the XSW; thus, this method overcomes the phase problem of X-ray crystallography.
For quantitative analysis, the normalized fluorescence or photoelectron yield Y p {\displaystyle Y_{p}} is described by [ 2 ] [ 3 ]
Y p ( Ω ) = 1 + R + 2 C R f H cos ( ν − 2 π P H ) {\displaystyle Y_{p}(\Omega )=1+R+2C{\sqrt {R}}f_{H}\cos(\nu -2\pi P_{H})} ,
where R {\displaystyle R} is the reflectivity and ν {\displaystyle \nu } is the relative phase of the interfering beams. The characteristic shape of Y p {\displaystyle Y_{p}} can be used to derive precise structural information about the surface atoms because the two parameters f H {\displaystyle f_{H}} (coherent fraction) and P H {\displaystyle P_{H}} (coherent position) are directly related to the Fourier representation of the atomic distribution function. Therefore, with a sufficiently large number of Fourier components being measured, XSW data can be used to establish the distribution of the different atoms in the unit cell (XSW imaging). [ 6 ]
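A schematic evaluation of the yield formula makes the roles of the coherent fraction and coherent position visible. In the sketch below, the reflectivity profile and the phase sweep (ν running from π to 0 through the Bragg condition) are idealized stand-ins rather than dynamical-theory results, and the values of f_H and P_H are arbitrary examples:

import numpy as np

def xsw_yield(R, nu, f_H, P_H, C=1.0):
    # Y_p = 1 + R + 2*C*sqrt(R)*f_H*cos(nu - 2*pi*P_H)
    return 1.0 + R + 2.0 * C * np.sqrt(R) * f_H * np.cos(nu - 2.0 * np.pi * P_H)

x = np.linspace(-2.0, 2.0, 9)                  # schematic offset through the Bragg peak
R = 0.9 * np.exp(-x**2)                        # schematic reflectivity curve
nu = np.pi * (1.0 - (np.tanh(2 * x) + 1) / 2)  # phase decreasing from pi to 0
print(xsw_yield(R, nu, f_H=0.8, P_H=0.1))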
XSW measurements of single crystal surfaces are performed on a diffractometer . The crystal is rocked through a Bragg diffraction condition, and the reflectivity and XSW yield are simultaneously measured. XSW yield is usually detected as X-ray fluorescence (XRF). XRF detection enables in situ measurements of interfaces between a surface and gas or liquid environments, since hard X-rays can penetrate these media. While XRF gives an element-specific XSW yield, it is not sensitive to the chemical state of the absorbing atom. Chemical state sensitivity is achieved using photoelectron detection, which requires ultra-high vacuum instrumentation.
Measurements of atomic positions at or near single crystal surfaces require substrates of very high crystal quality. The intrinsic width of a Bragg reflection, as calculated by dynamical diffraction theory, is extremely small (on the order of 0.001° under conventional X-ray diffraction conditions). Crystal defects such as mosaicity can substantially broaden the measured reflectivity, which obscures the modulations in the XSW yield needed to locate the absorbing atom. For defect-rich substrates such as metal single crystals, a normal-incidence or back-reflection geometry is used. In this geometry, the intrinsic width of the Bragg reflection is maximized. Instead of rocking the crystal in space, the energy of the incident beam is tuned through the Bragg condition. Since this geometry requires soft incident X-rays, this geometry typically uses XPS detection of the XSW yield.
Applications which require ultra-high vacuum conditions:
Applications which do not require ultra-high vacuum conditions:
Zegenhagen, Jörg; Kazimirov, Alexander (2013). The X-Ray Standing Wave Technique . World Scientific . doi : 10.1142/6666 . ISBN 978-981-2779-00-7 .
|
https://en.wikipedia.org/wiki/X-ray_standing_waves
|
X-ray emission occurs from many celestial objects. These emissions can follow a pattern , occur intermittently, or appear as a transient astronomical event . In X-ray astronomy , many sources have been discovered by placing an X-ray detector above the Earth 's atmosphere. Often, the first X-ray source discovered in many constellations is an X-ray transient . These objects show changing levels of X-ray emission. NRL astronomer Dr. Joseph Lazio stated: [ 1 ] " ... the sky is known to be full of transient objects emitting at X- and gamma-ray wavelengths, ...". There are a growing number of recurrent X-ray transients. In the sense of traveling as a transient, the only stellar X-ray source that does not belong to a constellation is the Sun . As seen from Earth, the Sun moves from west to east along the ecliptic , passing over the course of one year through the twelve constellations of the Zodiac , and Ophiuchus .
SCP 06F6 is (or was) an astronomical object of unknown type, discovered on February 21, 2006, in the constellation Boötes [ 2 ] during a survey of galaxy cluster CL 1432.5+3332.8 with the Hubble Space Telescope 's Advanced Camera for Surveys Wide Field Channel. [ 3 ]
The European X-ray satellite XMM Newton made an observation in early August 2006 which appears to show an X-ray glow around SCP 06F6 , [ 4 ] two orders of magnitude more luminous than that of supernovae. [ 5 ]
Most astronomical X-ray transient sources have simple and consistent time structures; typically a rapid brightening followed by gradual fading, as in a nova or supernova .
GRO J0422+32 [ 6 ] is an X-ray nova and black hole candidate that was discovered by the BATSE instrument on the Compton Gamma Ray Observatory satellite on Aug 5 1992. [ 7 ] [ 8 ] During the outburst, it was observed to be stronger than the Crab Nebula gamma-ray source out to photon energies of about 500 keV . [ 9 ]
XTE J1650-500 is a transient binary X-ray source located in the constellation Ara . The binary period is 0.32 d. [ 10 ]
" Soft X-ray transients " are composed of some type of compact object (probably a neutron star) and some type of "normal", low-mass star (i.e. a star with a mass of some fraction of the Sun's mass). These objects show changing levels of low-energy, or "soft", X-ray emission, probably produced somehow by variable transfer of mass from the normal star to the compact object. In effect the compact object "gobbles up" the normal star, and the X-ray emission can provide the best view of how this process occurs. [ 11 ]
Soft X-ray transients Cen X-4 and Aql X-1 were discovered by Hakucho , Japan 's first X-ray astronomy satellite .
X-ray bursters are one class of X-ray binary stars exhibiting periodic and rapid increases in luminosity (typically a factor of 10 or greater) peaked in the X-ray regime of the electromagnetic spectrum . These astrophysical systems are composed of an accreting compact object , typically a neutron star or occasionally a black hole , and a companion 'donor' star; the mass of the donor star is used to categorize the system as either a high mass (above 10 solar masses ) or low mass (less than 1 solar mass) X-ray binary, abbreviated as HMXB and LMXB, respectively. X-ray bursters differ observationally from other X-ray transient sources (such as X-ray pulsars and soft X-ray transients ), showing a sharp rise time (1 – 10 seconds) followed by spectral softening (a property of cooling black bodies ). Individual bursts are characterized by an integrated flux of 10^39–10^40 ergs. [ 12 ]
A gamma-ray burst (GRB) is a highly luminous flash of gamma rays — the most energetic form of electromagnetic radiation . GRB 970228 was a GRB detected on Feb 28 1997 at 02:58 UTC . Prior to this event, GRBs had only been observed at gamma wavelengths. For several years physicists had expected these bursts to be followed by a longer-lived afterglow at longer wavelengths, such as radio waves , x-rays , and even visible light . This was the first burst for which such an afterglow was observed. [ 13 ]
A transient X-ray source was detected which faded with a power-law slope in the days following the burst. This X-ray afterglow was the first GRB afterglow ever detected. [ 14 ]
For some types of X-ray pulsars , the companion star is a Be star that rotates very rapidly and apparently sheds a disk of gas around its equator. The orbits of the neutron star with these companions are usually large and very elliptical in shape. When the neutron star passes nearby or through the Be circumstellar disk, it will capture material and temporarily become an X-ray pulsar. The circumstellar disk around the Be star expands and contracts for unknown reasons, so these are transient X-ray pulsars that are observed only intermittently, often with months to years between episodes of observable X-ray pulsation.
SAX J1808.4-3658 is a transient, intermittently accreting millisecond X-ray pulsar. In addition to coherent X-ray pulsations, X-ray burst oscillations and quasi-periodic oscillations have been seen from SAX J1808.4-3658, making it a Rosetta stone for interpretation of the timing behavior of low-mass X-ray binaries .
There are a growing number of recurrent X-ray transients characterized by short outbursts with very fast rise times (~tens of minutes) and typical durations of a few hours that are associated with OB supergiants , and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). [ 15 ] XTE J1739–302 is one of these. Discovered in 1997, it remained active for only one day, with an X-ray spectrum well fitted by thermal bremsstrahlung (temperature of ~20 keV); because this resembled the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. [ 16 ] A new burst was observed on Apr 8 2008 with Swift . [ 16 ]
The quiet Sun , although less active than active regions, is awash with dynamic processes and transient events (bright points, nanoflares and jets). [ 17 ]
A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. [ 18 ]
The first detection of a coronal mass ejection (CME) as such was made on Dec 1 1971 by R. Tousey of the US Naval Research Laboratory using the 7th Orbiting Solar Observatory ( OSO 7 ). [ 19 ] Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses , are now understood as essentially the same thing.
The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare , in 1859. The flare was observed visually by Richard Christopher Carrington , and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens . The same instrument recorded a crochet , an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays . This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen ) and the recognition of the ionosphere (by Kennelly and Heaviside ).
Unlike Earth's aurorae, which are transient and only occur at times of heightened solar activity, Jupiter 's aurorae are permanent, though their intensity varies from day to day. They consist of three main components: the main ovals, which are bright, narrow (< 1000 km in width) circular features located at approximately 16° from the magnetic poles; [ 20 ] the satellite auroral spots, which correspond to the footprints of the magnetic field lines connecting their ionospheres with the ionosphere of Jupiter, and transient polar emissions situated within the main ovals. [ 20 ] [ 21 ] The auroral emissions were detected in almost all parts of the electromagnetic spectrum from radio waves to X-rays (up to 3 keV).
The X-ray monitor of Solwind , designated NRL-608 or XMON, was a collaboration between the Naval Research Laboratory and Los Alamos National Laboratory . The monitor consisted of 2 collimated argon proportional counters. The instrument bandwidth of 3-10 keV was defined by the detector window absorption (the window was 0.254 mm beryllium) and the upper level discriminator. The active gas volume (P-10 mixture) was 2.54 cm deep, providing good efficiency up to 10 keV. Counts were recorded in 2 energy channels. Slat collimators defined a FOV of 3° x 30° (FWHM) for each detector; the long axes of the FOVs were perpendicular to each other. The long axes were inclined 45 degrees to the scan direction, allowing localization of transient events to about 1 degree.
The PHEBUS experiment recorded high energy transient events in the range 100 keV to 100 MeV. It consisted of two independent detectors and their associated electronics . Each detector consisted of a bismuth germanate (BGO) crystal 78 mm in diameter by 120 mm thick, surrounded by a plastic anti-coincidence jacket. The two detectors were arranged on the spacecraft so as to observe 4 π steradians . The burst mode was triggered when the count rate in the 0.1 to 1.5 MeV energy range exceeded the background level by 8 σ (standard deviations) in either 0.25 or 1.0 seconds. There were 116 channels over the energy range. [ 22 ]
Also on board the Granat International Astrophysical Observatory were four WATCH instruments that could localize bright sources in the 6 to 180 keV range to within 0.5° using a Rotation Modulation Collimator. Taken together, the instruments' three fields of view covered approximately 75% of the sky. The energy resolution was 30% FWHM at 60 keV. During quiet periods, count rates in two energy bands (6 to 15 and 15 to 180 keV) were accumulated for 4, 8, or 16 seconds, depending on onboard computer memory availability. During a burst or transient event, count rates were accumulated with a time resolution of 1 s per 36 s. [ 22 ]
The Compton Gamma Ray Observatory (CGRO) carries the Burst and Transient Source Experiment (BATSE) which detects in the 20 keV to 8 MeV range.
WIND was launched on Nov 1 1994. At first, the satellite had a lunar swingby orbit around the Earth. With the assistance of the Moon's gravitational field, Wind's apogee was kept over the day hemisphere of the Earth and magnetospheric observations were made. Later in the mission, the Wind spacecraft was inserted into a special "halo" orbit in the solar wind upstream from the Earth, about the sunward Sun-Earth equilibrium point (L1). The satellite has a spin period of ~20 seconds, with the spin axis normal to the ecliptic. WIND carries the Transient Gamma-Ray Spectrometer (TGRS), which covers the energy range 15 keV - 10 MeV with an energy resolution of 2.0 keV at 1.0 MeV (E/ΔE = 500).
The third US Small Astronomy Satellite (SAS-3) was launched on May 7, 1975, with 3 major scientific objectives: 1) determine bright X-ray source locations to an accuracy of 15 arcseconds; 2) study selected sources over the energy range 0.1-55 keV; and 3) continuously search the sky for X-ray novae, flares, and other transient phenomena. It was a spinning satellite with pointing capability. SAS 3 was the first to discover X-rays from a highly magnetic white dwarf binary system, AM Her; it also discovered X-rays from Algol and HZ 43, and surveyed the soft X-ray background (0.1-0.28 keV).
Tenma was the second Japanese X-ray astronomy satellite, launched on Feb 20 1983. Tenma carried GSFC detectors which had an improved energy resolution (by a factor of 2) compared to proportional counters and performed the first sensitive measurements of the iron spectral region for many astronomical objects. Energy range: 0.1 keV - 60 keV. Gas scintillation proportional counters: 10 units of 80 cm² each, FOV ~3° (FWHM), 2 - 60 keV. Transient source monitor: 2 - 10 keV.
India 's first dedicated astronomy satellite , scheduled for launch on board the PSLV in mid 2010, [ 23 ] Astrosat will monitor the X-ray sky for new transients, among other scientific focuses.
|
https://en.wikipedia.org/wiki/X-ray_transient
|
X/Open group (also known as the Open Group for Unix Systems [ 1 ] [ 2 ] and incorporated in 1987 as X/Open Company, Ltd. [ 3 ] [ 4 ] ) was a consortium founded by several European UNIX systems manufacturers in 1984 [ 3 ] [ 5 ] to identify and promote open standards in the field of information technology . More specifically, the original aim was to define a single specification for operating systems derived from UNIX, to increase the interoperability of applications and reduce the cost of porting software. Its original members were Bull , ICL , Siemens , Olivetti , and Nixdorf —a group sometimes referred to as BISON . [ 6 ] Philips and Ericsson joined in 1985, [ 6 ] at which point the name X/Open was adopted.
The group published its specifications as X/Open Portability Guide , starting with Issue 1 in 1985, and later as X/Open CAE Specification .
In 1987, X/Open was incorporated as X/Open Company, Ltd. [ 3 ] [ 4 ]
By March 1988, X/Open grew to 13 members: AT&T , Digital , Hewlett-Packard , Sun Microsystems , Unisys , NCR , Olivetti, Bull, Ericsson, Nixdorf, Philips, ICL, and Siemens. [ 7 ]
By 1990 the group had expanded to 21 members: [ 8 ] in addition to the original five, Philips and Nokia from Europe; AT&T, Digital, Unisys, Hewlett-Packard, IBM , NCR, Sun, Prime Computer , Apollo Computer from North America; Fujitsu , Hitachi , and NEC from Japan; plus the Open Software Foundation and Unix International .
In October 1993, a planned transfer of the UNIX trademark from Novell to X/Open was announced; [ 9 ] it was finalized in the second quarter of 1994. [ 10 ]
In 1994, X/Open published the Single UNIX Specification , which was drawn from XPG4 Base and other sources. [ 11 ]
In 1996, X/Open merged with the Open Software Foundation to form The Open Group . [ 5 ] [ 3 ]
X/Open was also responsible for the XA protocol for heterogeneous distributed transaction processing, which was released in 1991. [ 12 ]
X/Open published its specifications under the name X/Open Portability Guide (or XPG). Based on the AT&T System V Interface Definition , [ 13 ] the guide has a wider scope than POSIX , which is only concerned with direct operating system interfaces. The guide specifies a Common Application Environment (CAE) intended to allow portability of applications across operating systems. The primary aim was compatibility between different vendors' implementations of UNIX , though some vendors also implemented the standards on non-UNIX platforms.
Issue 1 of the guide covered basic operating system interfaces, the C language, COBOL, the indexed sequential access method (ISAM) and other parts, [ 14 ] and was published in 1985. [ 15 ] Issue 2 followed in 1987, [ 15 ] extending the coverage to include internationalization, terminal interfaces, inter-process communication, and the programming languages C , COBOL , FORTRAN , and Pascal , as well as data access interfaces for SQL and ISAM. [ 16 ] In many cases these were profiles of existing international standards. Issue 3 (XPG3) followed in 1989, [ 15 ] its primary focus being convergence with the POSIX operating system specifications; it added the Window Manager, the Ada language and more. [ 17 ] Issue 4 (XPG4) was published in July 1992. The Single UNIX Specification was based on the XPG4 standard. The XPG3 and XPG4 standards define all aspects of the operating system, programming languages and protocols which compliant systems should have.
Multiple levels of compliance, with corresponding labels, were available depending on the scope of the guide that was covered: Base and Plus applied to systems, while the Component and Application labels were for software components and applications that make use of the portability guide. [ 18 ]
Issue 1 was published as a single publication with multiple parts, ISBN 0-444-87839-4 .
Issue 2 was published in multiple volumes:
Issue 3 was published in multiple volumes:
The XPG4 Base specification includes the following documents:
The above three documents were published not under the label X/Open Portability Guide but rather as CAE Specification . [ 15 ] Nonetheless, the term X/Open Portability Guide, Issue 4 sees some use in reference to their 1992 year of publication. [ 19 ] [ 20 ]
Further X/Open publications under the label X/Open CAE Specification rather than X/Open Portability Guide :
|
https://en.wikipedia.org/wiki/X/Open
|
X10 is a protocol for communication among electronic devices used for home automation ( domotics ). It primarily uses power line wiring for signaling and control, where the signals involve brief radio frequency bursts representing digital information. A wireless radio -based protocol transport is also defined.
X10 was developed in 1975 by Pico Electronics of Glenrothes, Scotland , in order to allow remote control of home devices and appliances. It was the first general purpose home automation network technology and remains the most widely available [ citation needed ] . [ 1 ]
Although a number of higher- bandwidth alternatives exist, X10 remains popular in the home environment, with millions of units in use worldwide and inexpensive availability of new components.
In 1970, a group of engineers started a company in Glenrothes, Scotland called Pico Electronics. [ 2 ] The company developed the first single chip calculator . [ 1 ] When calculator integrated circuit prices started to fall, Pico refocused on commercial products rather than plain ICs.
In 1974, the Pico engineers jointly developed an LP record turntable, the ADC Accutrac 4000, with Birmingham Sound Reproducers , at the time the largest manufacturer of record changers in the world. It could be programmed to play selected tracks, and could be operated by a remote control using ultrasound signals, which sparked the idea of remote control for lights and appliances. By 1975, the X10 project was conceived, so named because it was the tenth project. In 1978, X10 products started to appear in RadioShack and Sears stores. A partnership with BSR was formed under the name X10 Ltd. At that time the system consisted of a 16-channel command console, a lamp module, and an appliance module. Soon after came the wall switch module and the first X10 timer.
In the 1980s, the CP-290 computer interface was released. Software for the interface runs on the Commodore 64 , Apple II , classic Mac OS , MS-DOS , and Microsoft Windows .
In 1985, BSR went out of business, and X10 (USA) Inc. was formed. In the early 1990s, the consumer market was divided into two main categories, the ultra-high-end with a budget at US$ 100,000 and the mass market with budgets at US$2,000 to US$35,000. CEBus (1984) and LonWorks (1991) were attempts to improve reliability and replace X10.
X10 components have been sold under a variety of brand names:
Many of these vendors have since exited this business.
Household electrical wiring, which powers lights and appliances, is used to send digital data between X10 devices. This data is encoded onto a 120 kHz carrier which is transmitted as bursts during the relatively quiet zero crossings of the 50 or 60 Hz alternating current (AC) waveform . One bit is transmitted at each zero crossing. [ 3 ]
The digital data consists of an address and a command sent from a controller to a controlled device. More advanced controllers can also query equally advanced devices to respond with their status. This status may be as simple as "off" or "on", or the current dimming level, or even the temperature or other sensor reading. Devices usually plug into the wall where a lamp, television , or other household appliance plugs in; however some built-in controllers are also available for wall switches and ceiling fixtures.
The relatively high-frequency carrier wave carrying the signal cannot pass through a power transformer or across the phases of a multiphase system . For split-phase systems, the signal can be coupled from leg to leg using a passive capacitor , but for three-phase systems, or where the capacitor provides insufficient coupling , an active X10 repeater can be used. To allow signals to be coupled across phases and still match each phase's zero crossing point, each bit is transmitted three times in each half cycle, offset by 1/6 cycle.
It may also be desirable to block X10 signals from leaving the local area so, for example, the X10 controls in one house do not interfere with the X10 controls in a neighboring house. In this situation, inductive filters can be used to attenuate the X10 signals coming into or going out of the local area.
Whether using power line or radio communications, packets transmitted using the X10 control protocol consist of a four bit house code followed by one or more four bit unit codes , finally followed by a four bit command. For the convenience of users configuring a system, the four bit house code is selected as a letter from A through P while the four bit unit code is a number 1 through 16.
When the system is installed, each controlled device is configured to respond to one of the 256 possible addresses (16 house codes × 16 unit codes); each device reacts to commands specifically addressed to it, or possibly to several broadcast commands.
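As a sketch of the address space only (not of the actual bit patterns, since, as noted further below, the codes do not form a straight binary sequence), the house/unit scheme can be written as:

def x10_address(house, unit):
    # Map house 'A'..'P' and unit 1..16 to two 4-bit values.
    # NOTE: real X10 hardware uses a non-sequential code table; this
    # identity-style mapping is a simplification for illustration.
    house_nibble = ord(house.upper()) - ord("A")
    if not (0 <= house_nibble <= 15 and 1 <= unit <= 16):
        raise ValueError("house must be A-P, unit must be 1-16")
    return house_nibble, unit - 1

print(x10_address("A", 3))  # (0, 2)
print(16 * 16)              # 256 distinct addresses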
The protocol may transmit a message that says "select code A3", followed by "turn on", which commands unit "A3" to turn on its device. Several units can be addressed before giving the command, allowing a command to affect several units simultaneously. For example, "select A3", "select A15", "select A4", and finally, "turn on", causes units A3, A4, and A15 to all turn on.
Note that there is no restriction that prevents using more than one house code within a single house. The "all lights on" command and "all units off" commands will only affect a single house code, so an installation using multiple house codes effectively has the devices divided into separate zones.
Inexpensive X10 devices only receive commands and do not acknowledge their status to the rest of the network. Two-way controller devices allow for a more robust network but cost two to four times more and require two-way X10 devices. [ 4 ]
Note that the binary values for the house and unit codes correspond, but they are not a straight binary sequence. A unit code is followed by one additional "0" bit to distinguish from a command code (detailed above).
In the 60 Hz AC current flow, each bit transmitted requires two zero crossings. A "1" bit is represented by an active zero crossing followed by an inactive zero crossing. A "0" bit is represented by an inactive zero crossing followed by an active zero crossing. An active zero crossing is represented by a 1 millisecond burst of 120 kHz at the zero crossing point (nominally 0°, but within 200 microseconds of the zero crossing point). An inactive zero crossing will not have a pulse of 120 kHz signal.
In order to provide a predictable start point, every data frame transmitted always begins with a start code of three active zero crossings followed by an inactive crossing. Since all data bits are sent as one active and one inactive (or one inactive and one active) zero crossing, the start code, possessing three active crossings in a row, can be uniquely detected. Many X10 protocol charts represent this start code as "1110", but it is important to realize that this is in terms of zero crossings, not data bits.
Immediately after the start code, a 4-bit house code (normally represented by the letters A to P on interface units) appears, and after the house code comes a 5-bit function code . Function codes may specify a unit number code (1–16) or a command code. The unit number or command code occupies the first 4 of the 5 bits. The final bit is a 0 for a unit code and a 1 for a command code. Multiple unit codes may be transmitted in sequence before a command code is finally sent. The command will be applied to all unit codes sent. It is also possible to send a message with no unit codes, just a house code and a command code. This will apply the command to the last group of unit codes previously sent.
One start code, one house code, and one function code is known as an X10 frame and represents the minimum components of a valid X10 data packet.
Each frame is sent twice in succession, for redundancy and reliability and to accommodate line repeaters, so that receivers can understand it over any power line noise. After allowing for retransmission, line control, etc., data rates are around 20 bit/s , making X10 data transmission so slow that the technology is confined to turning devices on and off or other very simple operations.
Whenever the data changes from one address to another address, from an address to a command, or from one command to another command, the data frames must be separated by at least 6 clear zero crossings (or "000000"). The sequence of six zeros resets the device decoder hardware.
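The framing rules above can be condensed into a short sketch. The house and function bit patterns are shown as straight binary for readability (real X10 uses a non-sequential code table), so this models the frame structure and zero-crossing expansion rather than production encodings:

def bits_to_crossings(bits):
    # Each data bit occupies two zero crossings:
    # 1 -> active then inactive, 0 -> inactive then active.
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

def x10_frame(house_nibble, code_nibble, is_command):
    start = [1, 1, 1, 0]  # start code, expressed directly in zero crossings
    house = [(house_nibble >> i) & 1 for i in (3, 2, 1, 0)]
    # 5-bit function code: 4 bits plus a final 0 (unit) or 1 (command).
    func = [(code_nibble >> i) & 1 for i in (3, 2, 1, 0)] + [1 if is_command else 0]
    return start + bits_to_crossings(house + func)

frame = x10_frame(0b0000, 0b0010, is_command=False)  # a "house A, unit"-style frame
print(len(frame))  # 22 zero crossings; sent twice plus 6 clear crossings,
                   # at 120 crossings/s (60 Hz) this is why throughput is ~20 bit/s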
Later hardware developments (1997) improved on the native X10 hardware, followed by versions for the European 230 VAC, 50 Hz market (2001). All improved products use the same X10 protocol and are compatible.
To allow for wireless keypads, remote switches, motion sensors, et cetera, an RF protocol is also defined. X10 wireless devices send data packets that are nearly identical to the NEC IR protocol used by many IR remotes, and a radio receiver then provides a bridge which translates these radio packets to ordinary X10 power line control packets. The wireless protocol operates at a frequency of 310 MHz in the U.S. and 433.92 MHz in European systems.
The devices available using the radio protocol include:
Depending on the load that is to be controlled, different modules must be used. For incandescent lamp loads, a lamp module or wall switch module can be used. These modules switch the power using a TRIAC solid state switch and are also capable of dimming the lamp load. Lamp modules are almost silent in operation, and generally rated to control loads ranging from approximately 60 to 500 watts .
For loads other than incandescent lamps, such as fluorescent lamps , high-intensity discharge lamps , and electrical home appliances , the triac-based electronic switching in the lamp module is unsuitable and an appliance module must be used instead. These modules switch the power using an impulse relay . In the U.S., these modules are generally rated to control loads up to 15 amperes (1800 watts at 120 V).
Many device modules offer a feature called local control . If the module is switched off, operating the power switch on the lamp or appliance will cause the module to turn on. In this way, a lamp can still be lit or a coffee pot turned on without the need to use an X10 controller. Wall switch modules may not offer this feature. Because local control relies on sensing a small current through the attached load, older appliance modules may fail to work with a very low load such as a 5 W LED table lamp.
Some wall switch modules offer a feature called local dimming . Ordinarily, the local push button of a wall switch module simply offers on/off control with no possibility of locally dimming the controlled lamp. If local dimming is offered, holding down the push button will cause the lamp to cycle through its brightness range.
Higher end modules have more advanced features such as programmable on levels, customizable fade rates, the ability to transmit commands when used (referred to as 2-way devices), and scene support.
There are sensor modules that sense and report temperature, light, infrared, motion, or contact openings and closures. Device modules include thermostats, audible alarms and controllers for low voltage switches.
X10 controllers range from extremely simple to very sophisticated.
The simplest controllers are arranged to control four X10 devices at four sequential addresses (1–4 or 5–8). The controllers typically contain the following buttons:
More sophisticated controllers can control more units and/or incorporate timers that perform preprogrammed functions at specific times each day. Units are also available that use passive infrared motion detectors or photocells to turn lights on and off based on external conditions.
Finally, very sophisticated units are available, such as the CM11A serial interface, that can be fully programmed and/or controlled with a piece of software called Active Home. These systems can execute many different timed events, respond to external sensors, and execute, with the press of a single button, an entire scene , turning lights on, establishing brightness levels, and so on. Control programs are available for computers running Microsoft Windows , Apple's Macintosh , Linux and FreeBSD operating systems.
Burglar alarm systems are also available. These systems contain door/window sensors, as well as motion sensors that use a coded radio frequency (RF) signal to identify when they are tripped or just to routinely check-in and give a heart-beat signal to show that the system is still active. Users can arm and disarm their system via several different remote controls that also use a coded RF signal to ensure security. When an alarm is triggered the console will make an outbound telephone call with a recorded message. The console will also use X10 protocols to flash lights when an alarm has been triggered while the security console sounds an external siren. Using X10 protocols, signals will also be sent to remote sirens for additional security.
There are bridges to translate X10 to other home automation standards (e.g., KNX ). ioBridge can be used to translate the X10 protocol to a web service API via the X10 PSC04 Powerline Interface Module. The magDomus home controller from magnocomp allows interconnection and inter-operation between most home automation technologies.
Thermostat
With X10 being an open standard, companies such as RCS released an X10-controllable thermostat, model TX15-B, which is controllable via a web interface or a computer running X10 software such as HAL or HomeSeer.
Solid-state switches used in X10 controls pass a very small leakage current. Compact fluorescent lamps may display nuisance blinking when switched off; CFL manufacturers recommend against controlling lamps with solid-state timers or remote controls.
Some X10 controllers with triac solid-state outputs may not work well with low power devices (below 50 watts) or devices like fluorescent bulbs due to the leakage current of the device. An appliance module, using a relay with metallic contacts may resolve this problem. Many older appliance units have a 'local control' feature whereby the relay is intentionally bypassed with a high value resistor; the module can then sense the appliance's own switch and turn on the relay when the local switch is operated. This sense current may be incompatible with LED or CFL lamps.
Not all devices can be used on a dimmer. Fluorescent lamps are not dimmable with incandescent lamp dimmers; certain models of compact fluorescent lamps are dimmable but cost more. Motorized appliances such as fans, etc. generally will not operate as expected on a dimmer.
One problem with X10 is excessive attenuation of signals between the two live conductors in the 3-wire 120/240 volt system used in typical North American residential construction. Signals from a transmitter on one live conductor may not propagate through the high impedance of the distribution transformer winding to the other live conductor. Often, there's simply no reliable path to allow the X10 signals to propagate from one transformer leg wire to the other; this failure may come and go as large 240 volt devices such as stoves or dryers are turned on and off. (When turned on, such devices provide a low-impedance bridge for the X10 signals between the two leg wires.) This problem can be permanently overcome by installing a capacitor between the leg wires as a path for the X10 signals; manufacturers commonly sell signal couplers that plug into 240 volt sockets that perform this function. More sophisticated installations install an active repeater device between the legs, while others combine signal amplifiers with a coupling device. A repeater is also needed for inter-phase communication in homes with three-phase electric power . In many countries outside North America, entire houses are typically wired from a single 240 volt single-phase wire, so this problem does not occur.
Television receivers or household wireless devices may cause spurious "off" or "on" signals. Noise filtering (as installed on computers as well as many modern appliances) may help keep external noise out of X10 signals, but noise filters not designed for X10 may also attenuate X10 signals traveling on the branch circuit to which the appliance is connected.
Certain types of power supplies used in modern electronic equipment, such as computers, television receivers and satellite receivers, attenuate passing X10 signals by providing a low impedance path to high frequency signals. Typically, the capacitors used on the inputs to these power supplies short the X10 signal from line to neutral, suppressing any hope of X10 control on the circuit near that device. Filters are available that will block the X10 signals from ever reaching such devices; plugging offending devices into such filters can cure mysterious X10 intermittent failures.
Having a backup power supply or standby power supply, such as those used with computers or other electronic devices, can completely block X10 signals on that leg of a household installation because of the filtering used in the power supply.
X10 signals can only be transmitted one command at a time, first by addressing the device to control, and then sending an operation for that device to perform. If two X10 signals are transmitted at the same time they may collide or interleave, leading to commands that either cannot be decoded or that trigger incorrect operations. The CM15A and RR501 Transceiver can avoid these signal collisions that can sometimes occur with other models.
The X10 protocol is slow. It takes roughly three quarters of a second to transmit a device address and a command. While generally not noticeable when using a tabletop controller, it becomes a noticeable problem when using 2-way switches or when utilizing some sort of computerized controller. The apparent delay can be lessened somewhat by using slower device dim rates. With more advanced modules another option is to use group control (lighting scene) extended commands. These allow adjusting several modules at once by a single command.
The X10 protocol does support more advanced control over the dimming speed, direct dim level setting and group control (scene settings). This is done via the extended message set, which is an official part of the X10 standard. However, support for all extended messages is not mandatory, and many cheaper modules implement only the basic message set. These require adjusting each lighting circuit one after the other, which can be visually unappealing and also very slow.
The standard X10 power line and RF protocols lack support for encryption, and can only address 256 devices. Unfiltered power line signals from close neighbors using the same X10 device addresses may interfere with each other. Interfering RF wireless signals may similarly be received, with it being easy for anyone nearby with an X10 RF remote to wittingly or unwittingly cause mayhem if an RF to power line device is being used on a premises.
|
https://en.wikipedia.org/wiki/X10_(industry_standard)
|
The X:A ratio is the ratio between the number of X chromosomes and the number of sets of autosomes in an organism . This ratio is used primarily for determining the sex of some species, such as drosophila flies and the C. elegans nematode. [ 1 ] The first use of this ratio for sex determination is ascribed to Victor M. Nigon. [ 1 ]
Generally, a 1:1 ratio results in a female and a 1:2 ratio results in a male. When calculating the ratio, Y chromosomes are ignored. For example, for a diploid drosophila that has XX, the ratio is 1:1 (2 Xs to 2 sets of autosomes, since it is a diploid). For a diploid drosophila that has XY, the ratio is 1:2 (1 X to 2 sets of autosomes, since it is diploid). [ 2 ] In Drosophila, the X:A ratio determines the dose of X-encoded factors that enhance the synthesis of the Sxl protein, which in turn activates the female-specific pathway.
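The rule described above reduces to a few lines of logic; the handling of intermediate ratios below (e.g. intersex phenotypes) is a simplification added for illustration:

def xa_ratio(n_x, n_autosome_sets):
    # Y chromosomes are ignored when computing the ratio.
    return n_x / n_autosome_sets

def drosophila_sex(karyotype, n_autosome_sets=2):
    ratio = xa_ratio(karyotype.upper().count("X"), n_autosome_sets)
    if ratio >= 1.0:
        return "female"  # e.g. XX in a diploid: 2/2 = 1.0
    if ratio <= 0.5:
        return "male"    # e.g. XY in a diploid: 1/2 = 0.5
    return "intersex"    # intermediate ratios (simplified assumption)

print(drosophila_sex("XX"))  # female
print(drosophila_sex("XY"))  # male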
|
https://en.wikipedia.org/wiki/X:A_ratio
|
XAM , or the eXtensible Access Method , is a standard for computer data storage developed by IBM and EMC and maintained by the Storage Networking Industry Association (SNIA). [ 1 ] [ 2 ] It was ratified as an ANSI standard by early 2011. [ 3 ] [ 4 ] XAM is an API for fixed content aware storage devices. XAM replaces the various proprietary interfaces that have been used for this purpose in the past. Content generating applications now have a standard means of saving and finding their content across a broad array of storage devices.
XAM is similar in function to a file-system API such as the POSIX file and directory operations, in that it allows applications to store and retrieve their data. XAM stores application data in XSet objects that also contain metadata .
|
https://en.wikipedia.org/wiki/XAM
|
xAP is an open protocol used for home automation and supports integration of telemetry and control devices, primarily within the home. Common communications networks include RS-232 , RS-485 , Ethernet and wireless . The xAP protocol always uses broadcast to send messages. All receivers listen to each message and inspect the message header to determine whether the message is of interest to them. The xAP protocol has the following key advantages.
|
https://en.wikipedia.org/wiki/XAP_Home_Automation_protocol
|
XCMS Online is a cloud version of the original eXtensible Computational Mass Spectrometry (XCMS) [ 1 ] [ 2 ] [ 3 ] technology (a bioinformatics software designed for statistical analysis of mass spectrometry data), created by the Siuzdak Lab at Scripps Research . XCMS introduced the concept of nonlinear retention time alignment, which allowed for the statistical assessment of the detected peaks across LC-MS and GC-MS datasets. [ 1 ] XCMS Online [ 4 ] was designed to facilitate XCMS analyses through a cloud portal and as a more straightforward [ 5 ] (non-command-driven) way to analyze, visualize and share untargeted metabolomic data. [ 4 ] Further to this, the combination of XCMS and METLIN [ 6 ] [ 7 ] [ 8 ] allows for the identification of known molecules using METLIN's tandem mass spectrometry data, and enables the identification of unknown (uncharacterized) molecules via similarity searching of tandem mass spectrometry data. [ 9 ] [ 8 ] [ 10 ] XCMS Online has also become a systems biology tool for integrating different omic data sets. [ 11 ] As of January 2021, [ 12 ] the XCMS Online - METLIN platform has over 44,000 registered users. XCMS - METLIN was recognized in 2023 as the year's top analytical innovation. [ 10 ]
XCMS Online works by comparing groups of raw or preprocessed metabolomic data to discover metabolites, using methods such as nonlinear retention time alignment and feature detection and matching. Once analysis is complete, the data can be viewed in several different ways, including bubble plots, heat maps, chromatograms, and box plots. In addition, XCMS Online is integrated with METLIN , a large metabolite database. [ 1 ] [ 13 ] The following file formats are supported for direct upload to the site. [ 14 ]
In 2005, the Siuzdak Lab created an open-source tool named XCMS [ 1 ] in the programming language R . Noticing the need for a more accessible, graphical data processing tool, they created the cloud-based XCMS Online in 2012. [ 4 ] [ 15 ] The ability for users to stream data directly from instruments as it is being acquired was added in 2014. [ 16 ] Also that year, a commercial version named XCMS Plus (owned by Mass Consortium Corporation) was released and, in 2015, SCIEX became a reseller. [ 17 ] In 2017 it was shown that XCMS Online could be used in a systems biology workflow. [ 18 ] One year later, in the absence of a publicly available alternative, a version of XCMS Online was released with the ability to perform multiple reaction monitoring (MRM). [ 19 ]
|
https://en.wikipedia.org/wiki/XCMS_Online
|
xDNA (also known as expanded DNA or benzo-homologated DNA ) is a size-expanded nucleotide system synthesized from the fusion of a benzene ring and one of the four natural bases: adenine , guanine , cytosine , and thymine . [ 1 ] This size expansion produces an 8-letter alphabet with a larger information storage capacity than the 4-letter alphabet of natural DNA (often referred to as B-DNA in the literature). [ 2 ] As with normal base-pairing , A pairs with xT, C pairs with xG, G pairs with xC, and T pairs with xA. The double helix is thus 2.4 Å wider than a natural double helix. [ 3 ] [ 4 ] While similar in structure to B-DNA, xDNA has unique absorption, fluorescence, and stacking properties. [ 5 ] [ 6 ] [ 7 ]
Initially synthesized as an enzyme probe by Nelson J. Leonard's group, benzo-homologated adenine was the first base synthesized. Later, Eric T. Kool 's group finished synthesizing the remaining three expanded bases , eventually followed by yDNA ("wide" DNA), another benzo-homologated nucleotide system, and the naphtho -homologated xxDNA and yyDNA. xDNA is more stable than regular DNA when subjected to higher temperatures, and while entire strands of xDNA, yDNA, xxDNA and yyDNA exist, they are currently difficult to synthesize and maintain. Experiments with xDNA provide new insight into the behavior of natural B-DNA. The expanded bases xA, xC, xG, and xT are naturally fluorescent , and single strands composed of only expanded bases can recognize and bind to single strands of natural DNA, making them useful tools for studying biological systems. [ 3 ] [ 8 ] xDNA is most commonly formed with base pairs between a natural and an expanded nucleobase ; however, x-nucleobases can also be paired together. [ 5 ] Current research supports xDNA as a viable genetic encoding system in the near future. [ 4 ]
The first nucleotide to be expanded was the purine adenine . Nelson J. Leonard and colleagues synthesized this original x-nucleotide, which was referred to as "expanded adenine". xA was used as a probe in the investigation of active sites of ATP -dependent enzymes , more specifically what modifications the substrate could take while still being functional. [ 8 ] [ 9 ] Almost two decades later, the other three bases were successfully expanded and later integrated into a double helix by Eric T. Kool and colleagues. Their goal was to create a synthetic genetic system which mimics and surpasses the functions of the natural genetic system, [ 10 ] and to broaden the applications of DNA both in living cells and in experimental biochemistry . Once the expanded base set was created, the goal shifted to identifying or developing faithful replication enzymes and further optimizing the expanded DNA alphabet. [ 8 ]
In benzo-homologated purines (xA and xG), the benzene ring is bound to the nitrogenous base through nitrogen-carbon (N-C) bonds. Benzo-homologated pyrimidines are formed through carbon-carbon (C-C) bonds between the base and the benzene. [ 3 ] Thus far, x-nucleobases have been added to strands of DNA using phosphoramidite derivatives, as traditional polymerases have been unsuccessful in synthesizing strands of xDNA. X-nucleotides are poor candidates as substrates for B-DNA polymerases because their size interferes with binding at the catalytic domain . Attempts at using template-independent enzymes have been successful, as these have a reduced geometric constraint for substrates. Terminal deoxynucleotidyl transferase (TdT) has previously been used to synthesize strands of bases bound to fluorophores . Using TdT , up to 30 monomers can be combined to form a double helix of xDNA; however, this oligomeric xDNA appears to inhibit its own extension beyond this length due to the overwhelming hydrogen bonding. To minimize inhibition, xDNA can be hybridized into a regular helix. [ 7 ] [ 11 ]
For xDNA to be used as a substitute structure for information storage, it requires a reliable replication mechanism. Research into xDNA replication using a Klenow fragment from DNA polymerase I shows that a natural base partner is selectively added in instances of single-nucleotide insertion. However, DNA polymerase IV (Dpo4) has been able to successfully use xDNA for these types of insertions with high fidelity, making it a promising candidate for future research in extending replicates of xDNA. [ 4 ] xDNA's mismatch sensitivity is similar to that of B-DNA . [ 2 ]
Similar to natural bases, x-nucleotides selectively assemble into a duplex structure resembling B-DNA. [ 4 ] xDNA was originally synthesized by incorporating a benzene ring into the nitrogenous base; however, other expanded bases have incorporated thiophene and benzo[b]thiophene as well. xDNA and yDNA use benzene rings to widen the bases and are thus termed "benzo-homologated". Other forms of expanded nucleobases, known as xxDNA and yyDNA, incorporate naphthalene into the base and are "naphtho-homologated". xDNA has a rise of 3.2 Å and a twist of 32°, significantly smaller than B-DNA, which has a rise of 3.3 Å and a twist of 34.2°. [ 3 ] xDNA nucleotides can occur on both strands (either alone, known as "doubly expanded DNA", [ 8 ] or mixed with natural bases) or exclusively on one strand or the other. Similar to B-DNA, xDNA can recognize and bind complementary single-stranded DNA or RNA sequences. [ 2 ]
Duplexes formed from xDNA are similar to natural duplexes aside from the distance between the two sugar-phosphate backbones. xDNA helices have a greater number of base pairs per turn of the helix as a result of the reduced distance between neighbouring nucleotides. NMR spectra show that xDNA helices are anti-parallel and right-handed, and take an anti conformation around the glycosidic bond , with a C2'-endo sugar pucker. [ 5 ] [ 11 ] Helices created from xDNA are more likely to take a B-helix over an A-helix conformation, [ 2 ] and have a major groove width increased by 6.5 Å (where the backbones are farthest apart) and a minor groove width decreased by 5.5 Å (where the backbones are closest together) compared to B-DNA . Altering groove width affects xDNA's ability to associate with DNA-binding proteins , [ 12 ] but as long as the expanded nucleotides are exclusive to one strand, recognition sites are sufficiently similar to B-DNA to allow binding of transcription factors and small polyamide molecules. Mixed helices present the possibility of recognizing the four expanded bases using other DNA-binding molecules. [ 11 ]
Expanded nucleotides and their oligomeric helices share many properties with their natural B-DNA counterparts, including their pairing preference: A with T , C with G . [ 11 ] The various differences in chemical properties between xDNA and B-DNA support the hypothesis that the benzene ring which expands x-nucleobases is not, in fact, chemically inert. [ 5 ] xDNA is more hydrophobic than B-DNA , [ 7 ] and also has a smaller HOMO-LUMO gap (distance between the highest occupied molecular orbital and lowest unoccupied molecular orbital) as a result of modified saturation . [ 3 ] xDNA has higher melting temperatures than B-DNA (a mixed decamer of xA and T has a melting temperature of 55.6 °C, 34.3 °C higher than the same decamer of A and T [ 11 ] ), and exhibits an "all-or-nothing" melting behaviour. [ 2 ]
Under lab conditions, xDNA orients itself in the syn conformation . This does not expose the binding face of the xDNA nucleotides toward the neighbouring strand, so extra measures must be taken to alter the conformation of xDNA before attempting to form helices. However, the anti and syn orientations are practically identical energetically in expanded bases. [ 9 ] This conformational preference is seen primarily in pyrimidines ; purines display minimal preference for orientation. [ 5 ]
Stacking of the nucleotides in a double helix is a major determinant of the helix's stability. With the added surface area and additional hydrogens available for bonding, the stacking potential of the nucleobases increases with the addition of a benzene spacer. By increasing the separation between the nitrogenous bases and either sugar-phosphate backbone, the helix's stacking energy is less variable and therefore more stable: stacking energies for natural nucleobase pairs vary from 18 to 52 kJ/mol, whereas for xDNA the range is only 14–40 kJ/mol. [ 8 ]
Due to an increased overlap between an expanded strand of DNA and its neighbouring strand, there are greater interstrand interactions in expanded and mixed helices, resulting in a significant increase in the helix's stability. xDNA has enhanced stacking abilities resulting from changes in inter- and intrastrand hydrogen bonding that arise from the addition of a benzene spacer, but expanding the bases does not alter hydrogen's contribution to the stability of the duplex. These stacking abilities are exploited by helices consisting of both xDNA and B-DNA in order to optimize the strength of the helix. Increased stacking is seen most prominently in strands consisting only of A, xA, T and xT, as T -xA has stronger stacking interactions than T - A . [ 3 ]
The stacking energy for pyrimidines ranges from 30 to 49 kJ/mol; for purines it is between 40 and 58 kJ/mol. Replacing one nucleotide in a double helix with an expanded nucleotide increases the strength of the stacking interactions by 50%; expanding both nucleotides results in a 90% increase in stacking strength. While xG has an overall negative effect on the binding strength of the helix, the other three expanded bases outweigh this with their positive effects. The change in energy caused by expanding the bases is mostly dependent on the rotation of the bond about the nucleobases' centers of mass , and center-of-mass stacking interactions improve the stacking potential of the helix. [ 5 ] Because the size-expanded bases widen the helix, it is more thermally stable, with a higher melting temperature. [ 7 ]
The addition of a benzene spacer in x- nucleobases affects the bases' optical absorption spectra. Time-dependent density functional theory (TDDFT) applied to xDNA revealed that the benzene component of the highest occupied molecular orbitals ( HOMO ) in the x-bases pins the absorption onset at an earlier point than natural bases . Another unusual feature of xDNA absorption spectra is the red-shifted excimers of xA in the low range. In terms of stacking fingerprints, there is a more pronounced hypochromicity seen in consecutive xA- T base pairs .
Implications of xDNA's altered absorption include applications in nanoelectronic technology and nanobiotechnology . The reduced spacing between x-nucleotides makes the helix stiffer, thus it is not as easily affected by substrate , electrode , and functional nanoparticle forces. Other alterations to natural nucleotides resulting in different absorption spectra will broaden these applications in the future. [ 6 ]
One unique property of xDNA is its inherent fluorescence . Natural bases can be bound directly to fluorophores for use in microarrays , in situ hybridization , and polymorphism analysis. However, these fluorescent natural bases often fail as a result of self-quenching , which diminishes their fluorescent intensity and reduces their applicability as visual DNA tags. The pi interactions between the rings in x-nucleobases result in an inherent fluorescence in the violet-blue range, with a Stokes shift between 50 and 80 nm. They also have a quantum yield in the range of 0.3–0.6. xC has the greatest fluorescent emission. [ 10 ] [ 7 ]
After the creation of and successful research surrounding xDNA, more forms of expanded nucleotides were investigated. yDNA is a second, similar system of nucleotides which uses a benzene ring to expand the four natural bases . xxDNA and yyDNA use naphthalene , a polycyclic molecule consisting of two hydrocarbon rings. The two rings expand the base even wider, further altering its chemical properties.
The success and implications of xDNA prompted research into other factors which could alter B-DNA 's chemical properties and create a new system for information storage with broader applications. yDNA also uses a benzene ring , similar to xDNA, the only difference being the site of addition of the aromatic ring . The location of the benzene ring changes the preferred structure of the expanded helix. The altered conformation makes yDNA more similar to B-DNA in its orientation by changing the interstrand hydrogen bonds . Stability is highly dependent on the bases' rotation about the link between the base and the sugar of the backbone, and yDNA's altered preference for this orientation makes it more stable overall than xDNA. The location of the benzene spacer also affects the bases' groove geometry, altering neighbour interactions. The base pairs between y-nucleotides and natural nucleotides are planar, rather than slightly twisted as with xDNA; this decreases the rise of the helix even further than achieved by xDNA.
While xDNA and yDNA are quite similar in most properties, including their increased stacking interactions, yDNA shows superior mismatch recognition. y-pyrimidines display slightly stronger stacking interactions than x-pyrimidines as a result of the distance between the two anomeric carbons, which is slightly larger in yDNA. xDNA still has stronger stacking interactions in model helices, but adding either x- or y-pyrimidines to a natural double helix strengthens the intra- and interstrand interactions, increasing overall helix stability. In the end, which of the two has the stronger overall stacking interactions depends on the sequence : xT and yT bind A with similar strength, but the stacking energy of yC bound to G is stronger than that of xC by 4 kJ/mol. yDNA and the other expanded bases are part of a young field that remains understudied. Research suggests that the ideal conformation has yet to be discovered, but knowing that the benzene location affects the orientation and structure of expanded nucleobases informs their future design. [ 8 ]
Doubly-expanded (or naphtho-homologated ) nucleobases incorporate a naphthalene spacer instead of a benzene ring , widening the base twice as much with its two-ringed structure. These structures (known as xxDNA and yyDNA) are 4.8 Å wider than natural bases and were once again created as a result of Leonard's research on expanded adenine in ATP -dependent enzymes in 1984. No literature was published on these doubly-expanded bases for nearly three decades until 2013 when the first xxG was produced by Sharma, Lait, and Wetmore and incorporated along with xxA into a natural helix . Although very little research has been performed on xxDNA, xx- purine neighbours have already been shown to increase intrastrand stacking energy by up to 119% (as opposed to 62% in x-purines). xx- purine and pyrimidine interactions show an overall decrease in stacking energies, but the overall stability of duplexes including pyrimidines and xx-purines increases by 22%, more than twofold that of pyrimidines and x-purines. [ 9 ]
xDNA has many applications in chemical and biological research, including expanding upon applications of natural DNA , such as scaffolding. In order to create self-assembling nanostructures, a scaffold is needed as a sort of trellis to support the growth. DNA has been used as a means to this end in the past, but expanded bases make larger scaffolds for more complex self-assembly an option. [ 1 ] xDNA's electrical conduction properties also make it a prime candidate as a molecular wire , as its π-π interactions help it efficiently conduct electricity. [ 3 ] Its 8-letter alphabet ( A , T , C , G , xA, xT, xC, xG) gives it the potential to store 2^n times more states per sequence than DNA, where n is the number of bases in the sequence. For example, a sequence of 6 B-DNA nucleotides yields 4,096 possible sequences, whereas a sequence of the same number of xDNA nucleotides yields 262,144 possible sequences. Additionally, xDNA can be used as a fluorescent probe at enzyme active sites , as was its original application by Leonard et al. [ 2 ]
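The arithmetic behind these sequence counts is simple exponentiation, as the following quick sketch shows:

    n = 6                     # bases per sequence
    print(4 ** n)             # B-DNA, 4-letter alphabet: 4096
    print(8 ** n)             # xDNA, 8-letter alphabet: 262144
    print(8 ** n // 4 ** n)   # ratio: 2**n = 64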
xDNA has also been applied to the study of protein-DNA interactions . Due to xDNA's natural fluorescing properties, it can easily be visualized in both lab and living conditions. [ 5 ] xDNA is becoming easier to create and oligomerize , and its high-affinity binding to complementary DNA and RNA sequences means that it can help locate these sequences in the cell, both when free-floating and when already interacting with other cellular structures. [ 10 ] xDNA also has potential applications in assays that employ TdT , as it may improve reporters, and it can be used as an affinity tag for interstrand bonding. [ 7 ]
|
https://en.wikipedia.org/wiki/XDNA
|
XDrawChem is a free software program for drawing chemical structural formulas , available for Unix and macOS . It is distributed under the GNU GPL . On Microsoft Windows the program is called WinDrawChem. [ 1 ]
|
https://en.wikipedia.org/wiki/XDrawChem
|
XEUS ( X-ray Evolving Universe Spectroscopy ) was a space observatory plan developed by the European Space Agency (ESA) as a successor to the successful XMM-Newton X-ray satellite telescope . It was merged into the International X-ray Observatory (IXO) around 2008, but when that project ran into issues in 2011, the ESA component was forked off into the Advanced Telescope for High Energy Astrophysics (Athena).
XEUS consisted of a mirror spacecraft carrying a large X-ray telescope, with a mirror area of about 5 m² and an imaging resolution better than 5 arcsec for X-ray radiation with an energy of 1 keV . A detector spacecraft would have flown in formation with the telescope at a distance of approximately 50 m, in the focus of the telescope. The detectors would have included a wide-field X-ray imager with an energy resolution of 150 eV at 6 keV, as well as a cryogenic narrow-field imager with an energy resolution of 2 eV at 1 keV.
XEUS could have measured the X-ray spectrum and thereby the composition, temperature and velocities of hot matter in the early universe. It would have addressed diverse questions such as the origin and nature of black holes, their relation to star formation, baryon evolution, and the formation of the heavy elements in the Universe.
The technology developed for XEUS was carried into its follow-on project, the International X-ray Observatory , eventually leading to the Advanced Telescope for High Energy Astrophysics ( Athena ), which is currently under development. XEUS was one of the candidates for the Cosmic Vision programme of the European Space Agency. [ 1 ]
In May 2008, ESA and NASA established a coordination group involving three agencies ( ESA , NASA and JAXA ) with the intent of exploring a joint mission merging the ongoing XEUS and Constellation-X (Con-X) projects. [ 2 ] This proposed the start of a joint study for the International X-ray Observatory (IXO). [ 3 ]
|
https://en.wikipedia.org/wiki/XEUS
|
The XGC88000 crawler crane is a class of extremely large ultraheavy crawler crane made by XCMG , with a lifting capacity of 3,600 [ 5 ] to 4,000 tons, [ 6 ] a total boom length of 144 meters [ 3 ] and a total gross weight of 5,350 tons. [ 3 ] The XGC88000 became the largest tracked mobile crane in the world, [ 7 ] [ 8 ] [ 9 ] beating out the previous record holder, the Liebherr LR 13000 , when it officially came into production in 2013. However, in terms of absolute size, movability, and strength, the title still goes to the Honghai Crane , which runs on rails.
It is also one of the largest ground vehicles in current operation and, with its official production in 2013, became the largest self-propelled ground vehicle by gross size, beating out the NASA crawler-transporters .
The XGC88000 crawler crane, unlike the majority of crawler cranes, comes in two sections. The primary section consists of the crane itself, which boasts a maximum boom length of 144 meters, a maximum total length of 173 meters (including the counterweight radius), a maximum height (when fully erect) of 108 meters, a lifting capacity ranging between 3,600 and 4,000 tons [ 10 ] [ 11 ] [ 12 ] (although it managed to lift a maximum overload of 4,500 tons [ 13 ] ), and a maximum lifting momentum of 88,000 ton-meter. [ 14 ]
The vehicle itself is powered by three 641 kW (860 hp) Cummins engine units from the United States, outputting a total power of 1,923 kW (2,579 hp). [ 1 ] Each power unit can act as a mobile hydraulic power station. [ 1 ] The units can also serve as an additional power source during the crane assembly/disassembly process to improve assembly efficiency, as well as act as spare units for one another. [ 1 ] The crane driver sits in a spacious cabin the size of a large office room. [ 15 ] The cabin has an air conditioner, a seat, and a small sofa to accommodate three additional passengers. [ 15 ]
The second section is a separate tracked carrier which holds the crane's entire counterweight. The counterweight has a total height of 9.7 meters, a total length of 29 meters and a weight of 2,900 tons. [ 16 ] The carrier houses its own driver's cockpit, can be driven independently, and requires a working radius of 29 meters. [ 17 ] The maximum weight of the vehicle clocks in at nearly 5,400 tons. [ 17 ]
|
https://en.wikipedia.org/wiki/XGC88000_crawler_crane
|
XJB-5-131 is a synthetic antioxidant . In a mouse model of Huntington's disease , it has been shown to reduce oxidative damage to mitochondrial DNA , and to maintain mitochondrial DNA copy number. [ 1 ] XJB-5-131 also strongly protects against ferroptosis , a form of iron-dependent regulated cell death. [ 2 ]
|
https://en.wikipedia.org/wiki/XJB-5-131
|
XL-413 is a drug which acts as a selective inhibitor of the enzyme cell division cycle 7-related protein kinase (CDC7). It is being researched for the treatment of some forms of cancer , and also has applications in genetic engineering . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
This pharmacology -related article is a stub . You can help Wikipedia by expanding it .
This genetics article is a stub . You can help Wikipedia by expanding it .
This article about biological engineering is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XL-413
|
XMD is a classical molecular dynamics program designed to simulate problems in materials science . The code was developed by Jon Rifkin of the University of Connecticut and is distributed under the GNU General Public License . Source code is available in C and can be compiled with POSIX thread functions to take advantage of multi-CPU computers.
This article about molecular modelling software is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XMD
|
XMDF ( ever-eXtending Mobile Document Format ) is a file format for viewing electronic books . It was originally developed by Sharp Corporation for its Zaurus platform. It is primarily used in Japan . [ 1 ]
This software article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XMDF_(E-book_format)
|
The Extended Metadata Registry (XMDR) is a project proposing and testing a set of extensions to the ISO/IEC 11179 metadata registry specifications that deal with the development of improved standards and technology for storing and retrieving the semantics of data elements , terminologies , and concept structures in metadata registries .
This computing article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XMDR
|
XMLTV is an XML -based file format for describing TV listings , introduced in 2002. IPTV providers use XMLTV as the base reference template in their systems and extend it internally according to their business needs. [ 1 ] [ 2 ] [ 3 ]
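As a rough illustration of the format, a minimal listing can be generated with Python's standard library (the channel ID and programme details below are hypothetical, and real XMLTV files carry many more elements and attributes):

    import xml.etree.ElementTree as ET

    tv = ET.Element("tv")                                  # root of an XMLTV document
    chan = ET.SubElement(tv, "channel", id="example.tv")   # hypothetical channel ID
    ET.SubElement(chan, "display-name").text = "Example TV"
    prog = ET.SubElement(tv, "programme", start="20020101200000 +0000",
                         channel="example.tv")             # one listings entry
    ET.SubElement(prog, "title").text = "Evening News"
    print(ET.tostring(tv, encoding="unicode"))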
This computing article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XMLTV
|
XMM-Newton , also known as the High Throughput X-ray Spectroscopy Mission and the X-ray Multi-Mirror Mission , is an X-ray space observatory launched by the European Space Agency in December 1999 on an Ariane 5 rocket. It is the second cornerstone mission of ESA's Horizon 2000 programme. Named after physicist and astronomer Sir Isaac Newton , the spacecraft is tasked with investigating interstellar X-ray sources, performing narrow- and broad-range spectroscopy , and performing the first simultaneous imaging of objects in both X-ray and optical ( visible and ultraviolet ) wavelengths. [ 7 ]
Initially funded for two years, with a ten-year design life, the spacecraft remains in good health and has received repeated mission extensions, most recently in March 2023; it is scheduled to operate until the end of 2026. [ 5 ] ESA plans to succeed XMM-Newton with the Advanced Telescope for High Energy Astrophysics (ATHENA), the second large mission in the Cosmic Vision 2015–2025 plan, to be launched in 2035. [ 8 ] XMM-Newton is similar to NASA 's Chandra X-ray Observatory , also launched in 1999.
As of May 2018, close to 5,600 papers have been published about either XMM-Newton or the scientific results it has returned. [ 9 ]
The observational scope of XMM-Newton includes the detection of X-ray emissions from astronomical objects, detailed studies of star-forming regions, investigation of the formation and evolution of galaxy clusters , the environment of supermassive black holes and mapping of the mysterious dark matter . [ 10 ]
In 1982, even before the launch of XMM-Newton 's predecessor EXOSAT in 1983, a proposal was generated for a "multi-mirror" X-ray telescope mission. [ 11 ] [ 12 ] The XMM mission was formally proposed to the ESA Science Programme Committee in 1984 and gained approval from the Agency's Council of Ministers in January 1985. [ 13 ] That same year, several working groups were established to determine the feasibility of such a mission, [ 11 ] and mission objectives were presented at a workshop in Denmark in June 1985. [ 12 ] [ 14 ] At this workshop, it was proposed that the spacecraft contain 12 low-energy and 7 high-energy X-ray telescopes. [ 14 ] [ 15 ] The spacecraft's overall configuration was developed by February 1987, and drew heavily from lessons learned during the EXOSAT mission; [ 11 ] the Telescope Working Group had reduced the number of X-ray telescopes to seven standardised units. [ 14 ] [ 15 ] In June 1988 the European Space Agency approved the mission and issued a call for investigation proposals (an "announcement of opportunity"). [ 11 ] [ 15 ] Improvements in technology further reduced the number of X-ray telescopes needed to just three. [ 15 ]
In June 1989, the mission's instruments had been selected and work began on spacecraft hardware. [ 11 ] [ 15 ] A project team was formed in January 1993 and based at the European Space Research and Technology Centre (ESTEC) in Noordwijk , Netherlands. [ 13 ] Prime contractor Dornier Satellitensysteme (a subsidiary of the former DaimlerChrysler Aerospace ) was chosen in October 1994 after the mission was approved into the implementation phase, with development and construction beginning in March 1996 and March 1997, respectively. [ 13 ] [ 14 ] The XMM Survey Science Centre was established at the University of Leicester in 1995. [ 11 ] [ 16 ] The three flight mirror modules for the X-ray telescopes were delivered by Italian subcontractor Media Lario in December 1998, [ 14 ] and spacecraft integration and testing was completed in September 1999. [ 13 ]
XMM left the ESTEC integration facility on 9 September 1999, taken by road to Katwijk then by the barge Emeli to Rotterdam . On 12 September, the spacecraft left Rotterdam for French Guiana aboard Arianespace 's transport ship MN Toucan . [ 17 ] The Toucan docked at the French Guianese town of Kourou on 23 September, and was transported to Guiana Space Centre 's Ariane 5 Final Assembly Building for final launch preparation. [ 18 ]
Launch of XMM took place on 10 December 1999 at 14:32 UTC from the Guiana Space Centre. [ 19 ] XMM was lofted into space aboard an Ariane 5 rocket, and placed into a highly elliptical, 40-degree orbit that had a perigee of 838 km (521 mi) and an apogee of 112,473 km (69,887 mi). [ 2 ] Forty minutes after being released from the Ariane upper stage, telemetry confirmed to ground stations that the spacecraft's solar arrays had successfully deployed. Engineers waited an additional 22 hours before commanding the on-board propulsion systems to fire a total of five times, which, between 10 and 16 December, changed the orbit to 7,365 × 113,774 km (4,576 × 70,696 mi) with a 38.9-degree inclination. This resulted in the spacecraft making one complete revolution of the Earth approximately every 48 hours. [ 2 ] [ 20 ]
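The quoted 48-hour revolution is consistent with Kepler's third law applied to the final orbit; a quick check (assuming a mean Earth radius of 6,371 km and the standard gravitational parameter of the Earth):

    import math

    mu = 398600.4418                            # Earth's GM, km^3/s^2
    r_earth = 6371.0                            # assumed mean Earth radius, km
    a = (2 * r_earth + 7365.0 + 113774.0) / 2   # semi-major axis from the two altitudes
    period_hours = 2 * math.pi * math.sqrt(a ** 3 / mu) / 3600
    print(round(period_hours, 1))               # ~47.9, i.e. roughly 48 hours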
Immediately after launch, XMM began its Launch and Early Orbit phase of operations. [ 21 ] On 17 and 18 December 1999, the X-ray modules and Optical Monitor doors were opened, respectively. [ 22 ] Instrument activation started on 4 January 2000, [ 2 ] and the Instrument Commissioning phase began on 16 January. [ 23 ] The Optical Monitor (OM) attained first light on 5 January, the two European Photon Imaging Camera (EPIC) MOS - CCDs followed on 16 January and the EPIC pn -CCD on 22 January, and the Reflection Grating Spectrometers (RGS) saw first light on 2 February. [ 23 ] On 3 March, the Calibration and Performance Validation phase began, [ 2 ] and routine science operations began on 1 June. [ 23 ]
During a press conference on 9 February 2000, ESA presented the first images taken by XMM and announced that a new name had been chosen for the spacecraft. Whereas the program had formally been known as the High Throughput X-ray Spectroscopy Mission, the new name would reflect the nature of the program and the originator of the field of spectroscopy. Explaining the new name of XMM-Newton , Roger Bonnet, ESA's former Director of Science, said, "We have chosen this name because Sir Isaac Newton was the man who invented spectroscopy and XMM is a spectroscopy mission." He noted that because Newton is synonymous with gravity and one of the goals of the satellite was to locate large numbers of black hole candidates, "there was no better choice than XMM-Newton for the name of this mission." [ 24 ]
Including all construction, spacecraft launch, and two years of operation, the project was accomplished within a budget of €689 million (1999 conditions). [ 13 ] [ 14 ]
The spacecraft has the ability to lower the operating temperature of both the EPIC and RGS cameras, a function that was included to counteract the deleterious effects of ionising radiation on the camera pixels . In general, the instruments are cooled to reduce the amount of dark current within the devices. During the night of 3–4 November 2002, RGS-2 was cooled from its initial temperature of −80 °C (−112 °F) down to −113 °C (−171 °F), and a few hours later to −115 °C (−175 °F). After analysing the results, it was determined the optimal temperature for both RGS units would be −110 °C (−166 °F), and during 13–14 November, both RGS-1 and RGS-2 were set to this level. During 6–7 November, the EPIC MOS-CCD detectors were cooled from their initial operating temperature of −100 °C (−148 °F) to a new setting of −120 °C (−184 °F). After these adjustments, both the EPIC and RGS cameras showed dramatic improvements in quality. [ 25 ]
On 18 October 2008, XMM-Newton suffered an unexpected communications failure, during which time there was no contact with the spacecraft. While some concern was expressed that the vehicle may have suffered a catastrophic event, photographs taken by amateur astronomers at the Starkenburg Observatory in Germany and at other locations worldwide showed that the spacecraft was intact and appeared on course. A weak signal was finally detected using a 35-metre (115 ft) antenna in New Norcia, Western Australia , and communication with XMM-Newton suggested that the spacecraft's Radio Frequency switch had failed. After troubleshooting a solution, ground controllers used NASA 's 34 m (112 ft) antenna at the Goldstone Deep Space Communications Complex to send a command that changed the switch to its last working position. ESA stated in a press release that on 22 October, a ground station at the European Space Astronomy Centre (ESAC) made contact with the satellite, confirming the process had worked and that the satellite was back under control. [ 26 ] [ 27 ] [ 28 ]
Because of the spacecraft's good health and the significant returns of data, XMM-Newton has received several mission extensions by ESA's Science Programme Committee. The first extension came during November 2003 and extended operations through March 2008. [ 29 ] The second extension was approved in December 2005, extending work through March 2010. [ 30 ] A third extension was passed in November 2007, which provided for operations through 2012. As part of the approval, it was noted that the satellite had enough on-board consumables (fuel, power and mechanical health) to theoretically continue operations past 2017. [ 31 ] The fourth extension in November 2010 approved operations through 2014. [ 32 ] A fifth extension was approved in November 2014 and affirmed in November 2016, continuing operations through 2018. [ 33 ] [ 34 ] A sixth extension was approved in December 2017, continuing operations through the end of 2020. [ 35 ] A seventh extension was approved in November 2018, continuing operations through the end of 2022. [ 36 ] An eighth extension was approved in March 2023, continuing operations through the end of 2026, with indicative extension up to 2029. [ 5 ]
XMM-Newton is a 10.8-metre (35 ft) long space telescope, and is 16.16 m (53 ft) wide with solar arrays deployed. At launch it weighed 3,764 kilograms (8,298 lb). [ 2 ] The spacecraft is stabilised in three axes, which allows it to aim at a target with an accuracy of 0.25 to 1 arcsecond . This stabilisation is achieved through the use of the spacecraft's Attitude & Orbit Control Subsystem . These systems also allow the spacecraft to point at different celestial targets, and can turn the craft at a maximum of 90 degrees per hour. [ 11 ] [ 24 ] The instruments on board XMM-Newton are three European Photon Imaging Cameras (EPIC), two Reflection Grating Spectrometers (RGS), and an Optical Monitor.
The spacecraft is roughly cylindrical in shape, and has four major components. At the fore of the spacecraft is the Mirror Support Platform , which supports the X-ray telescope assemblies and grating systems, the Optical Monitor, and two star trackers . Surrounding this component is the Service Module , which carries various spacecraft support systems: computer and electric busses , consumables (such as fuel and coolant ), solar arrays , the Telescope Sun Shield, and two S-band antennas. Behind these units is the Telescope Tube , a 6.8-metre (22 ft) long, hollow carbon fibre structure which provides exact spacing between the mirrors and their detection equipment. This section also hosts outgassing equipment on its exterior, which helps remove any contaminants from the interior of the satellite. At the aft end of spacecraft is the Focal Plane Assembly , which supports the Focal Plane Platform (carrying the cameras and spectrometers) and the data-handling, power distribution, and radiator assemblies. [ 37 ]
The three European Photon Imaging Cameras (EPIC) are the primary instruments aboard XMM-Newton . The system is composed of two MOS – CCD cameras and a single pn -CCD camera, with a total field of view of 30 arcminutes and an energy sensitivity range between 0.15 and 15 keV ( 82.7 to 0.83 ångströms ). Each camera contains a six-position filter wheel , with three types of X-ray-transparent filters, a fully open and a fully closed position; each also contains a radioactive source used for internal calibration. The cameras can be independently operated in a variety of modes, depending on the image sensitivity and speed needed, as well as the intensity of the target. [ 38 ] [ 39 ] [ 40 ]
The two MOS-CCD cameras are used to detect low-energy X-rays. Each camera is composed of seven silicon chips (one in the centre and six circling it), with each chip containing a matrix of 600 × 600 pixels , giving the camera a total resolution of about 2.5 megapixels . As discussed above , each camera has a large adjacent radiator which cools the instrument to an operating temperature of −120 °C (−184 °F). They were developed and built by the University of Leicester Space Research Centre and EEV Ltd . [ 25 ] [ 38 ] [ 40 ]
The pn-CCD camera is used to detect high-energy X-rays, and is composed of a single silicon chip with twelve individual embedded CCDs. Each CCD is 64 × 189 pixels, for a total capacity of 145,000 pixels. At the time of its construction, the pn-CCD camera on XMM-Newton was the largest such device ever made, with a sensitive area of 36 cm 2 (5.6 sq in). A radiator cools the camera to −90 °C (−130 °F). This system was made by the Astronomisches Institut Tübingen , the Max Planck Institute for Extraterrestrial Physics , and PNSensor, all of Germany. [ 38 ] [ 41 ] [ 42 ]
The EPIC system records three types of data about every X-ray that is detected by its CCD cameras. The time that the X-ray arrives allows scientists to develop light curves , which projects the number of X-rays that arrive over time and shows changes in the brightness of the target. Where the X-ray hits the camera allows for a visible image to be developed of the target. The amount of energy carried by the X-ray can also be detected and helps scientists to determine the physical processes occurring at the target, such as its temperature, its chemical make-up, and what the environment is like between the target and the telescope. [ 43 ]
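Conceptually, turning a list of photon arrival times into a light curve is a matter of binning counts over time; the toy sketch below illustrates the idea (it is not the actual XMM-Newton analysis pipeline, and the event times are simulated):

    import numpy as np

    rng = np.random.default_rng(0)
    events = np.sort(rng.uniform(0, 1000, size=5000))  # simulated arrival times (s)

    bin_width = 10.0                                   # seconds per light-curve bin
    edges = np.arange(0, 1000 + bin_width, bin_width)
    counts, _ = np.histogram(events, bins=edges)
    rate = counts / bin_width                          # count rate per bin (counts/s)
    print(rate[:5])                                    # brightness versus time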
The Reflection Grating Spectrometers (RGS) are composed of two Focal Plane Cameras and their associated Reflection Grating Arrays. This system is used to build X-ray spectral data and can determine the elements present in the target, as well as the temperature, quantity and other characteristics of those elements. The RGS system operates in the 2.5 to 0.35 keV ( 5 to 35 ångström ) range, which allows detection of carbon, nitrogen, oxygen, neon, magnesium, silicon and iron. [ 44 ] [ 45 ]
The Focal Plane Cameras each consist of nine MOS-CCD devices mounted in a row and following a curve called a Rowland circle . Each CCD contains 384 × 1024 pixels, for a total resolution of more than 3.5 megapixels. The total width and length of the CCD array was dictated by the size of the RGS spectrum and the wavelength range, respectively. Each CCD array is surrounded by a relatively massive wall, providing heat conduction and radiation shielding. Two-stage radiators cool the cameras to an operating temperature of −110 °C (−166 °F). The camera systems were a joint effort between SRON , the Paul Scherrer Institute , and MSSL , with EEV Ltd and Contraves Space providing hardware. [ 25 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ]
The Reflection Grating Arrays are attached to two of the primary telescopes. They allow approximately 50% of the incoming X-rays to pass unperturbed to the EPIC system, while redirecting the other 50% onto the Focal Plane Cameras. Each RGA was designed to contain 182 identical gratings, though a fabrication error left one with only 181. Because the telescope mirrors have already focused the X-rays to converge at the focal point, each grating has the same angle of incidence, and as with the Focal Plane Cameras, each grating array conforms to a Rowland circle. This configuration minimises focal aberrations. Each 10 × 20 cm (4 × 8 in) grating is composed of 1 mm (0.039 in) thick silicon carbide substrate covered with a 2,000- ångström (7.9 × 10 −6 in) gold film, and is supported by five beryllium stiffeners. The gratings contain a large number of grooves, which actually perform the X-ray deflection; each grating contains an average of 646 grooves per millimetre. The RGAs were built by Columbia University . [ 44 ] [ 45 ]
The Optical Monitor (OM) is a 30 cm (12 in) Ritchey–Chrétien optical/ultraviolet telescope designed to provide simultaneous observations alongside the spacecraft's X-ray instruments. The OM is sensitive between 170 and 650 nanometres in a 17 × 17 arcminute square field of view co-aligned with the centre of the X-ray telescope's field of view. It has a focal length of 3.8 m (12 ft) and a focal ratio of ƒ/12.7. [ 48 ] [ 49 ]
The instrument is composed of the Telescope Module, containing the optics, detectors, processing equipment, and power supply; and the Digital Electronics Module, containing the instrument control unit and data processing units. Incoming light is directed into one of two fully redundant detector systems. The light passes through an 11-position filter wheel (one opaque to block light, six broad band filters, one white light filter, one magnifier, and two grisms ), then through an intensifier which amplifies the light by one million times, then onto the CCD sensor. The CCD is 384 × 288 pixels in size, of which 256 × 256 pixels are used for observations; each pixel is further subsampled into 8 × 8 pixels, resulting in a final product that is 2048 × 2048 in size. The Optical Monitor was built by the Mullard Space Science Laboratory with contributions from organisations in the United States and Belgium. [ 48 ] [ 49 ]
Feeding the EPIC and RGS systems are three telescopes designed specifically to direct X-rays into the spacecraft's primary instruments. The telescope assemblies each have a diameter of 90 cm (35 in), are 250 cm (98 in) in length, and have a base weight of 425 kg (937 lb). The two telescopes with Reflection Grating Arrays weigh an additional 20 kg (44 lb). Components of the telescopes include (from front to rear) the mirror assembly door, entrance and X-ray baffles , mirror module, electron deflector, a Reflection Grating Array in two of the assemblies, and exit baffle. [ 13 ] [ 50 ] [ 51 ] [ 52 ]
Each telescope consists of 58 cylindrical, nested Wolter Type-1 mirrors developed by Media Lario of Italy, each 600 mm (24 in) long and ranging in diameter from 306 to 700 mm (12.0 to 27.6 in), producing a total collecting area of 4,425 cm 2 (686 sq in) at 1.5 keV and 1,740 cm 2 (270 sq in) at 8 keV. [ 2 ] The mirrors range from 0.47 mm (0.02 in) thick for the innermost mirror to 1.07 mm (0.04 in) thick for the outermost mirror, and the separation between each mirror ranges from 1.5 to 4 mm (0.06 to 0.16 in) from innermost to outermost. [ 2 ] Each mirror was built by vapour-depositing a 250 nm layer of gold reflecting surface onto a highly polished aluminium mandrel , followed by electroforming a monolithic nickel support layer onto the gold. The finished mirrors were glued into the grooves of an Inconel spider, which keeps them aligned to within the five-micron tolerance required to achieve adequate X-ray resolution. The mandrels were manufactured by Carl Zeiss AG , and the electroforming and final assembly were performed by Media Lario with contributions from Kayser-Threde . [ 53 ]
Spacecraft three-axis attitude control is handled by the Attitude & Orbit Control System (AOCS), composed of four reaction wheels , four inertial measurement units , two star trackers , three fine Sun sensors , and three Sun acquisition sensors. The AOCS was provided by Matra Marconi Space of the United Kingdom. [ 2 ] [ 54 ] [ 55 ]
Coarse spacecraft orientation and orbit maintenance is provided by two sets of four 20- newton (4.5 lb f ) hydrazine thrusters (primary and backup). [ 2 ] The hydrazine thrusters were built by DASA-RI of Germany. [ 56 ]
The AOCS was upgraded in 2013 with a software patch ('4WD') to control attitude using the three primary reaction wheels plus the fourth, spare wheel, unused since launch, with the aim of saving propellant to extend the spacecraft's lifetime. [ 57 ] [ 58 ] In 2019 the fuel was predicted to last until 2030. [ 59 ]
Primary power for XMM-Newton is provided by two fixed solar arrays. The arrays are composed of six panels measuring 1.81 × 1.94 m (5.9 × 6.4 ft) for a total of 21 m 2 (230 sq ft) and a mass of 80 kg (180 lb). At launch, the arrays provided 2,200 W of power, and were expected to provide 1,600 W after ten years of operation. Deployment of each array took four minutes. The arrays were provided by Fokker Space of the Netherlands. [ 2 ] [ 60 ]
When direct sunlight is unavailable, power is provided by two nickel–cadmium batteries providing 24 A·h and weighing 41 kg (90 lb) each. The batteries were provided by SAFT of France. [ 2 ] [ 60 ]
The cameras are accompanied by the EPIC Radiation Monitor System (ERMS), which measures the radiation environment surrounding the spacecraft; specifically, the ambient proton and electron flux. This provides warning of damaging radiation events to allow for automatic shut-down of the sensitive camera CCDs and associated electronics. The ERMS was built by the Centre d'Etude Spatiale des Rayonnements of France. [ 13 ] [ 38 ] [ 40 ]
The Visual Monitoring Cameras (VMC) on the spacecraft were added to monitor the deployment of the solar arrays and the sun shield, and have additionally provided images of the thrusters firing and outgassing of the Telescope Tube during early operations. Two VMCs were installed on the Focal Plane Assembly looking forward. The first is FUGA-15, a black and white camera with high dynamic range and 290 × 290 pixel resolution. The second is IRIS-1, a colour camera with a variable exposure time and 400 × 310 pixel resolution. Both cameras measure 6 × 6 × 10 cm (2.4 × 2.4 × 3.9 in) and weigh 430 g (15 oz). They use active pixel sensors , a technology that was new at the time of XMM-Newton 's development. The cameras were developed by OIC–Delft and IMEC , both of Belgium. [ 56 ] [ 61 ]
XMM-Newton mission control is located at the European Space Operations Centre (ESOC) in Darmstadt , Germany. Two ground stations , located in Perth and Kourou , are used to maintain continuous contact with the spacecraft through most of its orbit. Back-up ground stations are located in Villafranca del Castillo , Santiago , and Dongara . Because XMM-Newton contains no on-board data storage, science data is transmitted to these ground stations in real time. [ 20 ]
Data is then forwarded to the European Space Astronomy Centre 's Science Operations Centre in Villafranca del Castillo, Spain, where pipeline processing has been performed since March 2012. Data is archived at the ESAC Science Data Centre, [ 62 ] and distributed to mirror archives at the Goddard Space Flight Center and the XMM-Newton Survey Science Centre (SSC) at the Institut de Recherche en Astrophysique et Planétologie . Prior to June 2013, the SSC was operated by the University of Leicester , but operations were transferred due to a withdrawal of funding by the United Kingdom. [ 16 ] [ 63 ]
The space observatory was used to discover the galaxy cluster XMMXCS 2215-1738 , 10 billion light years away from Earth. [ 64 ]
The object SCP 06F6 , discovered by the Hubble Space Telescope (HST) in February 2006, was observed by XMM-Newton in early August 2006 and appeared to show an X-ray glow around it [ 65 ] two orders of magnitude more luminous than that of supernovae . [ 66 ]
In June 2011, a team from the University of Geneva , Switzerland , reported that XMM-Newton had seen a flare lasting four hours, at a peak intensity of 10,000 times the normal rate, during an observation of the Supergiant Fast X-ray Transient IGR J18410-0535 , in which a blue supergiant star shed a plume of matter that was partly ingested by a smaller companion neutron star , with accompanying X-ray emissions. [ 67 ] [ 68 ]
In February 2013 it was announced that XMM-Newton along with NuSTAR have for the first time measured the spin rate of a supermassive black hole , by observing the black hole at the core of galaxy NGC 1365 . At the same time, it verified the model that explains the distortion of X-rays emitted from a black hole. [ 69 ] [ 70 ]
In February 2014, separate analyses extracted from the spectrum of X-ray emissions observed by XMM-Newton a monochromatic signal at around 3.5 keV. [ 71 ] [ 72 ] This signal comes from different galaxy clusters , and several dark matter scenarios can justify such a line: for example, a 3.5 keV candidate annihilating into two photons, [ 73 ] or a 7 keV dark matter particle decaying into a photon and a neutrino. [ 74 ]
In June 2021, one of the largest X-ray surveys using the European Space Agency's XMM-Newton space observatory published initial findings, mapping the growth of 12,000 supermassive black holes at the cores of galaxies and galaxy clusters. [ 75 ]
|
https://en.wikipedia.org/wiki/XMM-Newton
|
The XMM Cluster Survey ( XCS ) is a serendipitous X-ray galaxy cluster survey being conducted using archival data taken by ESA’s XMM-Newton satellite. Galaxy clusters trace the large scale structure of the universe , and their number density evolution with redshift provides a way to measure cosmological parameters, independent of cosmic microwave background experiments or supernovae cosmology projects. [ 1 ]
The collaboration is based in the United Kingdom, where the majority of its researchers are based, but it also has members across Europe and across the Atlantic.
The XCS collaboration have detected 503 clusters serendipitously in XMM-Newton observations. [ 2 ]
|
https://en.wikipedia.org/wiki/XMM_Cluster_Survey
|
XMPP Standards Foundation ( XSF ) is the foundation in charge of standardizing the protocol extensions of XMPP , the IETF 's open standard for instant messaging and presence .
The XSF was originally called the Jabber Software Foundation ( JSF ). The Jabber Software Foundation was originally established to provide an independent, non-profit, legal entity to support the development community around Jabber technologies (and later XMPP). Originally its main focus was on developing JOSL, the Jabber Open Source License [ 1 ] (since deprecated), and an open standards process for documenting the protocols used in the Jabber/XMPP developer community. Its founders included Michael Bauer and Peter Saint-Andre .
Members of the XSF vote on acceptance of new members, a technical Council, and a Board of Directors. However, membership is not required to publish, view, or comment on the standards that it promulgates. The unit of work at the XSF is the XMPP Extension Protocol (XEP); XEP-0001 [ 2 ] specifies the process for XEPs to be accepted by the community. Most of the work of the XSF takes place on the XMPP Extension Discussion List, [ 3 ] and in the jdev and xsf chat rooms . [ 4 ]
The Board of Directors [ 5 ] of the XMPP Standards Foundation oversees the business affairs of the organization. As elected by the XSF membership, the Board of Directors for 2020-2021 consists of the following individuals:
The XMPP Council [ 6 ] is the technical steering group that approves XMPP Extension Protocols, as governed by the XSF Bylaws and XEP-0001 . The Council is elected by the members of the XMPP Standards Foundation each year in September. The XMPP Council (2020–2021) consists of the following individuals:
There are currently 66 elected members [ 7 ] of the XSF.
The following individuals are emeritus members of the XMPP Standards Foundation:
One of the most important outputs of the XSF is a series [ 8 ] of "XEPs", or XMPP Extension Protocols, auxiliary protocols defining additional features. Some have chosen to pronounce "XEP" as if it were spelled "JEP", rather than "ZEP", in order to keep with a sense of tradition. Some XEPs of note include:
The XSF holds a biannual XMPP Summit where software and protocol developers from all around the world meet, share ideas and discuss topics around the XMPP protocol and the XEPs. In winter it takes place around the FOSDEM event in Brussels, Belgium, and in summer around the RealtimeConf event in Portland, USA. These meetings are open to anyone and focus on discussing both technical and non-technical issues that the XSF members wish to raise, with no cost to participants; however, the XSF is open to donations. The first XMPP Summit took place on July 24 and 25, 2006, in Portland. [ 18 ]
|
https://en.wikipedia.org/wiki/XMPP_Standards_Foundation
|
The XO Project is an international team of amateur and professional astronomers tasked with identifying extrasolar planets. They are led by Peter R. McCullough of the Space Telescope Science Institute . [ 1 ] It is primarily funded by NASA's Origins Program and the Director's Discretionary Fund of the Space Telescope Science Institute. [ 2 ] [ 3 ]
Preliminary identification of possible star candidates starts at the Haleakala telescope in Hawaii by a team of professional astronomers. Once they identify a star that dims slightly from time to time (the transit method ), the information is forwarded to a team of amateur astronomers who then investigate for additional evidence suggesting this dimming is caused by a transiting planet. Once enough data is collected, it is forwarded to the University of Texas McDonald Observatory to confirm the presence of a transiting planet by a second team of professional astronomers. [ 2 ]
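At its simplest, the transit method looks for small periodic dips in a star's measured brightness; the toy sketch below flags dip candidates in a simulated flux series (real searches fit transit models and treat noise far more carefully):

    import numpy as np

    rng = np.random.default_rng(1)
    flux = np.ones(1000)                             # normalised stellar brightness
    flux[::200] -= 0.01                              # a 1% dip every 200th sample
    flux += rng.normal(0, 0.001, size=flux.size)     # photometric noise

    threshold = np.median(flux) - 5 * np.std(flux)   # crude dip criterion
    print(np.where(flux < threshold)[0])             # candidate samples to follow up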
McCullough and his team employed a relatively inexpensive telescope called the XO Telescope , made from commercial equipment, to search for extrasolar planets. The construction of the one-of-a-kind telescope cost $60,000 for the hardware, and much more than that for the associated software. [ 4 ] The telescope consists of two 200-millimeter telephoto camera lenses and resembles binoculars in shape. It is similar to the TrES survey telescope. It stands at 3,054 m (10,000 ft) on the summit of the Haleakalā volcano in Hawaii . [ 1 ] Their first discovery, the Jupiter-sized planet XO-1b orbiting a Sun-like star 600 light-years from Earth in the constellation Corona Borealis , was reported on May 16, 2006, on Newswise .
In 2016 three similar double telescopes were operating, two in Spain and one in Utah. [ 5 ]
The XO telescope has discovered six objects so far: five are hot Jupiter planets, and one, XO-3b , may be a brown dwarf .
A subset of XO light curves are available at the NASA Exoplanet Archive .
|
https://en.wikipedia.org/wiki/XO_Project
|
XPLM Publisher is a commercial authoring and publishing software developed and sold by XPLM . It combines the Oxygen XML Editor with Oracle Agile PLM and helps technical writers to create, manage, and publish technical product documentation (e.g., user guides ) in various formats and layouts. XPLM Publisher follows the Darwin Information Typing Architecture (DITA) and the methods of single source publishing , which allows the same source content to be reused across multiple forms of media and more than once. [ 1 ]
The main aspect of XPLM Publisher is its tight integration with a PLM system as the content management system and " single source of truth ". Technical writers are thus provided only with valid, released engineering information. [ 2 ] [ 3 ]
This software article is a stub . You can help Wikipedia by expanding it .
|
https://en.wikipedia.org/wiki/XPLM_Publisher
|
xPL is an open protocol intended to permit the control and monitoring of home automation devices. The primary design goal of xPL is to provide a rich set of features and functionality whilst maintaining an elegant, uncomplicated message structure. The protocol includes complete discovery and auto-configuration capabilities which support a fully "plug-n-play" architecture, essential to ensure a good end-user experience.
xPL benefits from a strongly specified message structure, required to ensure that xPL-enabled devices from different vendors are able to communicate without the risk of incompatibilities. [ 1 ]
Communications between xPL applications on a Local Area Network (LAN) use UDP on port 3865 . [ 2 ]
xPL development has primarily occurred in the DIY community, where users have written connecting software to existing protocols and devices. Some examples include bridges to other home automation protocols like Z-Wave [ 3 ] and UPB . [ 4 ] Commercially, the Logitech SqueezeCenter software for the Squeezebox supports xPL. [ 5 ]
Different devices communicate using xPL within a local network. They all broadcast their messages on the IANA-registered UDP port 3865 for the other devices to handle. Because on modern operating systems only one program can listen on a given port, a hub is needed to forward messages to all xPL applications on the same machine; the hub is therefore the first xPL component required on a machine running xPL devices. Each device registers with the hub on a private UDP port, and the hub then forwards all incoming messages to these private ports. All devices send a heartbeat message to the hub on a regular basis (typically every 5 minutes). When disconnecting, a device can also send a special heartbeat-end message so that the hub removes it from its list. The hub forwards all messages to every device in its list; there is no filtering of messages, just a blind redistribution of all of them.
Applications add functionality to a home automation solution, such as light control, sunrise/sunset events, weather information and so on.
A device chooses a free UDP port and sends heartbeat messages from that port to the hub on the IANA-registered UDP port 3865. From then on, the device listens for messages on its private port but sends its own messages as broadcasts on the xPL port 3865.
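The following minimal Python sketch illustrates this handshake (a sketch only: the vendor-device.instance name acme-demo.instance1 is hypothetical, and the heartbeat body assumes the standard hbeat.app schema). The device binds a free private port and broadcasts periodic heartbeats to the xPL port:

```python
import socket
import time

XPL_PORT = 3865  # IANA-registered xPL port

# Bind to a free private port; the OS picks one when we bind to port 0.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("", 0))
private_port = sock.getsockname()[1]

# hbeat.app heartbeat advertising the private port this device listens on.
heartbeat = (
    "xpl-stat\n"
    "{\n"
    "hop=1\n"
    "source=acme-demo.instance1\n"   # hypothetical vendor-device.instance name
    "target=*\n"
    "}\n"
    "hbeat.app\n"
    "{\n"
    "interval=5\n"
    f"port={private_port}\n"
    "}\n"
)

while True:
    sock.sendto(heartbeat.encode("ascii"), ("255.255.255.255", XPL_PORT))
    time.sleep(5 * 60)  # heartbeat interval: typically 5 minutes
```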
The message types are one of the following: commands (xpl-cmnd), status messages (xpl-stat) and triggers (xpl-trig).
An extensive list of applications can be downloaded from the net. Toolkits are also provided for users wishing to develop their own devices.
The network protocol is assumed to be UDP/IP, but this is by no means a requirement. For an xPL message to cross from one transport medium to another (UDP/IP to RS-232, for example), a bridge is needed.
On Windows, xPL HAL processes incoming xPL messages and executes scripts to perform a wide variety of tasks. Configuration is done either through a Windows-based manager or via a browser. xPL HAL also includes an xPL Configuration Manager.
On Linux or Mac OS, xpl-central monitors all xPL messages and can trigger other messages based on a set of rules stored in an XML file.
The xPL protocol can operate over a variety of transmission media, including Ethernet , RS232 and RS485.
All xPL devices broadcast their messages over UDP , on IANA-registered port 3865. Because only one application at a time can listen on a given port, the xPL protocol uses a hub to retransmit all broadcast messages to the different applications on the same machine. Applications subscribe to the hub by sending heartbeat messages that specify the free port they are listening on; in turn, the hub forwards every xPL broadcast message it receives to each application in its list.
Lite on the wire, by design
xPL Messages are line based, with each line ending with a linefeed (ASCII: 10 decimal) character.
The following is an example of a typical xPL message (a heartbeat status message using the hbeat.app schema; the source name and values are illustrative):
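```
xpl-stat
{
hop=1
source=acme-demo.instance1
target=*
}
hbeat.app
{
interval=5
port=50000
remote-ip=192.168.1.10
}
```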
All messages are made out of a message type identifier (here xpl-stat), a header block between braces (containing the hop count, the source name and the target name), and a message body consisting of a schema class and type (here hbeat.app) followed by name=value pairs between braces.
In the header block, the target name is replaced by the wildcard symbol "*" for broadcast messages.
This is the case for trigger and status messages.
xPL uses well defined message schemas to ensure that applications from different vendors can interact sensibly. Message Schemas are extensible, and define not only the elements which should be present in a message, but also the order in which they appear.
This allows simple devices and applications to parse messages more easily.
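As an illustration of how little code the line-based structure requires, the following Python sketch (a minimal illustration, not a reference implementation; it assumes a well-formed message) splits a message into its type, header, schema and body:

```python
def parse_xpl(message: str):
    """Parse an xPL message into (msg_type, header, schema, body).

    Assumes a well-formed message of the shape
    <type> { k=v ... } <schema.class> { k=v ... } with linefeed line endings.
    """
    lines = [ln for ln in message.split("\n") if ln]
    msg_type = lines[0]                     # xpl-cmnd, xpl-stat or xpl-trig

    def read_block(start):
        # lines[start] is "{"; collect k=v pairs until the closing "}"
        pairs, i = {}, start + 1
        while lines[i] != "}":
            key, _, value = lines[i].partition("=")
            pairs[key] = value
            i += 1
        return pairs, i + 1                 # position just after "}"

    header, pos = read_block(1)
    schema = lines[pos]                     # e.g. "hbeat.app"
    body, _ = read_block(pos + 1)
    return msg_type, header, schema, body
```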
All of the existing message schemas can be found on the xPL project home page .
Developers looking to create a new schema are invited to do so. [ 7 ]
|
https://en.wikipedia.org/wiki/XPL_Protocol
|
XPath 3 is the latest version of the XML Path Language , a query language for selecting nodes in XML documents. It supersedes XPath 1.0 and XPath 2.0 .
XPath 3.0 became a W3C Recommendation on 8 April 2014, while XPath 3.1 became a W3C Recommendation on 21 March 2017.
Compared to XPath 2.0 , XPath 3.0 adds the following new features:
XPath 3.1 mainly adds support for array and map ( associative array ) data types. These types and their associated functionality are intended to ease working with JSON data.
Another innovation is the arrow operator => for function chaining, which allows a nested function call to be rewritten as a left-to-right pipeline, as illustrated below.
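A minimal illustration (the variable $s and the particular functions are arbitrary choices): the nested XPath 2.0 call

```
tokenize(normalize-space(upper-case($s)), "\s+")
```

can be written in XPath 3.1 as

```
$s => upper-case() => normalize-space() => tokenize("\s+")
```

Each => passes the value on its left as the first argument of the function on its right.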
|
https://en.wikipedia.org/wiki/XPath_3
|
The X-ray Polarimeter Satellite ( XPoSat ) is an Indian Space Research Organisation (ISRO)-manufactured space observatory to study polarisation of cosmic X-rays . It was launched on 1 January 2024 on a PSLV rocket, [ 8 ] and it has an expected operational lifespan of at least five years. [ 9 ] [ 10 ]
The telescope was developed by the Raman Research Institute (RRI) in close collaboration with U R Rao Satellite Centre (URSC). [ 11 ] Per ISRO, this mission will complement the efforts of the US space agency NASA , which launched its Imaging X-ray Polarimetry Explorer (IXPE) in 2021, by observing space events across a broad energy range of 2–30 keV. [ 12 ] [ 13 ]
Studying how radiation is polarised reveals the nature of its source, including the strength and distribution of its magnetic fields and the nature of other radiation around it. XPoSat will study the 50 apparently brightest known sources, consisting of, variously, pulsars , black hole X-ray binaries , active galactic nuclei , neutron stars and non-thermal supernova remnants. [ 9 ] [ 14 ] The observatory was placed in a circular low Earth orbit of 500–700 km (310–430 mi). [ 9 ] [ 2 ] The payloads onboard XPoSat observe X-ray sources while the satellite transits through the Earth's eclipse period. [ 15 ]
The XPoSat project began in September 2017 with an Indian Space Research Organisation (ISRO) grant of ₹95,000,000. The Preliminary Design Review (PDR) of XPoSat, including the POLIX payload, was completed in September 2018, followed by preparation of the POLIX Qualification Model and the beginning of fabrication of some of its Flight Model components. [ 16 ] [ 17 ]
XPoSat was successfully launched aboard PSLV-C58 on 1 January 2024 at 9:10 am IST. The launch was precise, with a deviation of only about ±3 km. Following the launch, the final fourth stage of the PSLV was lowered to a 350 × 350 km orbit to facilitate its use as the PSLV Orbital Experimental Module, POEM-3. [ 18 ] [ 19 ]
The XSPECT payload on XPoSat captured its first light on 5 January 2024 from Cassiopeia A (Cas A), a supernova remnant somewhat over 11,000 light-years away. During its performance verification phase, XSPECT was directed towards this standard celestial source used for instrument evaluation, which is among the brightest radio-frequency sources in the sky. The observation captured the supernova remnant's emission lines corresponding to elements such as magnesium , silicon , sulphur , argon , calcium , and iron . [ 20 ] [ 21 ]
XPoSat's POLIX sensor was gradually turned on beginning 10 January 2024 and has started making scientific observations, including the first-ever X-ray polarisation data for the Crab Pulsar , its first subject. The observation, which verified the POLIX instrument's operation, took place between 15 and 18 January 2024. POLIX monitored this fast-spinning neutron star in the Crab Nebula , which releases roughly thirty X-ray pulses per second. By detecting the polarisation of its incoming X-rays, POLIX provides fresh perspectives on the physical emission processes at the surface of neutron stars. [ 22 ] [ 23 ]
In response to a massive solar flare in May 2024, XPoSat, along with Aditya-L1 and the Chandrayaan-2 orbiter, collected data on the event. XSPECT was used in conjunction with data from ground-based observatories to provide fast-timed and good spectroscopic results in the X-ray spectrum. [ 24 ]
On 19 March 2025, the XSPECT instrument detected a rare thermonuclear "burst" peaking in just a few seconds and fading over about 20 seconds, followed about 16 minutes later by a much longer and more powerful event called a "superburst" from a neutron star system named 4U 1608−52, located about 4,000 light-years from Earth. XSPECT's detailed observations showed the neutron star's surface temperature during the bursts reached around 20 million kelvin, with a radius close to 9.3 kilometres. The data also suggest special processes like Compton scattering might be involved in the superburst's high brightness and slow fade. The superburst was also observed by the MAXI experiment on the ISS . [ 25 ]
Two payloads of XPoSat are hosted on a modified IMS-2 satellite bus . [ 9 ] The primary scientific payload is the Polarimeter Instrument in X-rays (POLIX), which studies the degree and angle of polarisation of about 50 of the brightest known astronomical X-ray sources of different types during its mission, in the energy range 8–30 keV . [ 2 ] [ 26 ] POLIX, a 125 kg (276 lb) instrument, [ 9 ] was developed by the Raman Research Institute . [ 14 ] [ 2 ] [ 26 ] [ 27 ]
POLIX is the primary scientific payload aboard XPoSat. It is a Thomson X-ray polarimeter, which measures the degree and angle of polarization (polarimetry parameters) of astronomical sources in the medium X-ray range (8-30 keV). [ 28 ] It has been developed by Raman Research Institute .
Its science objectives are to measure: [ 27 ]
The experiment configuration consists of a collimator , central low Z (lithium, lithium hydride or beryllium ) scatterer surrounded by xenon filled four X-ray proportional counters as X-ray detectors which collects the scattered X-ray photons. [ 11 ] The instrument is rotated along the viewing axis leading to the measurement of the azimuthal distribution of the scattered X-ray photons which gives information on polarisation. Polarised X-rays will produce an azimuthal modulation in the count rate as opposed to uniform azimuthal distribution of count rate for unpolarised X-rays. POLIX has four independent detectors, each with its own front end and processing electronics. Localization of the X-ray photon in the detectors is carried out by the method of charge division in a set of resistive anode wires connected in series.
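The polarisation measurement rests on a standard polarimetry relation (a generic textbook form, not notation specific to the POLIX team): polarised X-rays produce an azimuthal count-rate modulation N ( φ ) = N 0 [ 1 + a cos 2 ( φ − φ 0 ) ] {\displaystyle N(\phi )=N_{0}\left[1+a\cos 2(\phi -\phi _{0})\right]} , where the amplitude a and phase φ 0 {\displaystyle \phi _{0}} encode the polarisation, and the degree of polarisation follows as P = a / μ 100 {\displaystyle P=a/\mu _{100}} , with μ 100 {\displaystyle \mu _{100}} the modulation factor measured for a fully polarised beam.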
The prime objects for observation with this instrument are the X-ray bright accretion powered neutron stars , accreting black holes in different spectral states, rotation powered pulsars, magnetars , and active galactic nuclei. This instrument bridges an energy gap in detection capability, between the soft X-ray polarimeters utilising Bragg reflection ( OSO-8 ) or Photoelectron tracks ( IXPE ), and hard X-ray polarimeters using Compton scattering such as the Cadmium Zinc Telluride Imager (CZTI) on AstroSat .
XSPECT is the secondary payload on XPoSat. It provides spectroscopic and timing information on soft X-rays from different types of astrophysical sources. [ 29 ] [ 12 ] XSPECT is designed to pursue timing studies of soft X-rays (0.8–15 keV), [ 28 ] complementary to what the Large Area X-ray Proportional Counter (LAXPC) does at high energies on AstroSat, while simultaneously providing adequate spectral resolution in the 1–20 keV band. It has an energy resolution of <200 eV at 5.9 keV (at −20 °C) and a timing resolution of ~2 ms. It has been developed by the Space Astronomy Group of the U R Rao Satellite Centre .
The detector achieves a modest effective area without focusing optics by using large-area Swept Charge Devices (SCDs), a variant of X-ray charge-coupled devices (CCDs). SCDs permit fast readouts (10–100 kHz) and moderately good spectral resolution at the cost of position sensitivity. These devices are unusual in requiring only very benign, passive cooling, unlike traditional X-ray CCDs.
Key science objectives of XSPECT include understanding long-term behavior of X-ray sources through correlation of timing characteristics with spectral state changes and emission line variations.
|
https://en.wikipedia.org/wiki/XPoSat
|
Excess-3 , 3-excess [ 1 ] [ 2 ] [ 3 ] or 10-excess-3 binary code (often abbreviated as XS-3 , [ 4 ] 3XS [ 1 ] or X3 [ 5 ] [ 6 ] ), shifted binary [ 7 ] or Stibitz code [ 1 ] [ 2 ] [ 8 ] [ 9 ] (after George Stibitz , [ 10 ] who built a relay-based adding machine in 1937 [ 11 ] [ 12 ] ) is a self-complementary binary-coded decimal (BCD) code and numeral system . It is a biased representation . Excess-3 code was used on some older computers as well as in cash registers and hand-held portable electronic calculators of the 1970s, among other uses.
Biased codes are a way to represent values with a balanced number of positive and negative numbers using a pre-specified number N as a biasing value. Biased codes (and Gray codes ) are non-weighted codes. In excess-3 code, numbers are represented as decimal digits, and each digit is represented by four bits as the digit value plus 3 (the "excess" amount):
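The resulting mapping (digit value plus three, written as four bits) is: 0 = 0011, 1 = 0100, 2 = 0101, 3 = 0110, 4 = 0111, 5 = 1000, 6 = 1001, 7 = 1010, 8 = 1011, 9 = 1100.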
To encode a number such as 127, one simply encodes each of the decimal digits as above, giving (0100, 0101, 1010).
Excess-3 arithmetic uses different algorithms than normal non-biased BCD or binary positional system numbers. After adding two excess-3 digits, the raw sum is excess-6. For instance, after adding 1 (0100 in excess-3) and 2 (0101 in excess-3), the sum looks like 6 (1001 in excess-3) instead of 3 (0110 in excess-3). To correct this problem, after adding two digits, it is necessary to remove the extra bias by subtracting binary 0011 (decimal 3 in unbiased binary) if the resulting digit is less than decimal 10, or subtracting binary 1101 (decimal 13 in unbiased binary) if an overflow (carry) has occurred. (In 4-bit binary, subtracting binary 1101 is equivalent to adding 0011 and vice versa.) [ 14 ]
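A minimal Python sketch of this correction rule (an illustration of the algorithm described above, not production arithmetic; operands are excess-3 digits given as 4-bit integers):

```python
def xs3_add_digit(a: int, b: int, carry_in: int = 0):
    """Add two excess-3 digits (as 4-bit values) and apply the correction.

    Returns (carry_out, corrected excess-3 sum digit).
    """
    raw = a + b + carry_in          # raw result is biased by 6 ("excess-6")
    carry = 1 if raw > 0b1111 else 0
    raw &= 0b1111                   # keep the low 4 bits, as a 4-bit adder would
    if carry:                       # decimal sum >= 10: re-bias upward
        digit = (raw + 0b0011) & 0b1111   # adding 0011 == subtracting 1101 mod 16
    else:                           # decimal sum < 10: remove the extra bias
        digit = raw - 0b0011
    return carry, digit

# 1 + 2: 0100 + 0101 = 1001 (looks like 6); corrected to 0110 = excess-3 for 3
assert xs3_add_digit(0b0100, 0b0101) == (0, 0b0110)
# 5 + 7 = 12: carry out, remaining digit 2 (0101 in excess-3)
assert xs3_add_digit(0b1000, 0b1010) == (1, 0b0101)
```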
The primary advantage of excess-3 coding over non-biased coding is that a decimal number can be nines' complemented [ 1 ] (for subtraction) as easily as a binary number can be ones' complemented : just by inverting all bits. [ 1 ] Also, when the sum of two excess-3 digits is greater than 9, the carry bit of a 4-bit adder will be set high. This works because, after adding two digits, an "excess" value of 6 results in the sum. Because a 4-bit integer can only hold values 0 to 15, an excess of 6 means that any sum over 9 will overflow (produce a carry-out).
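For example, 4 is encoded as 0111 in excess-3; inverting every bit gives 1000, which is the excess-3 code for 5, and indeed 9 − 4 = 5.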
Another advantage is that the codes 0000 and 1111 are not used for any digit. A fault in a memory or basic transmission line may result in these codes. It is also more difficult to write the zero pattern to magnetic media. [ 1 ] [ 15 ] [ 11 ]
BCD 8-4-2-1 to excess-3 conversion is a simple combinational mapping, and converters are commonly written in hardware description languages such as VHDL.
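As a software illustration of the same mapping (a sketch standing in for a hardware converter; the function name is arbitrary):

```python
def bcd_to_xs3(bcd: int) -> int:
    """Convert a single BCD 8-4-2-1 digit (0-9) to its excess-3 code."""
    if not 0 <= bcd <= 9:
        raise ValueError("not a valid BCD digit")
    return bcd + 0b0011  # excess-3 is simply the digit value plus 3

# 0 -> 0011, 9 -> 1100
assert format(bcd_to_xs3(0), "04b") == "0011"
assert format(bcd_to_xs3(9), "04b") == "1100"
```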
|
https://en.wikipedia.org/wiki/XS-3_code
|
XT9 is a text prediction and correction system for mobile devices with full keyboards, rather than the 3×4 keypad of older phones. [ 1 ] It was originally developed by Tegic Communications , now part of Nuance Communications . [ 2 ] It was originally created for devices with styluses , but is now commonly used for touch screen devices. It is a successor to T9 , a popular predictive text algorithm for mobile phones with only numeric keypads. A small XT9 label can be seen on certain phones, mainly HTC phones . [ citation needed ]
|
https://en.wikipedia.org/wiki/XT9
|
XTX is a computer-on-module (COM) standard for x86 -based embedded devices . XTX adds PCI-Express , SATA , and LPC capabilities. The standard was promulgated by Advantech Corporation, Ampro, [ 1 ] and Congatec .
|
https://en.wikipedia.org/wiki/XTX
|
XView is a widget toolkit from Sun Microsystems introduced in 1988. It provides an OPEN LOOK user interface for X Window System applications, with an object-oriented application programming interface (API) for the C programming language . Its interface, controls, and layouts are very close to that of the earlier SunView window system, making it easy to convert existing applications from SunView to X. Sun also produced the User Interface Toolkit (UIT), a C++ API to XView.
The XView source code has been freely available since the early 1990s, making it the "first open-source professional-quality X Window System toolkit". [ 1 ] XView was later abandoned by Sun in favor of Motif (the basis of CDE ), and more recently GTK+ (the basis of GNOME ).
XView was reputedly the first system to use right-button context menus , [ 1 ] which are now ubiquitous among computer user interfaces.
|
https://en.wikipedia.org/wiki/XView
|
XW10508 is an orally active prodrug of esketamine , an NMDA receptor antagonist , which is under development for the treatment of major depressive disorder and chronic pain . [ 1 ] [ 4 ] [ 2 ] [ 3 ] It is taken by mouth . [ 1 ] [ 2 ] [ 3 ]
The drug is a novel esketamine analogue and conjugate that acts as a prodrug of esketamine. [ 3 ] Esketamine, and by extension XW10508, is an NMDA receptor antagonist and indirect AMPA receptor activator. [ 1 ] [ 5 ] XW10508 is being developed as once-daily orally administered extended-release and immediate-release formulations with misuse resistance. [ 1 ] [ 3 ]
As of August 2024, XW10508 is in phase 2 clinical trials for major depressive disorder and is in phase 1 clinical trials for chronic pain. [ 1 ] [ 4 ] [ 2 ] However, no recent development has been reported for these indications. [ 1 ] The drug is being developed by XWPharma, which was previously known as XW Laboratories. [ 1 ] [ 4 ] [ 2 ] It is being developed in Australia . [ 1 ] [ 2 ] The chemical structure of XW10508 does not yet seem to have been disclosed. [ 1 ]
|
https://en.wikipedia.org/wiki/XW10508
|
The XYZ file format is a chemical file format . There is no formal standard and several variations exist, but a typical XYZ format specifies the molecule geometry by giving the number of atoms with Cartesian coordinates that will be read on the first line, a comment on the second, and the lines of atomic coordinates in the following lines. [ 1 ] The file format is used in computational chemistry programs for importing and exporting geometries. The units are generally in ångströms . Some variations include using atomic numbers instead of atomic symbols, or skipping the comment line. Files using the XYZ format conventionally have the .xyz extension.
The formatting of the .xyz file format, in its most common variant, is as follows:
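```
<number of atoms>
<comment line>
<element symbol> <X coordinate> <Y coordinate> <Z coordinate>
<element symbol> <X coordinate> <Y coordinate> <Z coordinate>
...
```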
Connectivity information in the XYZ file format is implied rather than explicit. According to the main page for XYZ (part of XMol),
Note that the XYZ format doesn't contain connectivity information. This intentional omission allows for greater flexibility: to create an XYZ file, you don't need to know where a molecule's bonds are; you just need to know where its atoms are. Connectivity information is generated automatically for XYZ files as they are read into XMol-related applications. Briefly, if the distance between two atoms is less than the sum of their covalent radii, they are considered bonded. [ 2 ]
The pyridine molecule can be described in the XYZ format by the following (idealized coordinates in ångströms, built from a regular hexagon with 1.39 Å ring bonds and 1.08 Å C–H bonds, for illustration rather than experimental accuracy):
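```
11
pyridine (idealized planar geometry, coordinates in angstroms)
N   0.0000   1.3900   0.0000
C  -1.2038   0.6950   0.0000
C  -1.2038  -0.6950   0.0000
C   0.0000  -1.3900   0.0000
C   1.2038  -0.6950   0.0000
C   1.2038   0.6950   0.0000
H  -2.1390   1.2350   0.0000
H  -2.1390  -1.2350   0.0000
H   0.0000  -2.4700   0.0000
H   2.1390  -1.2350   0.0000
H   2.1390   1.2350   0.0000
```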
Most molecule viewers such as Jmol and VMD can show animations using .xyz files. The following is the shape of an XYZ file containing m successive snapshots, which can be rendered as an animation:
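```
<number of atoms in frame 1>
<comment for frame 1>
<element> <X> <Y> <Z>
...
<number of atoms in frame 2>
<comment for frame 2>
<element> <X> <Y> <Z>
...
```

Frames are simply concatenated, one after another, in a single file.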
Note that the XYZ format does not require the number or chemical nature of atoms to be the same across successive snapshots, which allows for atoms disappearing from or coming into the field of view during an animation.
|
https://en.wikipedia.org/wiki/XYZ_file_format
|
In combinatorial mathematics, the XYZ inequality , also called the Fishburn–Shepp inequality , is an inequality for the number of linear extensions of finite partial orders . The inequality was conjectured by Ivan Rival and Bill Sands in 1981. It was proved by Lawrence Shepp in Shepp (1982) . An extension was given by Peter Fishburn in Fishburn (1984) .
It states that if x , y , and z are incomparable elements of a finite poset , then P ( x ≺ y and x ≺ z ) ≥ P ( x ≺ y ) P ( x ≺ z ) {\displaystyle P(x\prec y{\text{ and }}x\prec z)\geq P(x\prec y)\,P(x\prec z)}
where P (A) is the probability that a linear order extending the partial order ≺ {\displaystyle \prec } has the property A.
In other words, the probability that x ≺ z {\displaystyle x\prec z} increases if one adds the condition that x ≺ y {\displaystyle x\prec y} . In the language of conditional probability , P ( x ≺ z ∣ x ≺ y ) ≥ P ( x ≺ z ) . {\displaystyle P(x\prec z\mid x\prec y)\geq P(x\prec z).}
The proof uses the Ahlswede–Daykin inequality .
|
https://en.wikipedia.org/wiki/XYZ_inequality
|
In February 2024, a malicious backdoor was introduced to the Linux build of the xz utility within the liblzma library in versions 5.6.0 and 5.6.1 by an account using the name "Jia Tan". [ b ] [ 4 ] The backdoor gives an attacker who possesses a specific Ed448 private key remote code execution through OpenSSH on the affected Linux system. The issue has been given the Common Vulnerabilities and Exposures number CVE - 2024-3094 and has been assigned a CVSS score of 10.0, the highest possible score. [ 5 ]
While xz is commonly present in most Linux distributions , at the time of discovery the backdoored version had not yet been widely deployed to production systems, but was present in development versions of major distributions. [ 6 ] The backdoor was discovered by the software developer Andres Freund, who announced his findings on 29 March 2024. [ 7 ]
Microsoft employee and PostgreSQL developer Andres Freund reported the backdoor after investigating a performance regression in Debian Sid . [ 8 ] Freund noticed that SSH connections were generating an unexpectedly high amount of CPU usage as well as causing errors in Valgrind , [ 9 ] a memory debugging tool. [ 10 ] Freund reported his finding to Openwall Project 's open source security mailing list, [ 9 ] which brought it to the attention of various software vendors. [ 10 ] The attacker made efforts to obfuscate the code, [ 11 ] as the backdoor consists of multiple stages that act together. [ 12 ]
Once the compromised version is incorporated into the operating system, it alters the behavior of OpenSSH 's SSH server daemon by abusing the systemd library, allowing the attacker to gain administrator access. [ 12 ] [ 10 ] According to the analysis by Red Hat , the backdoor can "enable a malicious actor to break sshd authentication and gain unauthorized access to the entire system remotely". [ 13 ]
A subsequent investigation found that the campaign to insert the backdoor into the XZ Utils project was a culmination of approximately three years of effort, between November 2021 and February 2024, [ 14 ] by a user going by the name Jia Tan and the nickname JiaT75 to gain access to a position of trust within the project. After a period of pressure on the founder and head maintainer to hand over the control of the project via apparent sock puppetry , Jia Tan gained the position of co-maintainer of XZ Utils and was able to sign off on version 5.6.0, which introduced the backdoor, and version 5.6.1, which patched some anomalous behavior that could have been apparent during software testing of the operating system. [ 10 ]
Some of the suspected sock puppetry pseudonyms include accounts with usernames like Jigar Kumar , krygorin4545 , and misoeater91 . It is suspected that the names Jia Tan , as well as the supposed code author Hans Jansen (for versions 5.6.0 and 5.6.1), are pseudonyms chosen by the participants of the campaign. Neither have any sort of visible public presence in software development beyond the short few years of the campaign. [ 15 ] [ 16 ]
The backdoor was notable for its level of sophistication and for the fact that the perpetrator practiced a high level of operational security for a long period of time while working to attain a position of trust. American security researcher Dave Aitel has suggested that it fits the pattern attributable to APT29 , an advanced persistent threat actor believed to be working on behalf of the Russian SVR . [ 14 ] Journalist Thomas Claburn suggested that it could be any state actor or a non-state actor with considerable resources. [ 17 ]
The malicious code is known to be in 5.6.0 and 5.6.1 releases of the XZ Utils software package. The exploit remains dormant unless a specific third-party patch of the SSH server is used. Under the right circumstances this interference could potentially enable a malicious actor to break sshd authentication and gain unauthorized access to the entire system remotely . [ 13 ]

The malicious mechanism consists of two compressed test files that contain the malicious binary code. These files are available in the git repository , but remain dormant unless extracted and injected into the program. [ 4 ] The code uses the glibc IFUNC mechanism to replace an existing function in OpenSSH called RSA_public_decrypt with a malicious version. OpenSSH normally does not load liblzma, but a common third-party patch used by several Linux distributions causes it to load libsystemd , which in turn loads lzma. [ 4 ]

A modified version of build-to-host.m4 was included in the release tar file uploaded on GitHub , which extracts a script that performs the actual injection into liblzma . This modified m4 file was not present in the git repository; it was only available from tar files released by the maintainer separate from git. [ 4 ] The script appears to perform the injection only when the system is being built on an x86-64 Linux system that uses glibc and GCC and is being built via dpkg or rpm . [ 4 ]
The US federal Cybersecurity and Infrastructure Security Agency has issued a security advisory recommending that the affected devices should roll back to a previous uncompromised version. [ 18 ] Linux software vendors, including Red Hat, SUSE , and Debian , have reverted the affected packages to older versions. [ 13 ] [ 19 ] [ 20 ] GitHub disabled the mirrors for the xz repository before subsequently restoring them. [ 21 ]
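One simple way an administrator could check whether an installed xz reports one of the affected version numbers is sketched below (an illustration only: it relies on the standard xz --version flag and assumes the usual "xz (XZ Utils) X.Y.Z" first line of output):

```python
import subprocess

AFFECTED = {"5.6.0", "5.6.1"}  # backdoored XZ Utils releases

def xz_is_affected() -> bool:
    """Return True if the installed xz reports a backdoored version."""
    out = subprocess.run(["xz", "--version"],
                         capture_output=True, text=True).stdout
    # First line typically looks like: "xz (XZ Utils) 5.6.1"
    first_line = out.splitlines()[0] if out else ""
    version = first_line.split()[-1] if first_line else ""
    return version in AFFECTED

if __name__ == "__main__":
    print("affected" if xz_is_affected() else "not affected")
```

Note that a sufficiently sophisticated compromise could misreport its own version, so package-manager metadata is a more reliable check.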
Canonical postponed the beta release of Ubuntu 24.04 LTS and its flavours by a week and opted for a complete binary rebuild of all the distribution's packages. [ 22 ] Although the stable version of Ubuntu was not affected, upstream versions were. This precautionary measure was taken because Canonical could not guarantee by the original release deadline that the discovered backdoor did not affect additional packages during compilation. [ 23 ]
Computer scientist Alex Stamos opined that "this could have been the most widespread and effective backdoor ever planted in any software product", noting that had the backdoor remained undetected, it would have "given its creators a master key to any of the hundreds of millions of computers around the world that run SSH". [ 24 ] In addition, the incident also started a discussion regarding the viability of having critical pieces of cyberinfrastructure depend on unpaid volunteers. [ 25 ]
|
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
|
In particle physics , the X charge (or simply X ) is a conserved quantum number associated with the SO(10) grand unification theory . It is thought to be conserved in strong , weak , electromagnetic , gravitational , and Higgs interactions. Because the X charge is related to the weak hypercharge , it varies depending on the helicity of a particle. For example, a left-handed quark has an X charge of +1, whereas a right-handed quark can have either an X charge of −1 (for up, charm and top quarks), or −3 (for down, strange and bottom quarks).
X is related to the difference between the baryon number B and the lepton number L (that is, B – L ), and the weak hypercharge Y W via the relation: X = 5 ( B − L ) − 2 Y W . {\displaystyle X=5(B-L)-2\,Y_{\text{W}}.}
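As a worked check of this relation, take a left-handed quark, for which B − L = 1/3 and Y W = 1/3 under the convention used here: X = 5 ( 1 / 3 ) − 2 ( 1 / 3 ) = 1 , {\displaystyle X=5(1/3)-2(1/3)=1,} in agreement with the value of +1 quoted above.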
Proton decay is a hypothetical form of radioactive decay , predicted by many grand unification theories . During proton decay, the common baryonic proton decays into lighter subatomic particles. However, proton decay has never been experimentally observed and is predicted to be mediated by hypothetical X and Y bosons . Many protonic decay modes have been predicted, one of which is shown below: p + ⟶ e + + π 0 {\displaystyle \mathrm {p} ^{+}\longrightarrow \mathrm {e} ^{+}+\pi ^{0}}
This form of decay violates the conservation of both baryon number and lepton number ; however, the X charge is conserved. Similarly, all experimentally confirmed forms of decay conserve the X charge value.
The following table lists the X charge values for the standard model fermions and their antiparticles . Note that the CP conjugate of a fermion has the opposite X charge (e.g. e L vs. e R , X = −3 vs. +3).
The next table gives the X charge of the standard model bosons.
Although not part of the Standard Model, the GUT X and Y bosons also have zero X charge.
|
https://en.wikipedia.org/wiki/X_(charge)
|
The X Input Method (or XIM ) was the original input method framework for the X Window System . [ 1 ]
It predates IBus , Fcitx , SCIM , uim and IIIMF . The specification [ 2 ] was most recently published in 1994 by the X Consortium , which holds its copyright. Although rarely used today, XIM is historically notable and has been supported in the enterprise products of IBM [ 3 ] and Oracle . [ 4 ]
|
https://en.wikipedia.org/wiki/X_Input_Method
|
X hyperactivation refers to the process in Drosophila by which genes on the X chromosome in male flies become twice as active as genes on the X chromosome in female flies.
In Drosophila , there is a stark difference between the X and Y chromosome as most genetic material has been lost on the Y chromosome. [ 1 ] Due to this, Drosophila relies on the genetic material of the X chromosome. Because male flies have a single X chromosome and female flies have two X chromosomes, the higher level of activation in males ensures that X chromosome genes are overall expressed at the same level as females. [ 1 ] X hyperactivation is one mechanism of dosage compensation , where organisms that use genetic sex determination systems balance the sex chromosome gene dosage between males and females. [ 2 ] X hyperactivation is regulated by the alternative splicing of a gene called sex-lethal . The gene was named sex-lethal due to its mutant phenotype which has little to no effect on males but results in the death of females due to X hyperactivation of the two X chromosomes.
In female Drosophila , the sex-lethal protein promotes the female-specific splicing of the sex-lethal gene, producing more sex-lethal protein in a positive feedback loop . In male Drosophila , there is not enough sex-lethal protein to activate the female-specific splicing of the sex-lethal gene, so it undergoes the "default" splicing. This means the section of the gene that is spliced out in females remains in males; this remaining portion contains an early stop codon , so no sex-lethal protein is made. [ 3 ] In females, the sex-lethal protein inhibits the male-specific lethal ( msl ) gene complex that would otherwise activate X-linked genes and increase the male transcription rate. The msl gene complex was named for its loss-of-function mutant, in which the male transcription rate fails to increase properly, resulting in the death of males. [ 4 ] In males, the absence of sufficient sex-lethal protein leaves the msl gene complex uninhibited, allowing the increase in the male transcription rate. This allows the expression of the single X chromosome to be "doubled," or hyperactivated, to match females' two X chromosomes. [ 5 ]
Up-regulation of the X chromosome has also been recorded in many mammals, despite being best known in Drosophila . [ 6 ]
A second form of dosage compensation in mammals is the balancing of X-linked gene expression against autosomes . This regulation occurs through upregulation of Xa, the active X chromosome, which shows increased transcriptional activation and elongation. Compared with autosomes, the X chromosome contains more silent genes, which modulate the influence of the active genes. RNA-seq analyses show that autosomal and X-linked gene outputs differ significantly, consistent with X dosage compensation operating relative to autosomes. In Drosophila , the loss of an X chromosome produces an aneuploidy effect that disrupts the entire cell and prevents MSL (male-specific lethal) from binding to its target sites. To overcome this, the X chromosome is first hyperactivated; the hyperactivated X then reverses the aneuploidy effect, creating gene expression equality between males and females. Natural selection acts efficiently in Drosophila , so expression of dosage-sensitive genes is increased; which genes are dosage-sensitive varies from species to species.
|
https://en.wikipedia.org/wiki/X_hyperactivation
|