The Universal Soil Loss Equation (USLE) is a widely used mathematical model that describes soil erosion processes. [ 1 ]
Erosion models play critical roles in soil and water resource conservation and in nonpoint source pollution assessments, including sediment load assessment and inventory, conservation planning and design for sediment control, and the advancement of scientific understanding. The USLE and its derivatives are the main models used by United States government agencies to measure water erosion. [2]
The USLE was developed in the U.S., based on soil erosion data collected beginning in the 1930s by the U.S. Department of Agriculture
( USDA ) Soil Conservation Service (now the USDA Natural Resources Conservation Service ). [ 3 ] [ 4 ] The model has been used for decades for purposes of conservation planning both in the United States where it originated and around the world, and has been used to help implement the United States' multibillion-dollar conservation program. The Revised Universal Soil Loss Equation (RUSLE) [ 5 ] and the Modified Universal Soil Loss Equation (MUSLE) continue to be used for similar purposes.
The two primary types of erosion models are process-based models and empirically based models. Process-based (physically based) models mathematically describe the erosion processes of detachment, transport, and deposition, and through the solutions of the equations describing those processes provide estimates of soil loss and sediment yields from specified land surface areas. Erosion science is not sufficiently advanced for there to exist completely process-based models that include no empirical aspects. The primary indicator, perhaps, for differentiating process-based from other types of erosion models is the use of the sediment continuity equation discussed below. Empirical models relate management and environmental factors directly to soil loss and/or sediment yields through statistical relationships. Lane et al. [6] provided a detailed discussion of the nature of process-based and empirical erosion models, as well as of what they termed conceptual models, which lie somewhere between the process-based and purely empirical models. Current research effort in erosion modeling is weighted toward the development of process-based models. On the other hand, the standard model for most erosion assessment and conservation planning is the empirically based USLE, and there continues to be active research and development of USLE-based erosion prediction technology.
The USLE was developed from erosion plot and rainfall simulator experiments. The USLE is composed of six factors to predict the long-term average annual soil loss (A): the rainfall erosivity factor (R), the soil erodibility factor (K), the topographic factors (L and S), and the cropping management factors (C and P). The equation takes the simple product form:

A = R × K × L × S × C × P
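Since the model is a plain product of six factors, it is trivial to compute once the factors are known. A minimal sketch in Python (the factor values below are hypothetical placeholders, not from the source):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Long-term average annual soil loss A = R * K * (LS) * C * P.

    R: rainfall erosivity; K: soil erodibility; LS: combined slope
    length and steepness factor; C: cropping management factor;
    P: conservation practice factor.
    """
    return R * K * LS * C * P

# Hypothetical factor values, for illustration only:
A = usle_soil_loss(R=170.0, K=0.3, LS=1.2, C=0.25, P=0.5)
print(f"Predicted average annual soil loss A = {A:.2f}")
```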
The USLE has another concept of experimental importance, the unit plot concept. The unit plot is defined as the standard plot condition used to determine a soil's erodibility. These conditions are when the LS factor = 1 (slope = 9% and length = 22.1 m (72.6 ft)), the plot is fallow, tillage is up and down the slope, and no conservation practices are applied (C = P = 1). In this state the equation reduces to A = RK, so the erodibility K can be measured directly as the ratio of observed soil loss to rainfall erosivity.
A simpler method to predict K was presented by Wischmeier et al., [7] using the particle size of the soil, organic matter content, soil structure, and profile permeability. When this information is known, the soil erodibility factor K can be approximated from a nomograph. The LS factors can easily be determined from a slope-effect chart given the length and gradient of the slope. The cropping management factor (C) and conservation practices factor (P) are more difficult to obtain and must be determined empirically from plot data; both are expressed as soil loss ratios (soil loss with the practice divided by soil loss without it).
Various techniques have emerged over the last few decades to compute the five RUSLE factors. [ 8 ] However, determining the P factor has proven to be challenging as there is usually a lack of geospatial information on the specific soil conservation practices in a given region. Thus, to estimate the P factor value in the RUSLE formula, a combination of land use type and slope gradient is often used, where a lower value indicates more effective control of soil erosion. [ 9 ]
Creating field boundaries, such as stone walls, hedgerows, earth banks, and lynchets, effectively prevented or reduced soil erosion in pre-industrial agriculture. [10] Recently, a novel P-factor model for Europe was developed from data retrieved during a statistical survey that recorded the occurrence of stone walls and grass margins in EU countries. While this is one of the first efforts to incorporate cultural landscape features into a soil erosion model on a continental scale, the authors of the study pointed out several limitations, such as the small number of surveyed points and the chosen interpolation technique. [11] It has been demonstrated that landscape archaeology has the potential to fill this gap in the data about soil conservation practices using a GIS-based tool called Historic Landscape Characterisation [12] (HLC). Starting from the assumptions that the construction of field boundaries has always been an effective way to limit soil erosion, and that the efficiency of any conservation measure increases as the slope increases, a new P factor equation has been developed integrating the HLC within the RUSLE model. A recent study showed that modeling landscape archaeological data in a soil loss estimation equation enables deeper reflection on how historical strategies for soil management might relate to current environmental and climate conditions. [13]
|
https://en.wikipedia.org/wiki/Universal_Soil_Loss_Equation
|
Universal Systems Language (USL) is a systems modeling language and formal method for the specification and design of software and other complex systems. It was designed by Margaret Hamilton based on her experiences writing flight software for the Apollo program. [1] The language is implemented through the 001 Tool Suite software by Hamilton Technologies, Inc. [2] USL evolved from 001AXES, which in turn evolved from AXES, all of which are based on Hamilton's axioms of control. The 001 Tool Suite uses the preventive concept of Development Before the Fact (DBTF) for its life-cycle development process. DBTF eliminates errors as early as possible during the development process, removing the need to look for errors after the fact.
USL was inspired by Hamilton's recognition of patterns or categories of errors occurring during Apollo software development. [ 3 ] [ 4 ]
Certain correctness guarantees are embedded in the USL grammar. [ 5 ]
USL is regarded by some users as more user-friendly than other formal systems. [6] It is not only a formalism for software, but also defines ontologies for common elements of problem domains, such as physical space and event timing. [7] [8]
Primitive structures are universal in that they can be used to derive new abstract universal structures, functions, or types. The process of deriving new objects (i.e., structures, types, and functions) is equivalent to the process of deriving new types in a constructive type theory.
The process of developing a software system with USL together with its automation, the 001 Tool Suite (001), is as follows: define the system with USL; automatically analyze the definition with 001's analyzer to ensure that USL was used correctly; and automatically generate much of the design and all of the implementation code with 001's generator. [9] [10] [11] [12] USL can be used to lend its formal support to other languages. [13]
|
https://en.wikipedia.org/wiki/Universal_Systems_Language
|
A universal Taylor series is a formal power series $\sum_{n=1}^{\infty} a_n x^n$ such that for every continuous function $h$ on $[-1,1]$ with $h(0) = 0$, there exists an increasing sequence $(\lambda_n)$ of positive integers such that $\lim_{n\to\infty} \left\| \sum_{k=1}^{\lambda_n} a_k x^k - h(x) \right\| = 0.$ In other words, the set of partial sums of $\sum_{n=1}^{\infty} a_n x^n$ is dense (in sup-norm) in $C[-1,1]_0$, the set of continuous functions on $[-1,1]$ that vanish at the origin. [1]
Fekete proved that a universal Taylor series exists. [ 2 ]
Let $f_1, f_2, \ldots$ be a sequence in which every polynomial with rational coefficients and zero constant term appears infinitely many times (use the diagonal enumeration). By the Weierstrass approximation theorem, this sequence is dense in $C[-1,1]_0$, so it suffices to approximate it. We construct the power series iteratively as a sequence of polynomials $p_1, p_2, \ldots$ such that $p_n$ and $p_{n+1}$ agree on the first $n$ coefficients and $\|f_n - p_n\|_\infty \le 1/n$.
To start, let $p_1 = f_1$. To construct $p_{n+1}$, replace each $x$ in $f_{n+1} - p_n$ by a close enough approximation with lowest degree $\ge n+1$, using the lemma below, and add the result to $p_n$.
Lemma — The function $f(x) = x$ can be approximated to arbitrary precision by a polynomial of arbitrarily high lowest degree. That is, for all $\epsilon > 0$ and $n \in \{1, 2, \ldots\}$ there exists a polynomial $p(x) = a_n x^n + \cdots + a_N x^N$ such that $\|f - p\|_\infty \le \epsilon$.
The function $g(x) = x - c\tanh(x/c)$ is the uniform limit of its Taylor expansion, which starts at degree 3. Also, $\|f - g\|_\infty < c$. Thus to $\epsilon$-approximate $f(x) = x$ by a polynomial with lowest degree 3, we do so for $g(x)$ with $c < \epsilon/2$ by truncating its Taylor expansion. Iterating this construction, plugging the lowest-degree-3 approximation into the Taylor expansion of $g(x)$, yields approximations of lowest degree 9, 27, 81, ...
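A quick numerical check of the lemma's two key facts (a sketch; the value of $c$ and the sample grid are arbitrary choices): $\|f - g\|_\infty < c$ because $|\tanh| < 1$, and near the origin $g(x) \approx x^3/(3c^2)$, so the expansion indeed starts at degree 3.

```python
import numpy as np

# g(x) = x - c*tanh(x/c) approximates f(x) = x to within c on [-1, 1],
# since |f - g| = c*|tanh(x/c)| < c.  From tanh(u) = u - u**3/3 + ...,
# g(x) = x**3/(3*c**2) + O(x**5), so its expansion starts at degree 3.
c = 0.01
x = np.linspace(-1.0, 1.0, 10001)
g = x - c * np.tanh(x / c)
print("sup |x - g(x)| =", np.abs(x - g).max())       # strictly below c
x0 = 1e-4                                            # sample point near 0
g0 = x0 - c * np.tanh(x0 / c)
print("g(x0)/x0**3 =", g0 / x0**3, " 1/(3c^2) =", 1 / (3 * c**2))
```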
|
https://en.wikipedia.org/wiki/Universal_Taylor_series
|
Universal adaptive strategy theory (UAST) is an evolutionary theory developed by J. Philip Grime in collaboration with Simon Pierce, describing the general limits to ecology and evolution based on the trade-off that organisms face when allocating the resources they gain from the environment among growth, maintenance, and regeneration – known as the universal three-way trade-off.
A universal three-way trade-off produces adaptive strategies throughout the tree of life, with extreme strategies facilitating the survival of genes via: C (competitive), the survival of the individual using traits that maximize resource acquisition and resource control in consistently productive niches ; S (stress-tolerant), individual survival via maintenance of metabolic performance in variable and unproductive niches; or R ( ruderal ), rapid gene propagation via rapid completion of the lifecycle and regeneration in niches where events are frequently lethal to the individual.
It is impossible for an organism to evolve a survival strategy in which all resources are devoted exclusively to one of these investment paths, but relatively extreme strategies exist, with a range of intermediates. The system can be represented by a triangle with the three extreme possibilities at its vertices; each species can be located at a particular point inside this triangle according to the percentage of each of the three strategies it exhibits, as illustrated by the sketch below.
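To make the triangle representation concrete, a C-S-R mix can be mapped to a point by barycentric interpolation of the three vertices. A minimal sketch (the vertex coordinates are an arbitrary plotting convention, not from the source):

```python
import numpy as np

# Equilateral triangle with C at (0, 0), S at (1, 0), R at (0.5, sqrt(3)/2).
VERTICES = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def csr_to_xy(c, s, r):
    """Map a C-S-R allocation (fractions summing to 1) to triangle coordinates."""
    assert abs(c + s + r - 1.0) < 1e-9, "fractions must sum to 1"
    return c * VERTICES[0] + s * VERTICES[1] + r * VERTICES[2]

print(csr_to_xy(1.0, 0.0, 0.0))   # a pure competitor sits at the C vertex
print(csr_to_xy(1/3, 1/3, 1/3))   # a balanced intermediate sits at the centroid
```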
It is possible to use multivariate statistics to determine the main trends in phenotypic variability in a range of organisms; for various major animal groups (most prominently vertebrates), this variability has been shown to have three main endpoints consistent with UAST.
UAST is a key part of the twin-filter model describing how species with similar overall strategies but divergent sets of minor traits coexist in ecological communities.
C-S-R Triangle theory is the application of UAST to plant biology . The three strategies are competitor, stress tolerator, and ruderal. These strategies each thrive best in a unique combination of either high or low intensities of stress and disturbance .
Competitors are plant species that thrive in areas of low intensity stress (moisture deficit) and disturbance and excel in biological competition . These species are able to outcompete other plants by most efficiently tapping into available resources. Competitors do this through a combination of favorable characteristics, including rapid growth rate, high productivity (growth in height, lateral spread, and root mass), and high capacity for phenotypic plasticity . This last feature allows competitors to be highly flexible in morphology and adjust the allocation of resources throughout the various parts of the plant as needed over the course of the growing season.
Stress tolerators are plant species that live in areas of high intensity stress and low intensity disturbance. Species that have adapted this strategy generally have slow growth rates, long-lived leaves, high rates of nutrient retention, and low phenotypic plasticity. Stress tolerators respond to environmental stresses through physiological variability. These species are often found in stressful environments such as alpine or arid habitats, deep shade, nutrient deficient soils, and areas of extreme pH levels.
Ruderals are plant species that prosper in situations of high intensity disturbance and low intensity stress. These species are fast-growing and rapidly complete their life cycles, and generally produce large amounts of seeds. Plants that have adapted this strategy are often found colonizing recently disturbed land, and are often annuals .
Understanding the differences between the CSR theory and its major alternative, the R* theory, has been a major goal in community ecology for many years. [1] [2] Unlike the R* theory, which predicts that competitive ability is determined by the ability to grow under low levels of resources, the CSR theory predicts that competitive ability is determined by relative growth rate and other size-related traits. While some experiments supported the R* predictions, others supported the CSR predictions. [1] The different predictions stem from different assumptions about the size asymmetry of the competition: the R* theory assumes that competition is size-symmetric (i.e., resource exploitation is proportional to individual biomass), whereas the CSR theory assumes that competition is size-asymmetric (i.e., large individuals exploit disproportionately higher amounts of resources compared with smaller individuals). [3]
|
https://en.wikipedia.org/wiki/Universal_adaptive_strategy_theory
|
In the mathematical theory of artificial neural networks, universal approximation theorems are theorems [1] [2] of the following form: given a family of neural networks, for each function $f$ from a certain function space, there exists a sequence of neural networks $\phi_1, \phi_2, \dots$ from the family such that $\phi_n \to f$ according to some criterion. That is, the family of neural networks is dense in the function space.
The most popular version states that feedforward networks with non- polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces , with respect to the compact convergence topology .
Universal approximation theorems are existence theorems: they state that there exists such a sequence $\phi_1, \phi_2, \dots \to f$, but do not provide any way to actually find one. They also do not guarantee that any method, such as backpropagation, will find such a sequence. Any method for searching the space of neural networks, including backpropagation, might find a converging sequence or not (e.g., backpropagation might get stuck in a local optimum).
Universal approximation theorems are limit theorems: they state that for any $f$ and any criterion of closeness $\epsilon > 0$, if there are enough neurons in a neural network, then there exists a neural network with that many neurons that approximates $f$ to within $\epsilon$. There is no guarantee that any given finite size, say 10,000 neurons, is enough.
Artificial neural networks are combinations of multiple simple mathematical functions that implement more complicated functions from (typically) real-valued vectors to real-valued vectors . The spaces of multivariate functions that can be implemented by a network are determined by the structure of the network, the set of simple functions, and its multiplicative parameters. A great deal of theoretical work has gone into characterizing these function spaces.
Most universal approximation theorems are in one of two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons (" arbitrary width " case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons (" arbitrary depth " case). In addition to these two classes, there are also universal approximation theorems for neural networks with bounded number of hidden layers and a limited number of neurons in each layer (" bounded depth and bounded width " case).
The first examples were the arbitrary width case. George Cybenko in 1989 proved it for sigmoid activation functions. [3] Kurt Hornik, Maxwell Stinchcombe, and Halbert White showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators. [1] Hornik also showed in 1991 [4] that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993 [5] and later Allan Pinkus in 1999 [6] showed that the universal approximation property is equivalent to having a nonpolynomial activation function.
The arbitrary depth case was also studied by a number of authors, such as Gustaf Gripenberg in 2003, [7] Dmitry Yarotsky, [8] Zhou Lu et al. in 2017, [9] and Boris Hanin and Mark Sellke in 2018, [10] who focused on neural networks with the ReLU activation function. In 2020, Patrick Kidger and Terry Lyons [11] extended those results to neural networks with general activation functions, e.g., tanh or GeLU.
One special case of arbitrary depth is that each composition component comes from a finite set of mappings. In 2024, Cai [12] constructed a finite set of mappings, named a vocabulary, such that any continuous function can be approximated by composing a sequence from the vocabulary. This is similar to the concept of compositionality in linguistics: the idea that a finite vocabulary of basic elements can be combined via grammar to express an infinite range of meanings.
The bounded depth and bounded width case was first studied by Maiorov and Pinkus in 1999. [ 13 ] They showed that there exists an analytic sigmoidal activation function such that two hidden layer neural networks with bounded number of units in hidden layers are universal approximators.
In 2018, Guliyev and Ismailov [14] constructed a smooth sigmoidal activation function providing the universal approximation property for two hidden layer feedforward neural networks with fewer units in the hidden layers. In 2018, they also constructed [15] single hidden layer networks with bounded width that are still universal approximators for univariate functions; however, this does not extend to multivariable functions.
In 2022, Shen et al. [ 16 ] obtained precise quantitative information on the depth and width required to approximate a target function by deep and wide ReLU neural networks.
The question of minimal possible width for universality was first studied in 2021, when Park et al. obtained the minimum width required for the universal approximation of L^p functions using feed-forward neural networks with ReLU as activation functions. [17] Similar results that can be directly applied to residual neural networks were also obtained in the same year by Paulo Tabuada and Bahman Gharesifard using control-theoretic arguments. [18] [19] In 2023, Cai obtained the optimal minimum width bound for universal approximation. [20]
For the arbitrary depth case, Leonie Papon and Anastasis Kratsios derived explicit depth estimates depending on the regularity of the target function and of the activation function. [ 21 ]
The Kolmogorov–Arnold representation theorem is similar in spirit. Indeed, certain neural network families can directly apply the Kolmogorov–Arnold theorem to yield a universal approximation theorem. Robert Hecht-Nielsen showed that a three-layer neural network can approximate any continuous multivariate function. [ 22 ] This was extended to the discontinuous case by Vugar Ismailov. [ 23 ] In 2024, Ziming Liu and co-authors showed a practical application. [ 24 ]
In reservoir computing, a sparse recurrent neural network with fixed weights, equipped with fading memory and the echo state property, is followed by a trainable output layer. Its universality has been demonstrated separately for networks of rate neurons [25] and of spiking neurons, [26] respectively. In 2024, the framework was generalized and extended to quantum reservoirs, where the reservoir is based on qubits defined over Hilbert spaces. [27]
Other variants include discontinuous activation functions, [5] noncompact domains, [11] [28] certifiable networks, [29] random neural networks, [30] and alternative network architectures and topologies. [11] [31]
The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. For input dimension $d_x$ and output dimension $d_y$, the minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x + 1, d_y\}$ (for a ReLU network). More generally, this also holds if both ReLU and a threshold activation function are used. [17]
Universal function approximation on graphs (or rather on graph isomorphism classes) by popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test. [32] In 2020, [33] a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying $\mathcal{O}(|V| \cdot |E|)$-runtime method that performed at state of the art on a collection of benchmarks (where $V$ and $E$ are the sets of nodes and edges of the graph, respectively).
There are also a variety of results between non-Euclidean spaces [ 34 ] and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, [ 35 ] [ 36 ] radial basis functions , [ 37 ] or neural networks with specific properties. [ 38 ] [ 39 ]
A spate of papers in the 1980s and 1990s, by George Cybenko, Kurt Hornik, and others, established several universal approximation theorems for arbitrary width and bounded depth. [40] [3] [41] [4] See [42] [43] [6] for reviews. The following is the most often quoted:
Universal approximation theorem — Let $C(X, \mathbb{R}^m)$ denote the set of continuous functions from a subset $X$ of a Euclidean space $\mathbb{R}^n$ to the Euclidean space $\mathbb{R}^m$. Let $\sigma \in C(\mathbb{R}, \mathbb{R})$, and note that $(\sigma \circ x)_i = \sigma(x_i)$, so $\sigma \circ x$ denotes $\sigma$ applied to each component of $x$.

Then $\sigma$ is not polynomial if and only if for every $n \in \mathbb{N}$, $m \in \mathbb{N}$, compact $K \subseteq \mathbb{R}^n$, $f \in C(K, \mathbb{R}^m)$, and $\varepsilon > 0$, there exist $k \in \mathbb{N}$, $A \in \mathbb{R}^{k \times n}$, $b \in \mathbb{R}^k$, and $C \in \mathbb{R}^{m \times k}$ such that $\sup_{x \in K} \|f(x) - g(x)\| < \varepsilon$, where $g(x) = C \cdot (\sigma \circ (A \cdot x + b))$.
Also, certain non-continuous activation functions can be used to approximate a sigmoid function, which then allows the above theorem to apply to those functions. For example, the step function works. In particular, this shows that a perceptron network with a single infinitely wide hidden layer can approximate arbitrary functions.
Such an f {\displaystyle f} can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers.
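A small numerical illustration of the theorem (a sketch, not the construction used in the proofs): build $g(x) = C \cdot (\sigma \circ (Ax + b))$ with random inner weights $A, b$ and least-squares outer weights $C$, and watch the sup-norm error on a compact set shrink as the width $k$ grows.

```python
import numpy as np

# One-hidden-layer tanh network fitted to f(x) = sin(pi*x) on K = [-1, 1].
# Only the outer weights C are solved for (random-features style), which is
# a convenient demo choice rather than anything the theorem prescribes.
rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 400)
f = np.sin(np.pi * xs)
for k in (5, 50, 500):                        # hidden widths
    A = 3.0 * rng.normal(size=(k, 1))         # A in R^{k x 1}
    b = rng.normal(size=(k, 1))               # b in R^k
    H = np.tanh(A @ xs[None, :] + b)          # sigma applied componentwise
    C, *_ = np.linalg.lstsq(H.T, f, rcond=None)
    print(k, "sup error:", np.abs(f - C @ H).max())
```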
It suffices to prove the case where $m = 1$, since uniform convergence in $\mathbb{R}^m$ is just uniform convergence in each coordinate.
Let $F_\sigma$ be the set of all one-hidden-layer neural networks constructed with activation $\sigma$. Let $C_0(\mathbb{R}^d, \mathbb{R})$ be the set of all continuous functions $\mathbb{R}^d \to \mathbb{R}$ with compact support.
If $\sigma$ is a polynomial of degree $d$, then $F_\sigma$ is contained in the closed subspace of all polynomials of degree $d$, so its closure is also contained in it, which is not all of $C_0(\mathbb{R}^d, \mathbb{R})$.
Otherwise, we show that the closure of $F_\sigma$ is all of $C_0(\mathbb{R}^d, \mathbb{R})$. Suppose we can construct arbitrarily good approximations of the ramp function $r(x) = \begin{cases} -1 & \text{if } x < -1 \\ x & \text{if } |x| \le 1 \\ 1 & \text{if } x > 1 \end{cases}$; then ramp functions can be combined to construct arbitrary compactly supported continuous functions to arbitrary precision. It remains to approximate the ramp function.
Any of the activation functions commonly used in machine learning can be used to approximate the ramp function, either directly or by first approximating the ReLU and then the ramp function.
If $\sigma$ is "squashing", that is, it has finite limits $\sigma(-\infty) < \sigma(+\infty)$, then one can first affinely scale down its x-axis so that its graph looks like a step function with two sharp "overshoots", then take a linear sum of enough of them to make a "staircase" approximation of the ramp function. With more steps in the staircase, the overshoots smooth out, and we get an arbitrarily good approximation of the ramp function.
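A numeric sketch of this staircase construction, assuming the logistic sigmoid as the squashing function (the step count and sharpness below are arbitrary demo values):

```python
import numpy as np

def sigma(z):
    # Numerically stable logistic sigmoid: exp is only taken of -|z|.
    e = np.exp(-np.abs(z))
    return np.where(z >= 0, 1.0 / (1.0 + e), e / (1.0 + e))

# Sum of m sharply scaled sigmoids, one stepping up at each grid point of
# [-1, 1), gives a staircase; affinely rescaled, it approximates the ramp.
m, sharpness = 50, 200.0
x = np.linspace(-2.0, 2.0, 2001)
steps = -1.0 + 2.0 * np.arange(m) / m
staircase = sum(sigma(sharpness * (x - t)) for t in steps)
ramp_approx = -1.0 + (2.0 / m) * staircase
ramp = np.clip(x, -1.0, 1.0)
print("sup-norm error:", np.abs(ramp - ramp_approx).max())  # shrinks as m grows
```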
The case where $\sigma$ is a generic non-polynomial function is harder, and the reader is directed to [6].
The above proof has not specified how one might use a ramp function to approximate arbitrary functions in C 0 ( R n , R ) {\displaystyle C_{0}(\mathbb {R} ^{n},\mathbb {R} )} . A sketch of the proof is that one can first construct flat bump functions, intersect them to obtain spherical bump functions that approximate the Dirac delta function , then use those to approximate arbitrary functions in C 0 ( R n , R ) {\displaystyle C_{0}(\mathbb {R} ^{n},\mathbb {R} )} . [ 44 ] The original proofs, such as the one by Cybenko, use methods from functional analysis, including the Hahn-Banach and Riesz–Markov–Kakutani representation theorems. Cybenko first published the theorem in a technical report in 1988, [ 45 ] then as a paper in 1989. [ 3 ]
Notice also that the neural network is only required to approximate within a compact set K {\displaystyle K} . The proof does not describe how the function would be extrapolated outside of the region.
The problem with polynomials may be removed by allowing the outputs of the hidden layers to be multiplied together (the "pi-sigma networks"), yielding the generalization: [ 41 ]
Universal approximation theorem for pi-sigma networks — With any nonconstant activation function, a one-hidden-layer pi-sigma network is a universal approximator.
The "dual" versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. [9] They showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to $L^1$ distance if network depth is allowed to grow. It was also shown that if the width is less than or equal to n, this general expressive power to approximate any Lebesgue-integrable function is lost. In the same paper [9] it was shown that ReLU networks with width n + 1 are sufficient to approximate any continuous function of n-dimensional input variables. [46] The following refinement specifies the optimal minimum width for which such an approximation is possible, and is due to [47].
Universal approximation theorem (L1 distance, ReLU activation, arbitrary depth, minimal width) — For any Bochner–Lebesgue p-integrable function $f : \mathbb{R}^n \to \mathbb{R}^m$ and any $\varepsilon > 0$, there exists a fully connected ReLU network $F$ of width exactly $d_m = \max\{n+1, m\}$, satisfying $\int_{\mathbb{R}^n} \|f(x) - F(x)\|^p \, \mathrm{d}x < \varepsilon.$ Moreover, there exist a function $f \in L^p(\mathbb{R}^n, \mathbb{R}^m)$ and some $\varepsilon > 0$ for which there is no fully connected ReLU network of width less than $d_m = \max\{n+1, m\}$ satisfying the above approximation bound.
Remark: If the activation is replaced by leaky-ReLU, and the input is restricted to a compact domain, then the exact minimum width is [20] $d_m = \max\{n, m, 2\}$.
Quantitative refinement: In the case where $f : [0,1]^n \to \mathbb{R}$ (i.e., $m = 1$) and $\sigma$ is the ReLU activation function, the exact depth and width for a ReLU network to achieve $\varepsilon$ error are also known. [48] If, moreover, the target function $f$ is smooth, then the required number of layers and their width can be exponentially smaller. [49] Even if $f$ is not smooth, the curse of dimensionality can be broken if $f$ admits additional "compositional structure". [50] [51]
Together, the central result of [ 11 ] yields the following universal approximation theorem for networks with bounded width (see also [ 7 ] for the first result of this kind).
Universal approximation theorem (uniform non-affine activation, arbitrary depth, constrained width) — Let $\mathcal{X}$ be a compact subset of $\mathbb{R}^d$. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be any non-affine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let $\mathcal{N}_{d,D:d+D+2}^{\sigma}$ denote the space of feed-forward neural networks with $d$ input neurons, $D$ output neurons, and an arbitrary number of hidden layers each with $d + D + 2$ neurons, such that every hidden neuron has activation function $\sigma$ and every output neuron has the identity as its activation function, with input layer $\phi$ and output layer $\rho$. Then given any $\varepsilon > 0$ and any $f \in C(\mathcal{X}, \mathbb{R}^D)$, there exists $\hat{f} \in \mathcal{N}_{d,D:d+D+2}^{\sigma}$ such that $\sup_{x \in \mathcal{X}} \|\hat{f}(x) - f(x)\| < \varepsilon.$
In other words, $\mathcal{N}$ is dense in $C(\mathcal{X}; \mathbb{R}^D)$ with respect to the topology of uniform convergence.
Quantitative refinement: The number of layers and the width of each layer required to approximate $f$ to $\varepsilon$ precision are known; [21] moreover, the result holds true when $\mathcal{X}$ and $\mathbb{R}^D$ are replaced with any non-positively curved Riemannian manifold.
Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions. [ 9 ] [ 10 ] [ 52 ]
The first result on the approximation capabilities of neural networks with a bounded number of layers, each containing a limited number of artificial neurons, was obtained by Maiorov and Pinkus. [13] Their remarkable result revealed that such networks can be universal approximators, and that two hidden layers are enough to achieve this property.
Universal approximation theorem: [13] — There exists an activation function $\sigma$ which is analytic, strictly increasing, and sigmoidal, with the following property: for any $f \in C[0,1]^d$ and $\varepsilon > 0$ there exist constants $d_i, c_{ij}, \theta_{ij}, \gamma_i$ and vectors $\mathbf{w}^{ij} \in \mathbb{R}^d$ for which $\left| f(\mathbf{x}) - \sum_{i=1}^{6d+3} d_i \sigma\!\left( \sum_{j=1}^{3d} c_{ij} \sigma(\mathbf{w}^{ij} \cdot \mathbf{x} - \theta_{ij}) - \gamma_i \right) \right| < \varepsilon$ for all $\mathbf{x} = (x_1, \ldots, x_d) \in [0,1]^d$.
This is an existence result. It says that activation functions providing the universal approximation property for bounded depth, bounded width networks exist. Using certain algorithmic and computer programming techniques, Guliyev and Ismailov efficiently constructed such activation functions depending on a numerical parameter. The developed algorithm allows one to compute the activation functions at any point of the real axis instantly. For the algorithm and the corresponding computer code, see [14]. The theoretical result can be formulated as follows.
Universal approximation theorem: [14] [15] — Let $[a, b]$ be a finite segment of the real line, $s = b - a$, and $\lambda$ be any positive number. Then one can algorithmically construct a computable sigmoidal activation function $\sigma : \mathbb{R} \to \mathbb{R}$ which is infinitely differentiable, strictly increasing on $(-\infty, s)$, $\lambda$-strictly increasing on $[s, +\infty)$, and satisfies the following properties:
Here "$\sigma : \mathbb{R} \to \mathbb{R}$ is $\lambda$-strictly increasing on some set $X$" means that there exists a strictly increasing function $u : X \to \mathbb{R}$ such that $|\sigma(x) - u(x)| \le \lambda$ for all $x \in X$. Clearly, a $\lambda$-increasing function behaves like a usual increasing function as $\lambda$ gets small.
In the "depth-width" terminology, the above theorem says that for certain activation functions, depth-2 width-2 networks are universal approximators for univariate functions, and depth-3 width-$(2d+2)$ networks are universal approximators for $d$-variable functions ($d > 1$).
|
https://en.wikipedia.org/wiki/Universal_approximation_theorem
|
In mathematical analysis, the universal chord theorem states that if a function $f$ is continuous on $[a, b]$ and satisfies $f(a) = f(b)$, then for every natural number $n$ there exists some $x \in [a, b]$ such that $f(x) = f\!\left(x + \frac{b-a}{n}\right)$. [1]
The theorem was published by Paul Lévy in 1934 as a generalization of Rolle's theorem . [ 2 ]
Let $H(f) = \{ h \in [0, +\infty) : f(x) = f(x+h) \text{ for some } x \}$ denote the chord set of the function $f$. If $f$ is a continuous function and $h \in H(f)$, then $\frac{h}{n} \in H(f)$ for all natural numbers $n$. [3]
The case when $n = 2$ can be considered an application of the Borsuk–Ulam theorem to the real line: if $f(x)$ is continuous on some interval $I = [a, b]$ with the condition that $f(a) = f(b)$, then there exists some $x \in [a, b]$ such that $f(x) = f\!\left(x + \frac{b-a}{2}\right)$.
In less generality, if $f : [0,1] \to \mathbb{R}$ is continuous and $f(0) = f(1)$, then there exists $x \in \left[0, \frac{1}{2}\right]$ satisfying $f(x) = f(x + 1/2)$.
Consider the function $g : \left[a, \frac{a+b}{2}\right] \to \mathbb{R}$ defined by $g(x) = f\!\left(x + \frac{b-a}{2}\right) - f(x)$. Being the difference of two continuous functions, $g$ is continuous, and $g(a) + g\!\left(\frac{a+b}{2}\right) = f(b) - f(a) = 0$. It follows that $g(a) \cdot g\!\left(\frac{a+b}{2}\right) \le 0$, so by the intermediate value theorem there exists $c \in \left[a, \frac{a+b}{2}\right]$ such that $g(c) = 0$, i.e., $f(c) = f\!\left(c + \frac{b-a}{2}\right)$. This concludes the proof of the theorem for $n = 2$.
The proof of the theorem in the general case is very similar. Let $n$ be a positive integer, and consider the function $g : \left[a, b - \frac{b-a}{n}\right] \to \mathbb{R}$ defined by $g(x) = f\!\left(x + \frac{b-a}{n}\right) - f(x)$. Being the difference of two continuous functions, $g$ is continuous. Furthermore, $\sum_{k=0}^{n-1} g\!\left(a + k \cdot \frac{b-a}{n}\right) = 0$, since the sum telescopes to $f(b) - f(a)$. It follows that there exist integers $i, j$ such that $g\!\left(a + i \cdot \frac{b-a}{n}\right) \le 0 \le g\!\left(a + j \cdot \frac{b-a}{n}\right)$. The intermediate value theorem gives us $c$ such that $g(c) = 0$, and the theorem follows.
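The general proof is constructive enough to run numerically: evaluate $g$ on the grid $a, a+h, \ldots, a+(n-1)h$, pick grid points where $g \le 0$ and $g \ge 0$, and bisect between them. A sketch (the test function and iteration count are arbitrary choices):

```python
import numpy as np

def chord_point(f, a, b, n, iters=100):
    """Find x with f(x) == f(x + (b-a)/n), assuming f continuous, f(a) == f(b)."""
    h = (b - a) / n
    g = lambda x: f(x + h) - f(x)
    xs = a + h * np.arange(n)
    vals = np.array([g(x) for x in xs])                # these sum to f(b) - f(a) = 0
    lo, hi = xs[np.argmin(vals)], xs[np.argmax(vals)]  # g(lo) <= 0 <= g(hi)
    for _ in range(iters):                             # bisection on g
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= 0 else (lo, mid)
    return 0.5 * (lo + hi)

f = lambda x: np.sin(2 * np.pi * x) + 0.3 * x * (1 - x)   # f(0) == f(1) == 0
x = chord_point(f, 0.0, 1.0, 3)
print(x, f(x), f(x + 1 / 3))    # the last two values agree to high precision
```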
|
https://en.wikipedia.org/wiki/Universal_chord_theorem
|
In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity.
In general, most prefix codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message. Universal codes are generally not used for precisely known probability distributions, and no universal code is known to be optimal for any distribution used in practice.
A universal code should not be confused with universal source coding , in which the data compression method need not be a fixed prefix code and the ratio between actual and optimal expected lengths must approach one. However, note that an asymptotically optimal universal code can be used on independent identically-distributed sources , by using increasingly large blocks , as a method of universal source coding.
These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal. Examples include Elias gamma coding, Elias delta coding ‡, Elias omega coding ‡, Exp-Golomb coding, Fibonacci coding, and Levenshtein coding ‡.
These are non-universal ones: unary coding, Rice coding, and Golomb coding.
Their nonuniversality can be observed by noticing that, if any of these are used to code the Gauss–Kuzmin distribution or the zeta distribution with parameter s = 2, the expected codeword length is infinite. For example, using unary coding (where the codeword for n has length n) on the zeta distribution yields an expected length of $\sum_{n=1}^{\infty} \frac{n}{\zeta(2)\, n^2} = \frac{1}{\zeta(2)} \sum_{n=1}^{\infty} \frac{1}{n} = \infty.$
On the other hand, using the universal Elias gamma coding for the Gauss–Kuzmin distribution results in an expected codeword length (about 3.51 bits) near the entropy (about 3.43 bits).
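For concreteness, here is a minimal sketch of one such universal code, Elias gamma, which writes ⌊log₂ n⌋ zeros followed by the binary expansion of n:

```python
def elias_gamma_encode(n: int) -> str:
    """Elias gamma code of a positive integer: floor(log2 n) zeros,
    then n written in binary (which always starts with a 1 bit)."""
    assert n >= 1
    binary = bin(n)[2:]
    return "0" * (len(binary) - 1) + binary

def elias_gamma_decode(bits: str) -> int:
    zeros = len(bits) - len(bits.lstrip("0"))    # count the leading zeros
    return int(bits[zeros:2 * zeros + 1], 2)

for n in (1, 2, 9):
    print(n, "->", elias_gamma_encode(n))        # 1, 010, 0001001
assert elias_gamma_decode("0001001") == 9
```

The codeword for n has length 2⌊log₂ n⌋ + 1 bits, which is consistent with the roughly $1/n^2$ implied distribution discussed below.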
Huffman coding and arithmetic coding (when they can be used) give at least as good, and often better compression than any universal code.
However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.
Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver. Using a universal code does not have that overhead.
Each universal code, like every other self-delimiting (prefix) binary code, has its own "implied probability distribution" given by P(i) = 2^−l(i), where l(i) is the length of the i-th codeword and P(i) is the corresponding symbol's probability. If the actual message probabilities are Q(i) and the Kullback–Leibler divergence $D_{\text{KL}}(Q \| P)$ is minimized by the code with lengths l(i), then the optimal Huffman code for that set of messages will be equivalent to that code. Likewise, how close a code is to optimal can be measured by this divergence. Since universal codes are simpler and faster to encode and decode than Huffman codes (which are, in turn, simpler and faster than arithmetic encoding), a universal code is preferable in cases where $D_{\text{KL}}(Q \| P)$ is sufficiently small.
For any geometric distribution (an exponential distribution on integers), a Golomb code is optimal. With universal codes, the implicit distribution is approximately a power law such as $1/n^2$ (more precisely, a Zipf distribution).
For the Fibonacci code, the implicit distribution is approximately $1/n^q$ with $q = 1/\log_2 \varphi \approx 1.44$, where $\varphi$ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with $q = 1 + \log_3(4/3) \approx 1.26$. These distributions thus have near-optimal codes with their respective power laws.
|
https://en.wikipedia.org/wiki/Universal_code_(data_compression)
|
Universal conductance fluctuations (UCF) in mesoscopic physics is a phenomenon encountered in electrical transport experiments on mesoscopic samples. The measured electrical conductance varies from sample to sample, mainly due to inhomogeneous scattering sites. Fluctuations originate from coherence effects in the electronic wavefunctions, and thus the phase-coherence length $l_\phi$ needs to be larger than the momentum relaxation length $l_m$. UCF is more pronounced when electrical transport is in the weak localization regime, $l_\phi < l_c$, where $l_c = M \cdot l_m$, $M$ is the number of conduction channels, and $l_m$ is the momentum relaxation length (mean free path) due to scattering events. For weakly localized samples the fluctuation in conductance is equal to the fundamental conductance $G_0 = 2e^2/h$, regardless of the number of channels.
Many factors will influence the amplitude of UCF. At zero temperature without decoherence, the UCF is influenced by mainly two factors, the symmetry and the shape of the sample. Recently, a third key factor, anisotropy of Fermi surface, is also found to fundamentally influence the amplitude of UCF. [ 1 ]
|
https://en.wikipedia.org/wiki/Universal_conductance_fluctuations
|
Universal design is the design of buildings, products, or environments to make them accessible to people regardless of age, disability, or other factors. It emerged as a rights-based, anti-discrimination measure that seeks to create design for all abilities, evaluating materials and structures that can be used by everyone. [1] It addresses common barriers to participation by creating things that can be used by the maximum number of people possible. [2] "When disabling mechanisms are to be replaced with mechanisms for inclusion, different kinds of knowledge are relevant for different purposes. As a practical strategy for inclusion, Universal Design involves dilemmas and often difficult priorities." [1] Curb cuts or sidewalk ramps, which are essential for people in wheelchairs but also used by all, are a common example of universal design.
The term universal design was coined by the architect Ronald Mace to describe the concept of designing all products and the built environment to be aesthetic and usable to the greatest extent possible by everyone, regardless of their age, ability, or status in life. [ 3 ] However, due to some people having unusual or conflicting access needs, such as a person with low vision needing bright light and a person with photophobia needing dim light, universal design does not address absolutely every need for every person in every situation. [ 2 ]
Universal design emerged from slightly earlier barrier-free concepts, the broader accessibility movement, and adaptive and assistive technology, and also seeks to blend aesthetics into these core considerations. As life expectancy rises and modern medicine increases the survival rate of those with significant injuries, illnesses, and birth defects, there is a growing interest in universal design. There are many industries in which universal design has strong market penetration, but there are many others in which it has not yet been adopted to any great extent. Universal design is also being applied to the design of technology, instruction, services, and other products and environments. Several different fields, such as engineering, architecture, and medicine, collaborate to create accessible environments that support inclusion for a variety of disabilities. [4] It can change the socio-material relationships people have with spaces and environments and create positive experiences for all kinds of abilities, allowing meaningful participation across the many demographics experiencing disability. [5]
In 1960, specifications for barrier-free design were published as a compendium of over 11 years of disability ergonomics research. In 1961, the American National Standards Institute (ANSI) A117.1 specification was published as the first barrier-free design standard. It presented criteria for designing facilities and programs for use by individuals with disabilities. The research started in 1949 at the University of Illinois Urbana-Champaign and continues to this day. The principal investigator, Dr. Timothy Nugent, who is credited in the 1961, 1971, and 1980 standards, also started the National Wheelchair Basketball Association.
The ANSI A117.1 standard was adopted by the US federal government's General Services Administration under the Uniform Federal Accessibility Standards (UFAS) in 1984, and then in 1990 for the Americans with Disabilities Act (ADA). The archived research documents are at the International Code Council (ICC) ANSI A117.1 division. Dr. Nugent made presentations around the globe in the late 1950s and 1960s, presenting the concept of independent functional participation for individuals with disabilities through program options and architectural design.
Another comprehensive publication, Designing for the Disabled by Selwyn Goldsmith (UK), was published by the Royal Institute of British Architects in four editions (1963, 1967, 1976, and 1997). These publications contain valuable empirical data and studies of individuals with disabilities. Both standards are excellent resources for the designer and builder.
Disability ergonomics should be taught to designers, engineers, and non-profit executives to further the understanding of what makes an environment wholly tenable and functional for individuals with disabilities.
In October 2003, representatives from China, Japan, and South Korea met in Beijing and agreed to set up a committee to define common design standards for a wide range of products and services that are easy to understand and use. Their goal was to publish a standard in 2004 covering, among other areas, standards on containers and wrappings of household goods (based on a proposal from experts in Japan), and standardization of signs for public facilities, a subject of particular interest to China as it prepared to host the 2008 Summer Olympics.
Selwyn Goldsmith , author of Designing for the Disabled (1963), pioneered the concept of free access for people with disabilities. His most significant achievement was the creation of the dropped curb – now a standard feature of the built environment.
The term Design for All (DfA) is used to describe a design philosophy targeting the use of products, services and systems by as many people as possible without the need for adaptation. "Design for All is design for human diversity, social inclusion and equality" (EIDD Stockholm Declaration, 2004). According to the European Commission , it "encourages manufacturers and service providers to produce new technologies for everyone: technologies that are suitable for the elderly and people with disabilities , as much as the teenage techno wizard." [ 6 ] The origin of Design for All [ 7 ] lies in the field of barrier-free accessibility for people with disabilities and the broader notion of universal design.
Design for All has been highlighted in Europe by the European Commission in seeking a more user-friendly society in Europe. [ 6 ] Design for All is about ensuring that environments, products, services and interfaces work for people of all ages and abilities in different situations and under various circumstances.
Design for All has become a mainstream issue because of the aging of the population and its increasingly multi-ethnic composition. It follows a market approach and can reach out to a broader market. Easy-to-use, accessible, affordable products and services improve the quality of life of all citizens. Design for All permits access to the built environment, access to services, and user-friendly products, which are not just a quality factor but a necessity for many aging or disabled persons. Including Design for All early in the design process is more cost-effective than making alterations after solutions are already on the market. This is best achieved by identifying and involving users ("stakeholders") in the decision-making processes that lead to drawing up the design brief, and by educating public- and private-sector decision-makers about the benefits to be gained from making coherent use of Design for All in a wide range of socio-economic situations.
Design for All criteria are aimed at ensuring that everyone can participate in the Information society . The European Union refers to this under the terms eInclusion and eAccessibility. A three-way approach is proposed: goods which can be accessed by nearly all potential users without modification or, failing that, products being easy to adapt according to different needs, or using standardized interfaces that can be accessed simply by using assistive technology. To this end, manufacturers and service providers, especially, but not exclusively, in the Information and Communication Technologies (ICT), produce new technologies, products, services and applications for everyone. [ 6 ]
In Europe, people have joined in networks to promote and develop Design for All:
The Center for Universal Design at North Carolina State University expounded the following seven principles: [11] equitable use, flexibility in use, simple and intuitive use, perceptible information, tolerance for error, low physical effort, and size and space for approach and use.
Each principle is broader than those of accessible design or barrier-free design and contains a few brief guidelines that can be applied to design processes in any realm, physical or digital. [11]
In 2012, the Center for Inclusive Design and Environmental Access [12] at the University at Buffalo expanded the definition of the principles of universal design to include social participation and health and wellness. Rooted in evidence-based design, the eight goals of universal design were also developed. [13]
The first four goals are oriented to human performance: anthropometry, biomechanics, perception, and cognition. Wellness bridges human performance and social participation. The last three goals address social participation outcomes. The definition and the goals are expanded upon in the textbook Universal Design: Creating Inclusive Environments. [14]
Barrier-free ( バリアフリー , bariafurii ) building modification consists of modifying buildings or facilities so that they can be used by people who are disabled or have physical impairments. The term is used primarily in Japan and other non-English speaking countries (e.g. German: Barrierefreiheit ; Finnish: esteettömyys ), while in English-speaking countries, terms such as " accessibility " and "accessible" dominate in everyday use. An example of barrier-free design would be installing a ramp for wheelchair users alongside steps. In the late 1990s, any element which could make the use of the environment inconvenient for people with disabilities was (and still is) considered a barrier, for example, poor public street lighting. [ 15 ] In the case of new buildings, however, the idea of barrier-free modification has largely been superseded by the concept of universal design, which seeks to design things from the outset to support easy access.
Freeing a building of barriers means identifying and removing the elements that could make use of the environment inconvenient or impossible for some people.
Barrier-free is also a term that applies to accessibility in situations where legal codes such as the Americans with Disabilities Act of 1990 (ADA) apply. The process of adopting barrier-free public policies started when the Veterans Administration and the US President's Committee on Employment of the Handicapped noticed a large number of US citizens returning from the Vietnam War injured and unable to navigate public spaces . [ 16 ] The ADA is a law addressing building features, products and design broadly, based on the concept of respecting human rights. [ 15 ] It does not directly contain design specifications.
An example of a country that has sought to implement barrier-free accessibility in housing estates is Singapore. Within five years, all 7,800 blocks of apartments in the country's public housing estates had benefited from the program. [ 17 ]
The types of universal design elements vary depending on the targeted population and the space. For example, in public spaces, universal design elements often address broad areas of accessibility, while in private spaces, design elements address the specific requirements of the resident. [ 16 ] These design elements are varied and leverage different approaches for different effects.
Examples of Designs for All, including items useful for those with mobility limitations, were presented in the book Diseños para todos/Designs for All, published in 2008 by Optimastudio with the support of Spain's Ministry of Education, Social Affairs and Sports ( IMSERSO ) and CEAPAT. [ 19 ]
The Rehabilitation Engineering Research Center (RERC) [ 36 ] on universal design in the Built Environment funded by what is now the National Institute on Disability, Independent Living, and Rehabilitation Research completed its activities on September 29, 2021. [ 37 ] Twenty RERCs are currently funded. [ 38 ] The Center for Inclusive Design and Environmental Access at the University at Buffalo is a current recipient. [ 12 ]
One study conducted in Aswan, Egypt, published in the Journal of Engineering and Applied Science, aimed to explore accessibility in three administrative buildings in the area. [ 39 ] The researchers looked for universal design in entrances and exits, circulation of traffic within the building, and wayfinding within the building's services. [ 39 ] They focused their case study on administrative buildings in order to exemplify universal design that grants all citizens access to all locations. [ 39 ] The buildings shared several issues. The researchers found that vertical movement was difficult for disabled patrons, given that there were no elevators. [ 39 ] There was also no dropped curb and no Braille system, door handles were difficult to operate, and there were no sensory indicators such as sounds or visual signs. [ 39 ]
This case highlights the importance of demographics when considering needs for universal design. Over 60% of the citizens who use this building on a daily basis are elderly, yet there are no accommodations suited to their capabilities. [ 39 ] Along with the lack of tactile features to guide the visually impaired, the space within the building is very congested, especially for someone who lacks full physical capabilities and must use a wheelchair. [ 39 ] Circulation suffers as a result, as does wayfinding in the structure. [ 39 ]
Although there have been attempts to create more accessible public and outdoor spaces, the restorations made have ultimately failed to meet the needs of the disabled and elderly. [ 32 ]
|
https://en.wikipedia.org/wiki/Universal_design
|
In physics and electrical engineering , the universal dielectric response , or UDR , refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid state systems. In particular, this widely observed response involves power-law scaling of dielectric properties with frequency under conditions of alternating current (AC). First defined in a landmark 1977 article by A. K. Jonscher in Nature , [ 1 ] the origins of the UDR were attributed to the dominance of many-body interactions in systems, and their analogous RC network equivalence. [ 2 ]
The universal dielectric response manifests in the variation of AC conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials. [ 3 ] Such systems, which can be called heterogeneous or composite materials, can be described from a dielectric perspective as a large network consisting of resistor and capacitor elements, known also as an RC network . [ 4 ] At low and high frequencies, the dielectric response of heterogeneous materials is governed by percolation pathways. If a heterogeneous material is represented by a network in which more than 50% of the elements are capacitors, percolation through capacitor elements will occur, and the resulting conductivity at high and low frequencies is directly proportional to frequency. Conversely, if the fraction of capacitor elements in the representative RC network (P c ) is lower than 0.5, the dielectric behaviour in the low- and high-frequency regimes is independent of frequency. At intermediate frequencies, a very broad range of heterogeneous materials shows a well-defined emergent region, in which a power-law correlation of admittance to frequency is observed. The power-law emergent region is the key feature of the UDR. In materials or systems exhibiting UDR, the overall dielectric response from high to low frequencies is symmetrical, being centered at the middle point of the emergent region, which occurs in equivalent RC networks at a frequency of ω = ( R C ) − 1 {\displaystyle \omega =(RC)^{-1}} . In the power-law emergent region, the admittance of the overall system follows the general power-law proportionality Y ∝ ω α {\displaystyle Y\propto \omega ^{\alpha }} , where the power-law exponent α can be approximated by the fraction of capacitors in the equivalent RC network of the system, α ≅ P c . [ 5 ]
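The emergent power law can be illustrated numerically. The following Python sketch is not from the article; the lattice size, component values and capacitor fraction are arbitrary choices. It solves the nodal equations of a small random square-lattice RC network between two electrodes and fits the exponent α in the intermediate-frequency region; for a single small realization the fit only roughly approaches α ≈ P c.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small square-lattice RC network: each bond is a capacitor with probability
# p_c, otherwise a resistor. R = C = 1 puts the crossover at omega_0 = 1/(RC) = 1.
n, p_c, R, C = 12, 0.5, 1.0, 1.0

def node(i, j):
    return i * n + j

bonds = []  # nearest-neighbour bonds of the lattice
for i in range(n):
    for j in range(n):
        if j + 1 < n:
            bonds.append((node(i, j), node(i, j + 1)))
        if i + 1 < n:
            bonds.append((node(i, j), node(i + 1, j)))
is_cap = rng.random(len(bonds)) < p_c

left = [node(i, 0) for i in range(n)]        # electrode held at 1 V
right = [node(i, n - 1) for i in range(n)]   # electrode held at 0 V
free = [k for k in range(n * n) if k not in left + right]

def admittance(w):
    """|Y| between the two electrodes at angular frequency w (nodal analysis)."""
    Y = np.zeros((n * n, n * n), dtype=complex)
    for (a, b), cap in zip(bonds, is_cap):
        y = 1j * w * C if cap else 1.0 / R
        Y[a, a] += y; Y[b, b] += y
        Y[a, b] -= y; Y[b, a] -= y
    v = np.zeros(n * n, dtype=complex)
    v[left] = 1.0
    # Interior nodes carry no external current: solve Y_ff v_f = -Y_fl * 1.
    v[free] = np.linalg.solve(Y[np.ix_(free, free)],
                              -Y[np.ix_(free, left)].sum(axis=1))
    # Total current absorbed by the grounded electrode equals |Y|, since V = 1 V.
    return abs(sum(Y[r] @ v for r in right))

ws = np.logspace(-3, 3, 13)
Ys = np.array([admittance(w) for w in ws])
mid = slice(4, 9)  # frequencies around omega_0, inside the emergent region
alpha = np.polyfit(np.log(ws[mid]), np.log(Ys[mid]), 1)[0]
print(f"fitted exponent alpha = {alpha:.2f}  (theory: alpha ~ p_c = {p_c})")
```

Averaging the fitted exponent over many random realizations, or using a larger lattice, brings it closer to the predicted α ≅ P c.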
The power law scaling of dielectric properties with frequency is valuable in interpreting impedance spectroscopy data towards the characterisation of responses in emerging ferroelectric and multiferroic materials. [ 6 ] [ 7 ]
|
https://en.wikipedia.org/wiki/Universal_dielectric_response
|
A universal differential equation ( UDE ) is a non-trivial differential algebraic equation with the property that its solutions can approximate any continuous function on any interval of the real line to any desired level of accuracy.
Precisely, a (possibly implicit) differential equation P ( y ′ , y ″ , y ‴ , . . . , y ( n ) ) = 0 {\displaystyle P(y',y'',y''',...,y^{(n)})=0} is a UDE if for any continuous real-valued function f {\displaystyle f} and for any positive continuous function ε {\displaystyle \varepsilon } there exists a smooth solution y {\displaystyle y} of P ( y ′ , y ″ , y ‴ , . . . , y ( n ) ) = 0 {\displaystyle P(y',y'',y''',...,y^{(n)})=0} with | y ( x ) − f ( x ) | < ε ( x ) {\displaystyle |y(x)-f(x)|<\varepsilon (x)} for all x ∈ R {\displaystyle x\in \mathbb {R} } . [ 1 ]
The existence of a UDE was initially regarded as an analogue of the universal Turing machine for analog computers, because of a result of Shannon that identifies the outputs of the general purpose analog computer with the solutions of algebraic differential equations. [ 1 ] However, in contrast to universal Turing machines, UDEs do not dictate the evolution of a system, but rather set out certain conditions that any evolution must fulfill. [ 2 ]
|
https://en.wikipedia.org/wiki/Universal_differential_equation
|
In particle physics , models with universal extra dimensions include one or more spatial dimensions beyond the three spatial and one temporal dimensions that are observed.
Models with universal extra dimensions, first studied in 2001, [ 1 ] assume that all fields propagate universally in the extra dimensions; in contrast, the ADD model requires that the fields of the Standard Model be confined to a four-dimensional membrane, while only gravity propagates in the extra dimensions.
The universal extra dimensions are assumed to be compactified with radii much larger than the traditional Planck length, although smaller than in the ADD model, at roughly 10⁻¹⁸ m or below. [ 2 ] Generically, the (so far unobserved) Kaluza–Klein resonances of the Standard Model fields in such a theory would appear at an energy scale that is directly related to the inverse size ("compactification scale") of the extra dimension,
M KK ≈ R − 1 . {\displaystyle M_{\text{KK}}\approx R^{-1}.}
The experimental bounds (based on Large Hadron Collider data) on the compactification scale of one or two universal extra dimensions are about 1 TeV. [ 3 ] Other bounds come from electroweak precision measurements at the Z pole, the muon's magnetic moment, and limits on flavor-changing neutral currents, and reach several hundred GeV. Using universal extra dimensions to explain dark matter yields an upper limit on the compactification scale of several TeV.
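As a worked example of the relation M KK ≈ R⁻¹, the snippet below converts a compactification scale quoted in TeV into a radius in metres via R = ħc / M KK; the scales chosen are illustrative, not taken from a specific experimental analysis.

```python
# Natural-units conversion R = hbar*c / M_KK, with hbar*c = 197.327 MeV*fm.
HBAR_C_EV_M = 1.97327e-7  # hbar*c expressed in eV * metres

def compactification_radius_m(m_kk_ev: float) -> float:
    """Radius R (in metres) corresponding to a Kaluza-Klein scale given in eV."""
    return HBAR_C_EV_M / m_kk_ev

for scale_tev in (1.0, 3.0):
    r = compactification_radius_m(scale_tev * 1e12)
    print(f"M_KK = {scale_tev:g} TeV  ->  R = {r:.1e} m")
# A 1 TeV bound gives R of order 2e-19 m, below the ~1e-18 m quoted above.
```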
|
https://en.wikipedia.org/wiki/Universal_extra_dimensions
|
A universal flu vaccine would be a flu vaccine effective against all human-adapted strains of influenza A and influenza B , regardless of the virus subtype or any antigenic drift or antigenic shift . [ 1 ] [ 2 ] [ 3 ] Hence it should not require modification from year to year in order to keep up with changes in the influenza virus. As of 2024, no universal flu vaccine had been successfully developed; however, several candidate vaccines were in development, with some undergoing early-stage clinical trials . [ 4 ] [ 5 ]
New vaccines against currently circulating influenza variants are required every year due to the diversity of flu viruses and the variable efficacy of vaccines against them. Here, the efficacy of a vaccine refers to its protection against a broad variety of influenza strains. Events such as antigenic shift have created pandemic strains such as the H1N1 outbreak in 2009 . The work required every year to identify a likely dominant viral strain and create a vaccine against it is a six-month process; during that time the virus can mutate, making the vaccine less effective. [ 6 ]
If a universal vaccine can be developed which is both effective and safe, it could be manufactured in quantity and eliminate availability and supply issues of current vaccines. [ 7 ]
Human influenza is principally caused by the influenza A and influenza B viruses. Both have similar structure, being enveloped RNA viruses . Their protein membrane contains the glycoproteins hemagglutinin (HA) and neuraminidase (NA), which the virus uses to enter a host cell and subsequently to release newly manufactured virions from the host cell. Each strain of the influenza virus has a different pattern of glycoproteins, and the glycoproteins themselves are variable as well. [ 3 ] [ 8 ]
In 2008, Acambis announced work on a universal flu vaccine (ACAM-FLU-A) based on the less variable M2 protein component of the flu virus shell. [ 9 ] See also H5N1 vaccines .
In 2009, the Wistar Institute in Pennsylvania received a patent for using "a variety of peptides" in a flu vaccine, and announced it was seeking a corporate partner. [ 10 ]
In 2010, the National Institute of Allergy and Infectious Diseases (NIAID) of the U.S. NIH announced a breakthrough; the effort targets the stem, which mutates less often than the head of the viral HA. [ 11 ] [ 12 ]
By 2010 some universal flu vaccines had started clinical trials.
DNA vaccines , such as VGX-3400X (aimed at multiple H5N1 strains), contain DNA fragments (plasmids). [ 17 ] [ 18 ] Inovio's SynCon DNA vaccines include H5N1 and H1N1 subtypes. [ 19 ]
Other companies pursuing the vaccine as of 2009 and 2010 include Theraclone, [ 20 ] VaxInnate, [ 21 ] Crucell NV, [ 22 ] Inovio Pharmaceuticals, [ 17 ] Immune Targeting Systems (ITS) [ 23 ] and iQur. [ 24 ]
In 2019, Distributed Bio completed pre-clinical trials of a vaccine that consists of computationally selected distant evolutionary variants of hemagglutinin epitopes and is expected to begin human trials in 2021. [ 25 ]
In recent years, research has concerned use of an antigen for the flu hemagglutinin (HA) stem.
Based on the results of animal studies, a universal flu vaccine may use a two-step vaccination strategy: priming with a DNA-based HA vaccine, followed by a second dose with an inactivated, attenuated, or adenovirus -vector-based vaccine. [ 26 ]
Some people given a 2009 H1N1 flu vaccine have developed broadly protective antibodies, raising hopes for a universal flu vaccine. [ 27 ] [ 28 ] [ 29 ]
A vaccine based on the hemagglutinin (HA) stem was the first to induce "broadly neutralizing" antibodies to both HA-group 1 and HA-group 2 influenza in mice. [ 30 ]
In July 2011, researchers created an antibody , which targets a protein found on the surface of all influenza A viruses called haemagglutinin. [ 31 ] [ 32 ] [ 33 ] FI6 is the only known antibody that binds (its neutralizing activity is controversial) to all 16 subtypes of the influenza A virus hemagglutinin and might be the lynchpin for a universal influenza vaccine. [ 31 ] [ 32 ] [ 33 ] The subdomain of the hemagglutinin that is targeted by FI6, namely the stalk domain, was actually successfully used earlier as universal influenza virus vaccine by Peter Palese's research group at Mount Sinai School of Medicine. [ 34 ]
Other vaccines are polypeptide based. [ 35 ]
A study from the Albert Einstein College of Medicine , in which researchers deleted gD-2, the herpes virus glycoprotein responsible for HSV entering and exiting cells, showed as of May 1, 2018 that the same vaccine could be modified to contain hemagglutinin and invoke a special ADCC immune response. [ 36 ]
The Washington University School of Medicine in St. Louis and the Icahn School of Medicine at Mount Sinai in New York are using the glycoprotein neuraminidase as a targeted antigen in their research. Three monoclonal antibodies (mAb) were sampled from a patient infected with an influenza A H3N2 virus . The antibodies were able to bind to the neuraminidase active site, neutralizing the virus across multiple strains. The active site remains the same, with minimal variability, across most flu strains. In trials using mice, all three antibodies were effective across multiple strains, and one antibody was able to protect the mice from all 12 strains tested, including human and non-human flu viruses. All mice used in the experiments survived even if the antibody was not administered until 72 hours after the time of infection. [ 37 ]
Simultaneously, the NIAID is working on a peptide vaccine that is starting human clinical trials in the 2019 flu season. The study will include 10,000 participants who will be monitored for two flu seasons. The vaccine will demonstrate efficacy if it reduces the number of influenza cases across all strains. [ 38 ]
There have been some clinical trials of the M-001 [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] and H1ssF_3928 universal influenza vaccine candidates. As of August 2020, all seven M-001 trials had been completed. Each of these studies concluded that M-001 is safe, tolerable, and immunogenic. The pivotal Phase III study of M-001, with 12,400 participants, was completed, and results of the data analysis published in October 2020 indicated that the vaccine did not show any statistical difference from the placebo group in reduction of flu illness and severity. [ 44 ] [ 45 ] [ 46 ]
In 2019–2020, a vaccine candidate from Peter Palese 's group at Mount Sinai Hospital emerged from a phase 1 clinical trial with positive results. By vaccinating twice with hemagglutinins that have different "heads" but the same membrane-proximal "stalk", the immune system is directed to focus its attention on the conserved stalk. [ 47 ] [ 48 ]
|
https://en.wikipedia.org/wiki/Universal_flu_vaccine
|
A universal gateway is a device that transacts data between two or more data sources using communication protocols specific to each. Sometimes called a universal protocol gateway, this class of product is designed as a computer appliance, and is used to connect data from one automation system to another.
Typical applications include:
M2M Communications – machine-to-machine communications between machines from different vendors, typically using different communication protocols. This is often a requirement for optimizing the performance of a production line , by effectively communicating machine states upstream and downstream of a piece of equipment. Machine idle times can trigger lower-power operation, and inventory levels can be managed more effectively on a per-station basis by knowing the upstream and downstream demands.
M2E Communications – machine-to-enterprise communications, typically managed through database interactions. In this case, EATM technology is typically leveraged for data interoperability. However, many enterprise systems have real-time data interfaces. When real-time interfaces are involved, a universal gateway, with its ability to support many protocols simultaneously, becomes the best choice.
In all cases, communications can travel over many different transports – RS-232 , RS-485 , Ethernet , etc. Universal gateways can translate between protocols and communicate over different transports simultaneously.
Hardware platform – Industrial Computer, Embedded Computer, Computer Appliance
Communications software – software (drivers) to support one or more industrial protocols. Communications are typically polled or change-based. Great care is typically taken to leverage communication protocols for the most efficient transactions of data (optimized message sizes, communication speeds, and data update rates). Typical protocols include Rockwell Automation CIP, EtherNet/IP, Siemens Industrial Ethernet, and Modbus TCP. There are hundreds of automation device protocols, and universal gateway solutions typically target certain market segments and are based on automation vendor relationships.
Bridging software – linking software for connecting data from one device to data in another, one being the source of data and one being the destination. Typically data is transferred on data change, on a time basis, or based on process conditions (run, stop, etc.), as in the sketch below.
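A minimal sketch of the on-change bridging idea follows. The protocol drivers are stubbed out with a hypothetical read/write interface; a real gateway would substitute drivers speaking, for example, Modbus TCP or EtherNet/IP, and the tag names are invented for illustration.

```python
import time

class StubDriver:
    """Stand-in for a protocol driver (hypothetical interface)."""
    def __init__(self):
        self.registers = {}
    def read(self, tag):
        return self.registers.get(tag)
    def write(self, tag, value):
        self.registers[tag] = value

def bridge_on_change(source, dest, src_tag, dst_tag, period_s=0.5, cycles=5):
    """Poll src_tag on the source and copy it to the destination on change."""
    last = object()  # sentinel so the first read always transfers
    for _ in range(cycles):
        value = source.read(src_tag)
        if value != last:
            dest.write(dst_tag, value)
            last = value
        time.sleep(period_s)

plc_a, plc_b = StubDriver(), StubDriver()
plc_a.write("machine_state", "RUNNING")
bridge_on_change(plc_a, plc_b, "machine_state", "line_status",
                 period_s=0.0, cycles=3)
print(plc_b.read("line_status"))  # -> RUNNING
```

Time-based transfer is the same loop without the change check; condition-based transfer adds a predicate on other tags (for example, only bridge while a "run" flag is set).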
A universal gateway will typically offer all protocols on a computer appliance, for the benefit of the process engineer, giving them the opportunity to pick and choose one or more protocols, and to change them over time as the application's needs demand. Protocol converters are typically designed with a single purpose, to convert protocol X to Y, and do not offer the level of configurability and flexibility of a universal gateway.
Special classes of universal gateway address special needs. The Smart Grid is now prompting a new class of application in which plant-floor equipment is tied to electric utilities for the purpose of demand-and-response control over power use. There is a wide variety of "Smart Grid" protocols that need to be connected to automation protocols via bridging software. These universal gateways typically support both wired and wireless connectivity.
|
https://en.wikipedia.org/wiki/Universal_gateway
|
A universal indicator is a pH indicator made of a solution of several compounds that exhibits smooth colour changes over a wide range of pH values to indicate the acidity or alkalinity of solutions. A universal indicator can be in paper form or in the form of a solution. [ 1 ]
Although there are several commercially available universal pH indicators, most are a variation of a formula patented by Yamada in 1933. [ 2 ] [ 3 ] [ 4 ]
A universal indicator is usually composed of water , 1-propanol , phenolphthalein , sodium hydroxide , methyl red , bromothymol blue , sodium bisulfite , and thymol blue . [ 5 ] After a universal indicator is added, the colour of the solution indicates its pH: colours from yellow to red indicate an acidic solution, colours from blue to violet indicate an alkaline solution, and a green colour indicates that the solution is neutral.
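As a rough illustration, the colour bands commonly quoted for a Yamada-type indicator can be encoded as a lookup. The band edges and hue labels below are indicative only; exact colours vary between commercial formulations.

```python
def indicator_colour(ph: float) -> str:
    """Approximate colour of a Yamada-type universal indicator at a given pH."""
    if ph < 3:
        return "red (strongly acidic)"
    elif ph < 6.5:
        return "orange/yellow (weakly acidic)"
    elif ph <= 7.5:
        return "green (neutral)"
    elif ph <= 11:
        return "blue (weakly alkaline)"
    else:
        return "violet (strongly alkaline)"

for ph in (1.0, 5.0, 7.0, 9.0, 13.0):
    print(f"pH {ph:>4} -> {indicator_colour(ph)}")
```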
Wide-range pH test papers with distinct colours for each pH from 1 to 14 are also available. Colour matching charts are supplied with the specific test strips purchased.
The impact of an ethanol-based universal indicator may seem negligible at first glance. However, in the case of dilute solutions prepared with bidistilled water, this influence becomes readily discernible and measurable. [ 7 ]
|
https://en.wikipedia.org/wiki/Universal_indicator
|
A universal integration platform is a development- and/or configuration-time analog of a universal server. The emphasis on the term " platform " implies a middleware environment from which integration-oriented solutions are derived. Likewise, the term "universal" implies depth and breadth of integration capabilities that transcend disparate operating systems , protocols , APIs , data sources, programming languages , composite processes , discrete services , and monolithic applications .
|
https://en.wikipedia.org/wiki/Universal_integration_platform
|
The universal parabolic constant is a mathematical constant .
It is defined as the ratio, for any parabola , of the arc length of the parabolic segment formed by the latus rectum to the focal parameter . The focal parameter is twice the focal length . The ratio is denoted P . [ 1 ] [ 2 ] [ 3 ]
The value of P is ln ( 1 + 2 ) + 2 {\displaystyle \ln(1+{\sqrt {2}})+{\sqrt {2}}} ≈ 2.2955871... [ 4 ]
(sequence A103710 in the OEIS ). The circle and parabola are unique among conic sections in that they have a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities . This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Take y = x 2 4 f {\textstyle y={\frac {x^{2}}{4f}}} as the equation of the parabola. The focal parameter is p = 2 f {\displaystyle p=2f} and the semilatus rectum is ℓ = 2 f {\displaystyle \ell =2f} . P := 1 p ∫ − ℓ ℓ 1 + ( y ′ ( x ) ) 2 d x = 1 2 f ∫ − 2 f 2 f 1 + x 2 4 f 2 d x = ∫ − 1 1 1 + t 2 d t ( x = 2 f t ) = arsinh ( 1 ) + 2 = ln ( 1 + 2 ) + 2 . {\displaystyle {\begin{aligned}P&:={\frac {1}{p}}\int _{-\ell }^{\ell }{\sqrt {1+\left(y'(x)\right)^{2}}}\,dx\\&={\frac {1}{2f}}\int _{-2f}^{2f}{\sqrt {1+{\frac {x^{2}}{4f^{2}}}}}\,dx\\&=\int _{-1}^{1}{\sqrt {1+t^{2}}}\,dt&(x=2ft)\\&=\operatorname {arsinh} (1)+{\sqrt {2}}\\&=\ln(1+{\sqrt {2}})+{\sqrt {2}}.\end{aligned}}}
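The closed form can be checked numerically. The snippet below compares arsinh(1) + √2 against a midpoint-rule evaluation of the arc-length integral; the step count is an arbitrary choice.

```python
import math

P_closed = math.asinh(1.0) + math.sqrt(2.0)  # = ln(1 + sqrt(2)) + sqrt(2)

# Midpoint rule for the integral of sqrt(1 + t^2) over [-1, 1].
n = 100_000
h = 2.0 / n
P_numeric = h * sum(math.sqrt(1.0 + (-1.0 + (k + 0.5) * h) ** 2)
                    for k in range(n))

print(P_closed)                   # 2.2955871493926383
print(abs(P_closed - P_numeric))  # ~1e-10: the two values agree
```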
P is a transcendental number .
Since P is transcendental, it is also irrational .
The average distance from a point randomly selected in the unit square to its center is P 6 {\displaystyle {P \over 6}} . [ 5 ]
There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between the center of a unit square and a point on the square's boundary is P 4 {\displaystyle {P \over 4}} .
If every point on the perimeter of the square is sampled uniformly, and the line segments drawn from the center to each point are laid side by side (suitably scaled), the curve obtained is a parabola. [ 6 ]
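Both averages can be verified by simulation. In this sketch (sample size an arbitrary choice), the interior estimate should approach P/6 ≈ 0.3826 and the boundary estimate P/4 ≈ 0.5739.

```python
import math, random

random.seed(0)
P = math.sqrt(2) + math.log(1 + math.sqrt(2))
N = 1_000_000

# Distance from a uniform random point in the unit square to its centre.
d_in = sum(math.hypot(random.random() - 0.5, random.random() - 0.5)
           for _ in range(N)) / N

# Distance from the centre to a uniform point on the boundary; by symmetry
# it suffices to sample one edge, here y = 0.5 with x uniform on [-0.5, 0.5].
d_edge = sum(math.hypot(random.random() - 0.5, 0.5) for _ in range(N)) / N

print(d_in, P / 6)    # both ~ 0.3826
print(d_edge, P / 4)  # both ~ 0.5739
```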
|
https://en.wikipedia.org/wiki/Universal_parabolic_constant
|
Universal Powerline Bus ( UPB ) is a proprietary software protocol developed by Powerline Control Systems [ 1 ] for power-line communication between devices used for home automation . Household electrical wiring is used to send digital data between UPB devices via pulse-position modulation . [ 2 ]
Communication is peer to peer , with no central controller necessary. [ 3 ]
UPB addressing allows 250 devices per house and 250 houses per transformer, for a total of 62,500 device addresses, and can co-exist with other powerline carrier systems within the same home. [ 4 ] [ 5 ]
As of 2018, UPB enjoyed one of the broadest ranges of device types compared with most protocols, and had support from some major manufacturers in the home automation space, most notably Leviton with its Omni series of home automation products and the UPB devices it markets. UPB is also supported by many major home automation software manufacturers, a few of which are listed below.
UPB is a highly reliable protocol [ 6 ] for home automation. It is not susceptible to RF interference, signal blockage by walls, or the short-distance broadcast issues of some wireless protocols. UPB transmits on the building's existing wiring and has extensive noise-reduction circuitry. This allows it to traverse long distances without issues, even across multiple electrical panels, making it ideal for very large homes. Appliances that have traditionally plagued X10 devices usually do not affect UPB. In fact, UPB signals can reliably be received by the target device even with significant amounts of electrical noise on the power lines. However, in the event that an appliance in the home causes extreme interference when operating, an inexpensive wire-in noise filter can be applied at the circuit breaker panel to solve the issue.
As of 2020, control of UPB devices is supported by the Home Assistant open source software (in version 0.110 and later). [ 7 ]
As of 2017, control of UPB devices is supported by the OpenHAB open source software. [ 8 ]
HomeSeer is a well-known commercial home automation software package that has support for UPB.
Mobile app support (iOS and Android) is available by using the PulseWorx Gateway (PGW) plug-in module.
Voice recognition products such as Alexa , Automated Living's HAL and Google's Assistant are supported either directly or indirectly through a device or automation controller.
UPB can coexist with other powerline technologies. It can also interoperate with other automation devices that use RF (for example) through the use of a multi-protocol automation controller (see Leviton Omni and HomeSeer devices). This allows a mixed-technology automation system to combine best-in-class devices from many manufacturers. However, unlike most wireless protocols, UPB does not require an automation controller or hub to operate.
Since UPB is a peer-to-peer protocol, individual switches, scene controllers and various types of plug-in modules can be individually programmed to perform multiple tasks without the need to purchase a hub or controller. Examples of actions that can be achieved without a hub or controller include: timed shutoff of a bathroom fan (timer plug-in module or a switch with a built-in timer feature); lights turning on or off based on a photocell's sensing of sunlight (I/O plug-in module); turning on one set of lights with a single tap of a switch and another set of lights or devices with a double tap (dimmer switch); turning a hot tub on or off (load controller switch); multiple preset light dimming settings (scene controller switch); and turning a motorized device on or off (relay switch).
Scene controllers with built-in infrared (IR) sensors are available. This allows a single programmable remote control ( universal remote ), like those made by Logitech, to control both lighting and television or other media devices.
Several manufacturers produce UPB devices.
|
https://en.wikipedia.org/wiki/Universal_powerline_bus
|
In set theory , a universal set is a set which contains all objects, including itself. [ 1 ] In set theory as usually formulated, it can be proven in multiple ways that a universal set does not exist. However, some non-standard variants of set theory include a universal set.
Many set theories do not allow for the existence of a universal set. There are several different arguments for its non-existence, based on different choices of axioms for set theory.
Russell's paradox concerns the impossibility of a set of sets whose members are all sets that do not contain themselves. If such a set could exist, it could neither contain itself (because its members all do not contain themselves) nor avoid containing itself (because if it did not contain itself, it would, by definition, have to be included as one of its members). [ 2 ] This paradox prevents the existence of a universal set in set theories that include either Zermelo 's axiom of restricted comprehension , or the axiom of regularity and axiom of pairing .
In Zermelo–Fraenkel set theory , the axiom of regularity and axiom of pairing prevent any set from containing itself. For any set A {\displaystyle A} , the set { A } {\displaystyle \{A\}} (constructed using pairing) necessarily contains an element disjoint from { A } {\displaystyle \{A\}} , by regularity. Because its only element is A {\displaystyle A} , it must be the case that A {\displaystyle A} is disjoint from { A } {\displaystyle \{A\}} , and therefore that A {\displaystyle A} does not contain itself. Because a universal set would necessarily contain itself, it cannot exist under these axioms. [ 3 ]
Russell's paradox prevents the existence of a universal set in set theories that include Zermelo 's axiom of restricted comprehension .
This axiom states that, for any formula φ ( x ) {\displaystyle \varphi (x)} and any set A {\displaystyle A} , there exists a set { x ∈ A ∣ φ ( x ) } {\displaystyle \{x\in A\mid \varphi (x)\}} that contains exactly those elements x {\displaystyle x} of A {\displaystyle A} that satisfy φ {\displaystyle \varphi } . [ 2 ]
If this axiom could be applied to a universal set A {\displaystyle A} , with φ ( x ) {\displaystyle \varphi (x)} defined as the predicate x ∉ x {\displaystyle x\notin x} , it would state the existence of Russell's paradoxical set, giving a contradiction.
It was this contradiction that led the axiom of comprehension to be stated in its restricted form, where it asserts the existence of a subset of a given set rather than the existence of a set of all sets that satisfy a given formula. [ 2 ]
When the axiom of restricted comprehension is applied to an arbitrary set A {\displaystyle A} , with the predicate φ ( x ) ≡ x ∉ x {\displaystyle \varphi (x)\equiv x\notin x} , it produces the subset of elements of A {\displaystyle A} that do not contain themselves. It cannot be a member of A {\displaystyle A} , because if it were it would be included as a member of itself, by its definition, contradicting the fact that it cannot contain itself. In this way, it is possible to construct a witness to the non-universality of A {\displaystyle A} , even in versions of set theory that allow sets to contain themselves. This indeed holds even with predicative comprehension and over intuitionistic logic .
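The argument of the preceding paragraphs can be compressed into one display; this simply restates the source's reasoning in symbols, writing R_A for the set produced by restricted comprehension:

```latex
R_A := \{\, x \in A \mid x \notin x \,\}
\quad\Longrightarrow\quad
\bigl( R_A \in R_A \iff R_A \in A \,\wedge\, R_A \notin R_A \bigr)
\quad\Longrightarrow\quad
R_A \notin A .
```

If R_A belonged to A, the middle equivalence would collapse to R_A ∈ R_A ⟺ R_A ∉ R_A, a contradiction; so R_A is the promised witness that A is not universal.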
Another difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would necessarily be a subset of the set of all sets, provided that both exist. However, this conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself.
The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set.
There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and V ∈ V {\displaystyle V\in V} is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory .
The most widely studied set theory with a universal set is Willard Van Orman Quine 's New Foundations . Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine's, [ 4 ] but this is not possible for Oberschelp's, since in it the singleton function is provably a set, [ 5 ] which leads immediately to paradox in New Foundations. [ 6 ]
Another example is positive set theory , where the axiom of comprehension is restricted to hold only for the positive formulas (formulas that do not contain negations). Such set theories are motivated by notions of closure in topology.
The idea of a universal set seems intuitively desirable in Zermelo–Fraenkel set theory , particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier ). One way of allowing an object that behaves similarly to a universal set, without creating paradoxes, is to describe V and similar large collections as proper classes rather than as sets. Russell's paradox does not apply in these theories because the axiom of comprehension operates on sets, not on classes.
The category of sets can also be considered to be a universal object that is, again, not itself a set. It has all sets as elements, and also includes arrows for all functions from one set to another. It does not contain itself, because it is not itself a set.
|
https://en.wikipedia.org/wiki/Universal_set
|
In mathematics , a universal space is a certain metric space into which all metric spaces whose dimension is bounded by some fixed constant can be embedded. A similar definition exists in topological dynamics .
Given a class C {\displaystyle \textstyle {\mathcal {C}}} of topological spaces, U ∈ C {\displaystyle \textstyle \mathbb {U} \in {\mathcal {C}}} is universal for C {\displaystyle \textstyle {\mathcal {C}}} if each member of C {\displaystyle \textstyle {\mathcal {C}}} embeds in U {\displaystyle \textstyle \mathbb {U} } . Menger stated and proved the case d = 1 {\displaystyle \textstyle d=1} of the following theorem. The theorem in full generality was proven by Nöbeling.
Theorem: [ 1 ] The ( 2 d + 1 ) {\displaystyle \textstyle (2d+1)} -dimensional cube [ 0 , 1 ] 2 d + 1 {\displaystyle \textstyle [0,1]^{2d+1}} is universal for the class of compact metric spaces whose Lebesgue covering dimension is less than d {\displaystyle \textstyle d} .
Nöbeling went further and proved:
Theorem: The subspace of [ 0 , 1 ] 2 d + 1 {\displaystyle \textstyle [0,1]^{2d+1}} consisting of the set of points at most d {\displaystyle \textstyle d} of whose coordinates are rational is universal for the class of separable metric spaces whose Lebesgue covering dimension is less than d {\displaystyle \textstyle d} .
The last theorem was generalized by Lipscomb to the class of metric spaces of weight α {\displaystyle \textstyle \alpha } , α > ℵ 0 {\displaystyle \textstyle \alpha >\aleph _{0}} : There exists a one-dimensional metric space J α {\displaystyle \textstyle J_{\alpha }} such that the subspace of J α 2 d + 1 {\displaystyle \textstyle J_{\alpha }^{2d+1}} consisting of the set of points at most d {\displaystyle \textstyle d} of whose coordinates are "rational" (suitably defined) is universal for the class of metric spaces whose Lebesgue covering dimension is less than d {\displaystyle \textstyle d} and whose weight is less than α {\displaystyle \textstyle \alpha } . [ 2 ]
Consider the category of topological dynamical systems ( X , T ) {\displaystyle \textstyle (X,T)} consisting of a compact metric space X {\displaystyle \textstyle X} and a homeomorphism T : X → X {\displaystyle \textstyle T:X\rightarrow X} . The topological dynamical system ( X , T ) {\displaystyle \textstyle (X,T)} is called minimal if it has no proper non-empty closed T {\displaystyle \textstyle T} -invariant subsets. It is called infinite if | X | = ∞ {\displaystyle \textstyle |X|=\infty } . A topological dynamical system ( Y , S ) {\displaystyle \textstyle (Y,S)} is called a factor of ( X , T ) {\displaystyle \textstyle (X,T)} if there exists a continuous surjective mapping φ : X → Y {\displaystyle \textstyle \varphi :X\rightarrow Y} which is equivariant , i.e. φ ( T x ) = S φ ( x ) {\displaystyle \textstyle \varphi (Tx)=S\varphi (x)} for all x ∈ X {\displaystyle \textstyle x\in X} .
Similarly to the definition above, given a class C {\displaystyle \textstyle {\mathcal {C}}} of topological dynamical systems, U ∈ C {\displaystyle \textstyle \mathbb {U} \in {\mathcal {C}}} is universal for C {\displaystyle \textstyle {\mathcal {C}}} if each member of C {\displaystyle \textstyle {\mathcal {C}}} embeds in U {\displaystyle \textstyle \mathbb {U} } through an equivariant continuous mapping. Lindenstrauss proved the following theorem:
Theorem: [ 3 ] Let d ∈ N {\displaystyle \textstyle d\in \mathbb {N} } . The compact metric topological dynamical system ( X , T ) {\displaystyle \textstyle (X,T)} , where X = ( [ 0 , 1 ] d ) Z {\displaystyle \textstyle X=([0,1]^{d})^{\mathbb {Z} }} and T : X → X {\displaystyle \textstyle T:X\rightarrow X} is the shift homeomorphism ( … , x − 2 , x − 1 , x 0 , x 1 , x 2 , … ) → ( … , x − 1 , x 0 , x 1 , x 2 , x 3 , … ) {\displaystyle \textstyle (\ldots ,x_{-2},x_{-1},\mathbf {x_{0}} ,x_{1},x_{2},\ldots )\rightarrow (\ldots ,x_{-1},x_{0},\mathbf {x_{1}} ,x_{2},x_{3},\ldots )} , is universal for the class of compact metric topological dynamical systems whose mean dimension is strictly less than d 36 {\displaystyle \textstyle {\frac {d}{36}}} and which possess an infinite minimal factor.
In the same article Lindenstrauss asked what is the largest constant c {\displaystyle \textstyle c} such that a compact metric topological dynamical system whose mean dimension is strictly less than c d {\displaystyle \textstyle cd} and which possesses an infinite minimal factor embeds into ( [ 0 , 1 ] d ) Z {\displaystyle \textstyle ([0,1]^{d})^{\mathbb {Z} }} . The results above imply c ≥ 1 36 {\displaystyle \textstyle c\geq {\frac {1}{36}}} . The question was answered by Lindenstrauss and Tsukamoto, [ 4 ] who showed that c ≤ 1 2 {\displaystyle \textstyle c\leq {\frac {1}{2}}} , and by Gutman and Tsukamoto, [ 5 ] who showed that c ≥ 1 2 {\displaystyle \textstyle c\geq {\frac {1}{2}}} . Thus the answer is c = 1 2 {\displaystyle \textstyle c={\frac {1}{2}}} .
|
https://en.wikipedia.org/wiki/Universal_space
|
A universal testing machine ( UTM ), also known as a universal tester , [ 1 ] universal tensile machine , materials testing machine or materials test frame , is used to test the tensile strength (pulling), compressive strength (pushing), flexural strength, bending, shear, hardness, and torsion of materials, providing valuable data for designing and ensuring the quality of materials . An earlier name for a tensile testing machine is a tensometer . The "universal" part of the name reflects that it can perform many standard test applications on materials, components, and structures (in other words, that it is versatile).
An electromechanical UTM utilizes an electric motor to apply a controlled force, while a hydraulic UTM uses hydraulic systems for force application. Electromechanical UTMs are favored for their precision, speed, and ease of use, making them suitable for a wide range of applications, including tensile, compression, and flexural testing.
On the other hand, hydraulic UTMs are capable of generating higher forces and are often used for testing high-strength materials such as metals and alloys, where extreme force applications are required. Both types of UTMs play critical roles in various industries including aerospace, automotive, construction, and materials science, enabling engineers and researchers to accurately assess the mechanical properties of materials for design, quality control, and research purposes.
Several variations are in use. [ 2 ] Common components include the load frame, load cell, crosshead, a means of measuring extension or deformation (such as an extensometer), an output device, environmental conditioning, and test fixtures such as grips.
The set-up and usage are detailed in a test method , often published by a standards organization . This specifies the sample preparation, fixturing, gauge length (the length which is under study or observation), analysis, etc.
The specimen is placed in the machine between the grips, and an extensometer, if fitted, can automatically record the change in gauge length during the test. If an extensometer is not fitted, the machine itself can record the displacement between its crossheads on which the specimen is held. However, this method not only records the change in length of the specimen but also any other extending or elastic components of the testing machine and its drive systems, including any slipping of the specimen in the grips.
Once the machine is started, it begins to apply an increasing load on the specimen. Throughout the test, the control system and its associated software record the load and the extension or compression of the specimen (see the sketch below for a typical data reduction).
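A minimal sketch of that data reduction for a tension test follows. The specimen geometry, loads and extensions are invented for illustration and are not from any standard: engineering stress and strain are computed from load and extension, and a modulus is fitted to the initial linear region.

```python
import numpy as np

area_mm2 = 78.5   # cross-section of a hypothetical 10 mm round bar
gauge_mm = 50.0   # extensometer gauge length

load_kN = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
ext_mm  = np.array([0.0, 0.013, 0.025, 0.038, 0.051])

stress_MPa = load_kN * 1e3 / area_mm2  # N/mm^2 == MPa
strain = ext_mm / gauge_mm             # dimensionless

# Young's modulus = slope of the elastic (linear) region.
E_MPa = np.polyfit(strain, stress_MPa, 1)[0]
print(f"E = {E_MPa / 1e3:.0f} GPa")    # ~200 GPa for these steel-like numbers
```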
Machines range from very small table top systems to ones with over 53 MN (12 million lbf ) capacity. [ 3 ] [ 4 ]
|
https://en.wikipedia.org/wiki/Universal_testing_machine
|
Universality probability is an abstruse probability measure in computational complexity theory that concerns universal Turing machines .
A Turing machine is a basic model of computation . Some Turing machines might be specific to doing particular calculations. For example, a Turing machine might take input which comprises two numbers and then produce output which is the product of their multiplication . Another Turing machine might take input which is a list of numbers and then give output which is those numbers sorted in order.
A Turing machine which has the ability to simulate any other Turing machine is called universal. In other words, a Turing machine (TM) is said to be a universal Turing machine (UTM) if, given any other TM, there is some input (or "header") such that the first TM, given that input "header", will forever after behave like the second TM.
An interesting mathematical and philosophical question then arises. If a universal Turing machine is given random input (for suitable definition of random ), how probable is it that it remains universal forever?
Given a prefix-free Turing machine , its universality probability is the probability that it remains universal even when every input of it (as a binary string ) is prefixed by a random binary string. More formally, it is the probability measure of reals (infinite binary sequences) which have the property that every initial segment of them preserves the universality of the given Turing machine. This notion was introduced by the computer scientist Chris Wallace and was first explicitly discussed in print in an article by Dowe [ 1 ] (and a subsequent article [ 2 ] ). However, relevant discussions also appear in an earlier article by Wallace and Dowe. [ 3 ]
Although the universality probability of a UTM was originally suspected to be zero, relatively simple proofs exist that the supremum of the set of universality probabilities is equal to 1, such as a proof based on random walks [ 4 ] and a proof in Barmpalias and Dowe (2012).
Once one has one prefix-free UTM with a non-zero universality probability, it immediately follows that all prefix-free UTMs have non-zero universality probability.
Further, because the supremum of the set of universality probabilities is 1 and because the set { m / 2 n | 0 < n and 0 < m < 2 n } is dense in the interval [0, 1], suitable constructions of UTMs (e.g., if U is a UTM, define a UTM U 2 by U 2 (0 s ) halts for all strings s , and U 2 (1 s ) = U ( s ) for all s ) give that the set of universality probabilities is dense in the open interval (0, 1).
Universality probability was thoroughly studied and characterized by Barmpalias and Dowe in 2012. [ 5 ] Seen as real numbers , these probabilities were completely characterized in terms of notions in computability theory and algorithmic information theory .
It was shown that when the underlying machine is universal, these numbers are highly algorithmically random . More specifically, each is Martin-Löf random relative to the third iteration of the halting problem . In other words, they are random relative to null sets that can be defined with four quantifiers in Peano arithmetic . Conversely, given such a highly random number (with appropriate approximation properties), there is a Turing machine with universality probability equal to that number.
Universality probabilities are closely related to the Chaitin constant , which is the halting probability of a universal prefix-free machine. In a sense, they are complementary to the halting probabilities of universal machines relative to the third iteration of the halting problem . In particular, the universality probability can be seen as the non-halting probability of a machine with oracle the third iteration of the halting problem. Conversely, the non-halting probability of any prefix-free machine with this highly non-computable oracle is the universality probability of some prefix-free machine.
Universality probability provides a concrete and somewhat natural example of a highly random number (in the sense of algorithmic information theory ). In the same sense, Chaitin's constant provides a concrete example of a random number (but for a much weaker notion of algorithmic randomness).
|
https://en.wikipedia.org/wiki/Universality_probability
|
The universality–diversity paradigm (UDP) is the analysis of biological materials based on the universality and diversity of their fundamental structural elements and functional mechanisms. The analysis of biological systems based on this classification has been a cornerstone of modern biology .
For example, proteins constitute the elementary building blocks of a vast variety of biological materials such as cells , spider silk or bone, where they create extremely robust, multi-functional materials by self-organization of structures over many length- and time scales , from nano to macro. Some of the structural features are commonly found in many different tissues, that is, they are highly conserved . Examples of such universal building blocks include alpha-helices , beta-sheets or tropocollagen molecules. In contrast, other features are highly specific to tissue types, such as particular filament assemblies, beta-sheet nanocrystals in spider silk or tendon fascicles. This coexistence of universality and diversity is an overarching feature in biological materials and a crucial component of materiomics . It might provide guidelines for bioinspired and biomimetic material development , where this concept is translated into the use of inorganic or hybrid organic-inorganic building blocks.
|
https://en.wikipedia.org/wiki/Universality–diversity_paradigm
|
The UniverseMachine (also known as the Universe Machine ) is a project carrying out astrophysical supercomputer simulations of various models of possible universes , created by astronomer Peter Behroozi and his research team at the Steward Observatory and the University of Arizona . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Numerous universes with different physical characteristics may be simulated in order to develop insights into the possible beginning and evolution of our universe. A major objective is to better understand the role of dark matter in the development of the universe. [ 4 ] [ 6 ] According to Behroozi, "On the computer, we can create many different universes and compare them to the actual one, and that lets us infer which rules lead to the one we see." [ 1 ]
Besides lead investigator Behroozi, research team members include astronomer Charlie Conroy of Harvard University , physicist Andrew Hearin of the Argonne National Laboratory and physicist Risa Wechsler of Stanford University . Support funding for the project is provided by NASA , the National Science Foundation and the Munich Institute for Astro- and Particle Physics. [ 1 ]
Besides using computers and related resources at the NASA Ames Research Center and the Leibniz-Rechenzentrum in Garching, Germany , the research team used the High-Performance Computing cluster at the University of Arizona . Two thousand processors simultaneously processed the data over three weeks. In this way, the research team generated over 8 million universes and at least 9.6 × 10¹³ galaxies. [ 3 ] [ 5 ] The UniverseMachine program continuously produced millions of simulated universes, each containing 12 million galaxies, and each permitted to develop from 400 million years after the Big Bang to the present day. [ 1 ] [ 4 ]
According to team member Wechsler, "The really cool thing about this study is that we can use all the data we have about galaxy evolution — the numbers of galaxies , how many stars they have and how they form those stars — and put that together into a comprehensive picture of the last 13 billion years of the universe." [ 4 ] Wechsler further commented, "For me, the most exciting thing is that we now have a model where we can start to ask all of these questions in a framework that works […] We have a model that is inexpensive enough computationally, that we can essentially calculate an entire universe in about a second. Then we can afford to do that millions of times and explore all of the parameter space." [ 4 ]
One of the results of the study suggests that denser dark matter in the early universe does not seem to negatively impact star formation rates, as thought initially. According to the studies, galaxies of a given size were more likely to form stars for much longer, and at a high rate. [ 6 ] The researchers expect to extend the project's objectives to include how often stars expire in supernovae , how dark matter may affect the shape of galaxies [ 6 ] and eventually, by gaining better general cosmological insights, how life originated . [ 5 ]
|
https://en.wikipedia.org/wiki/UniverseMachine
|
In mathematics , and particularly in set theory , category theory , type theory , and the foundations of mathematics , a universe is a collection that contains all the entities one wishes to consider in a given situation.
In set theory, universes are often classes that contain (as elements ) all sets for which one hopes to prove a particular theorem . These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory . Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set , the category of all sets, which cannot be formalized in a set theory without some notion of a universe.
In type theory, a universe is a type whose elements are types.
Perhaps the simplest version is that any set can be a universe, so long as the object of study is confined to that particular set. If the object of study is the real numbers , then the real line R , which is the real number set, could be the universe under consideration. Implicitly, this is the universe that Georg Cantor was using when he first developed modern naive set theory and cardinality in the 1870s and 1880s in applications to real analysis . The only sets that Cantor was originally interested in were subsets of R .
This concept of a universe is reflected in the use of Venn diagrams . In a Venn diagram, the action traditionally takes place inside a large rectangle that represents the universe U . One generally says that sets are represented by circles; but these sets can only be subsets of U . The complement of a set A is then given by that portion of the rectangle outside of A 's circle. Strictly speaking, this is the relative complement U \ A of A relative to U ; but in a context where U is the universe, it can be regarded as the absolute complement A C of A . Similarly, there is a notion of the nullary intersection , that is the intersection of zero sets (meaning no sets, not null sets ).
Without a universe, the nullary intersection would be the set of absolutely everything, which is generally regarded as impossible; but with the universe in mind, the nullary intersection can be treated as the set of everything under consideration, which is simply U . These conventions are quite useful in the algebraic approach to basic set theory, based on Boolean lattices . Except in some non-standard forms of axiomatic set theory (such as New Foundations ), the class of all sets is not a Boolean lattice (it is only a relatively complemented lattice ).
In contrast, the class of all subsets of U , called the power set of U , is a Boolean lattice. The absolute complement described above is the complement operation in the Boolean lattice; and U , as the nullary intersection, serves as the top element (or nullary meet ) in the Boolean lattice. Then De Morgan's laws , which deal with complements of meets and joins (which are unions in set theory) apply, and apply even to the nullary meet and the nullary join (which is the empty set ).
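These conventions are easy to make concrete. The following sketch uses an arbitrary ten-element universe to check the absolute complement and De Morgan's laws with Python sets, including the nullary cases described above.

```python
# U is the universe; complements are taken relative to U.
U = set(range(10))
A = {1, 2, 3}
B = {3, 4, 5}

def comp(s):
    """Absolute complement within the universe U."""
    return U - s

# De Morgan's laws for complements of joins (unions) and meets (intersections):
assert comp(A | B) == comp(A) & comp(B)
assert comp(A & B) == comp(A) | comp(B)

# The nullary join is the empty set and the nullary meet is U itself;
# De Morgan still pairs them up:
assert comp(set()) == U
assert comp(U) == set()
print("all identities hold")
```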
However, once subsets of a given set X (in Cantor's case, X = R ) are considered, the universe may need to be a set of subsets of X . (For example, a topology on X is a set of subsets of X .) The various sets of subsets of X will not themselves be subsets of X but will instead be subsets of P X , the power set of X . This may be continued; the object of study may next consist of such sets of subsets of X , and so on, in which case the universe will be P ( P X ). In another direction, the binary relations on X (subsets of the Cartesian product X × X ) may be considered, or functions from X to itself, requiring universes like P ( X × X ) or X X .
Thus, even if the primary interest is X , the universe may need to be considerably larger than X . Following the above ideas, one may want the superstructure over X as the universe. This can be defined by structural recursion as follows: let S 0 X be X itself, and for each n let S n+1 X be the union of S n X and its power set P ( S n X ).
Then the superstructure over X , written S X , is the union of S 0 X , S 1 X , S 2 X , and so on; that is, S X = S 0 X ∪ S 1 X ∪ S 2 X ∪ ⋯ .
No matter what set X is the starting point, the empty set {} will belong to S 1 X . The empty set is the von Neumann ordinal [0].
Then {[0]}, the set whose only element is the empty set, will belong to S 2 X ; this is the von Neumann ordinal [1]. Similarly, {[1]} will belong to S 3 X , and thus so will {[0],[1]}, as the union of {[0]} and {[1]}; this is the von Neumann ordinal [2]. Continuing this process, every natural number is represented in the superstructure by its von Neumann ordinal. Next, if x and y belong to the superstructure, then so does {{ x },{ x , y }}, which represents the ordered pair ( x , y ). Thus the superstructure will contain the various desired Cartesian products. Then the superstructure also contains functions and relations , since these may be represented as subsets of Cartesian products. The process also gives ordered n -tuples, represented as functions whose domain is the von Neumann ordinal [ n ], and so on.
So if the starting point is just X = {}, a great many of the sets needed for mathematics appear as elements of the superstructure over {}. But each of the elements of S {} will be a finite set . Each of the natural numbers belongs to it, but the set N of all natural numbers does not (although it is a subset of S {}). In fact, the superstructure over {} consists of all of the hereditarily finite sets . As such, it can be considered the universe of finitist mathematics . Speaking anachronistically, one could suggest that the 19th-century finitist Leopold Kronecker was working in this universe; he believed that each natural number existed but that the set N (a " completed infinity ") did not.
However, S {} is unsatisfactory for ordinary mathematicians (who are not finitists), because even though N may be available as a subset of S {}, still the power set of N is not. In particular, arbitrary sets of real numbers are not available. So it may be necessary to start the process all over again and form S ( S {}). However, to keep things simple, one can take the set N of natural numbers as given and form SN , the superstructure over N . This is often considered the universe of ordinary mathematics . The idea is that all of the mathematics that is ordinarily studied refers to elements of this universe. For example, any of the usual constructions of the real numbers (say by Dedekind cuts ) belongs to SN . Even non-standard analysis can be done in the superstructure over a non-standard model of the natural numbers.
There is a slight shift in philosophy from the previous section, where the universe was any set U of interest. There, the sets being studied were subsets of the universe; now, they are members of the universe. Thus although P ( S X ) is a Boolean lattice, what is relevant is that S X itself is not. Consequently, it is rare to apply the notions of Boolean lattices and Venn diagrams directly to the superstructure universe as they were applied to the power-set universes of the previous section. Instead, one can work with the individual Boolean lattices P A , where A is any relevant set belonging to S X ; then P A is a subset of S X (and in fact belongs to S X ). In Cantor's case X = R in particular, arbitrary sets of real numbers are not available, so there it may indeed be necessary to start the process all over again.
It is possible to give a precise meaning to the claim that SN is the universe of ordinary mathematics; it is a model of Zermelo set theory , the axiomatic set theory originally developed by Ernst Zermelo in 1908. Zermelo set theory was successful precisely because it was capable of axiomatising "ordinary" mathematics, fulfilling the programme begun by Cantor over 30 years earlier. But Zermelo set theory proved insufficient for the further development of axiomatic set theory and other work in the foundations of mathematics , especially model theory .
For a dramatic example, the description of the superstructure process above cannot itself be carried out in Zermelo set theory. The final step, forming S as an infinitary union, requires the axiom of replacement , which was added to Zermelo set theory in 1922 to form Zermelo–Fraenkel set theory , the set of axioms most widely accepted today. So while ordinary mathematics may be done in SN , discussion of SN goes beyond the "ordinary", into metamathematics .
But if high-powered set theory is brought in, the superstructure process above reveals itself to be merely the beginning of a transfinite recursion .
Go back to X = {}, the empty set, and introduce the (standard) notation V i for S i {}: V 0 = {}, V 1 = P {}, and so on as before. But what used to be called "superstructure" is now just the next item on the list: V ω , where ω is the first infinite ordinal number . This can be extended to arbitrary ordinal numbers : the transfinite recursion V i = ⋃ j < i P ( V j ) defines V i for any ordinal number i .
The union of all of the V i is the von Neumann universe V : V = ⋃ i V i , with the index i ranging over all ordinal numbers.
Every individual V i is a set, but their union V is a proper class . The axiom of foundation , which was added to ZF set theory at around the same time as the axiom of replacement, says that every set belongs to V .
In an interpretation of first-order logic , the universe (or domain of discourse) is the set of individuals (individual constants) over which the quantifiers range. A proposition such as ∀ x ( x ² ≠ 2) is ambiguous if no domain of discourse has been identified. In one interpretation, the domain of discourse could be the set of real numbers ; in another interpretation, it could be the set of natural numbers . If the domain of discourse is the set of real numbers, the proposition is false, with x = √ 2 as counterexample; if the domain is the set of naturals, the proposition is true, since 2 is not the square of any natural number.
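As a small hedged illustration (added here, not part of the original article), the two readings can be checked mechanically; the check over the naturals runs over a finite range only, so it is suggestive rather than a proof:

```python
import math

naturals = range(1000)                      # a finite slice of the naturals
print(all(x * x != 2 for x in naturals))    # True, and in fact true on all of N
x = math.sqrt(2)                            # a real counterexample
print(abs(x * x - 2) < 1e-9)                # True: x² = 2 up to rounding error
```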
There is another approach to universes which is historically connected with category theory . This is the idea of a Grothendieck universe . Roughly speaking, a Grothendieck universe is a set inside which all the usual operations of set theory can be performed. This version of a universe is defined to be any set U for which the following axioms hold: [ 1 ]
• If x ∈ U and y ∈ x , then y ∈ U (that is, U is a transitive set ).
• If x and y are elements of U , then so is { x , y }.
• If x ∈ U , then P ( x ) ∈ U .
• If { x α } α∈I is a family of elements of U , and I ∈ U , then the union ⋃ α∈I x α is an element of U .
The most common use of a Grothendieck universe U is to take U as a replacement for the category of all sets. One says that a set S is U - small if S ∈ U , and U - large otherwise. The category U - Set of all U -small sets has as objects all U -small sets and as morphisms all functions between these sets. Both the object set and the morphism set are sets, so it becomes possible to discuss the category of "all" sets without invoking proper classes. Then it becomes possible to define other categories in terms of this new category. For example, the category of all U -small categories is the category of all categories whose object set and whose morphism set are in U . Then the usual arguments of set theory are applicable to the category of all categories, and one does not have to worry about accidentally talking about proper classes. Because Grothendieck universes are extremely large, this suffices in almost all applications.
Often when working with Grothendieck universes, mathematicians assume the Axiom of Universes : "For any set x , there exists a universe U such that x ∈ U ." The point of this axiom is that any set one encounters is then U -small for some U , so any argument done in a general Grothendieck universe can be applied. [ 2 ] This axiom is closely related to the existence of strongly inaccessible cardinals .
In some type theories, especially in systems with dependent types , types themselves can be regarded as terms . There is a type called the universe (often denoted 𝒰 ) which has types as its elements. To avoid paradoxes such as Girard's paradox (an analogue of Russell's paradox for type theory), type theories are often equipped with a countably infinite hierarchy of such universes, with each universe being a term of the next one.
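For a concrete rendering (an illustration added here, not drawn from the article), the Lean 4 proof assistant exposes such a hierarchy directly, in an essentially Russell-style presentation:

```lean
-- Each universe is itself a type living in the next universe,
-- which blocks Girard's paradox.
#check Prop      -- Prop : Type
#check Type      -- Type : Type 1
#check Type 1    -- Type 1 : Type 2
universe u
#check Type u    -- Type u : Type (u + 1)
```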
There are at least two kinds of universes that one can consider in type theory: Russell-style universes (named after Bertrand Russell ) and Tarski-style universes (named after Alfred Tarski ). [ 3 ] [ 4 ] [ 5 ] A Russell-style universe is a type whose terms are types. [ 3 ] A Tarski-style universe is a type together with an interpretation operation allowing us to regard its terms as types. [ 3 ]
For example: [ 6 ]
The open-endedness of Martin-Löf type theory is particularly manifest in the introduction of so-called universes. Type universes encapsulate the informal notion of reflection whose role may be explained as follows. During the course of developing a particular formalization of type theory, the type theorist may look back over the rules for types, say C, which have been introduced hitherto and perform the step of recognizing that they are valid according to Martin-Löf's informal semantics of meaning explanation. This act of ‘introspection’ is an attempt to become aware of the conceptions which have governed our constructions in the past. It gives rise to a “reflection principle which roughly speaking says whatever we are used to doing with types can be done inside a universe” (Martin-Löf 1975, 83). On the formal level, this leads to an extension of the existing formalization of type theory in that the type forming capacities of C become enshrined in a type universe U C mirroring C.
|
https://en.wikipedia.org/wiki/Universe_(mathematics)
|
Universe Awareness ( UNAWE ) [ 1 ] is an international programme that aims to expose very young children in underprivileged environments to astronomy .
In 2004, Leiden University professor George K. Miley first began exploring the idea of setting up an astronomy programme to educate and inspire young children, especially those from underprivileged backgrounds. He had been awarded an Academy Professorship by the Royal Netherlands Academy of Arts and Sciences and decided to use part of the associated funding to explore the feasibility of setting up such a programme. With considerable support and encouragement from Claus Madsen at ESO, a successful workshop was held in Germany and it was agreed that the programme was worth pursuing. Universe Awareness (UNAWE) was born.
Shortly afterwards, Carolina Ödman was appointed as the first UNAWE International Project Manager. In 2006, thanks to a grant provided by the Netherlands Minister of Education Culture and Science, Ms. van der Hoeven, the UNAWE International Office was founded at Leiden Observatory, the Netherlands. With the help of Sarah Levin as Media Coordinator, Ödman built UNAWE into a thriving global project, with a network of about 400 experts from 40 countries.
UNAWE became a Cornerstone project of the successful UN-ratified IAU/UNESCO International Year of Astronomy in 2009 (IYA2009). During IYA2009, thousands of UNAWE activities were organised in more than 45 countries. For example, in Venezuela, 43 teacher training sessions reached more than 1500 teachers and well over 60 000 children.
2010 saw many changes for UNAWE. Firstly, Ödman left her coordinating role with UNAWE to join the African Institute for Mathematical Sciences Next Einstein Initiative, handing over the reins to the former Global Coordinator for IYA2009, Pedro Russo. Later that year, the European Union awarded a grant of 1.9 million euros to fund a 3-year project called European Universe Awareness (EU-UNAWE), which builds on the work of Universe Awareness (UNAWE). With this grant, EU-UNAWE is now being further developed in six selected countries: the Netherlands, Germany, Spain, Italy, the United Kingdom and South Africa.
EU-UNAWE is endorsed by the International Astronomical Union (IAU) and it is now an integral part of the IAU Strategic Plan 2010–2020, which is called Astronomy for the Developing World. This is an ambitious blueprint that aims to use astronomy to foster education and provide skills and competencies in science and technology throughout the world, particularly in developing countries.
|
https://en.wikipedia.org/wiki/Universe_Awareness
|
The University Voting Systems Competition , or VoComp is an annual competition in which teams of students design, implement, and demonstrate open-source election systems. [ 1 ] The systems are presented to a panel of security expert judges. The winners are awarded a cash prize provided by the sponsors. [ 2 ] The competition was started by a group of students and professors from UMBC and George Washington University to inspire better ideas for electronic voting technology and raise student awareness of the political process. [ 3 ]
The first competition took place on July 16–19 during the 2006/2007 academic year in Portland, Oregon . The event was sponsored by The National Science Foundation , Election Systems & Software , and Hewlett-Packard Company . The four teams that competed were:
The judging panel included MIT professor Ron Rivest , Microsoft security researcher Josh Benaloh and John Kelsey of NIST .
The Punchscan team was awarded the "Best-Election System" grand prize and $10,000 from ES&S after uncovering a security flaw in the random number generator in the source code of the runner-up team, Prêt à Voter . [ 5 ]
|
https://en.wikipedia.org/wiki/University_Voting_Systems_Competition
|
The Center for Supercomputing Research and Development (CSRD) at the University of Illinois (UIUC) was a research center funded from 1984 to 1993. It built the shared memory Cedar computer system, which included four hardware multiprocessor clusters, as well as parallel system and applications software. It was distinguished from the four earlier UIUC Illiac systems by starting with commercial shared memory subsystems that were based on an earlier paper published by the CSRD founders. Thus CSRD was able to avoid many of the hardware design issues that slowed the Illiac series work. Over its 9 years of major funding, plus follow-on work by many of its participants, CSRD pioneered many of the shared memory architectural and software technologies upon which all 21st century computation is based.
UIUC began computer research in the 1950s, initially for civil engineering problems, and eventually succeeded by cooperative activities among the Math, Physics, and Electrical Engineering Departments to build the Illiac computer series. This led to founding the Computer Science Department in 1965.
By the early 1980s, a time of world-wide HPC expansion arrived, including the race with the Japanese 5th generation system targeting innovative parallel applications in AI. HPC/supercomputing had emerged as a field, commercial supercomputers were in use by industry and labs (but little by academia), and academic architecture and compiler research were expanding. This led to the formation of the Lax committee [ 1 ] to study the academic needs of focused HPC research, and to provide commercial HPC systems for university research. When HPC practitioner Ken Wilson won the Nobel physics prize in 1982, he expanded his already strong advocacy of both, and soon several government agencies introduced HPC R&D programs.
As a result, the UIUC Center for Supercomputing R&D (CSRD) was formed in 1984 (with funding from DOE , NSF , and UIUC, as well as DoD Darpa and AFOSR ), under the leadership of three CS professors who had worked together since the Illiac 4 project – David Kuck (Director), Duncan Lawrie (Assoc. Dir. for SW) and Ahmed Sameh (Assoc. Dir. for applications), plus Ed Davidson (Assoc. Dir. for hardware/architecture) who joined from ECE. Many graduate students and post-docs were already contributing to constituent efforts; full-time academic professionals were hired, and other faculty cooperated. A total of up to 125 people were involved at the peak, over the nine years of full CSRD operation. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The UIUC administration responded to the computing and scientific times. CSRD was set up as a Graduate College unit, with space in Talbot Lab. UIUC President Stanley Ikenberry arranged to have Governor James Thompson directly endow CSRD with $1 million per year to guarantee personnel continuity. CSRD management helped write proposals that led to a gift from Arnold Beckman of a $50 million building, the establishment of NCSA, and a new CSRD building (now CSL).
The CSRD plan for success took a major departure from earlier Illiac machines by integrating four commercially built parallel machines using an innovative interconnection network and global shared memory. Cedar was based on designing and building a limited amount of innovative hardware, driven by SW that was built on top of emerging parallel applications and compiler technology. By breaking the tradition of building hardware first and then dealing with SW details later, this codesign approach led to the name Cedar instead of Illiac 5.
Earlier work by the CSRD founders had intensively studied a variety of new high-radix interconnection networks, [ 12 ] [ 13 ] built tools to measure the parallelism in sequential programs, designed and built a restructuring compiler (Parafrase) to transform sequential programs into parallel forms, as well as inventing parallel numerical algorithms. During the Parafrase development of the 1970s, several papers were published proposing ideas for expressing and automatically optimizing parallelism. [ 14 ] [ 15 ] [ 16 ] [ 17 ] These ideas influenced later compiler work at IBM, Rice U. and elsewhere. Parafrase had been donated to Fran Allen 's IBM PTRAN group in the late 1970s, Ken Kennedy had gone there on sabbatical and obtained a Parafrase copy, and Ron Cytron joined the IBM group from UIUC. Also, KAI was founded in 1979 by three Parafrase veterans (Kuck, Bruce Leasure, and Mike Wolfe), who wrote KAP, a new source-to-source restructurer.
The key Cedar idea was to exploit feasible-scale parallelism, by linking together a number of shared memory nodes through an interconnection network and memory hierarchy. Alliant Computer Systems had obtained venture capital funding (in Boston), based on an earlier architecture paper by the CSRD team, [ 18 ] [ 3 ] and was then shipping systems. The Cedar team was thus immediately able to focus on designing hardware to link 4 Alliant systems and add a global shared memory to the Alliant 8-processor shared memory nodes. In distinction to this, other academic teams of the era pursued massively parallel systems (CalTech, later in cooperation with Intel), fetch-and-add combining networks (NYU), innovative caching (Stanford), dataflow systems (MIT), etc.
In sharp contrast, two decades earlier, the Illiac 4 team required years of work with state of the art industry hardware technology leaders to get the system designed and built. The 1966 industrial hardware proposals for Illiac 4 hardware technology even included a GE Josephson Junction proposal which John Bardeen helped evaluate while he was developing the theory that led to his superconductivity Nobel prize. After contracting with Burroughs Corp to build and integrate an all-transistor hardware system, lengthy discussions ensued about the semiconductor memory design (and schedule slips) with subcontractor Texas Instruments ' Jack Kilby (IC inventor and later Nobelist), Morris Chang (later TSMC founder) and others. Earlier Illiac teams had pushed contemporary technologies, with similar implementation problems and delays.
Many attempts at parallel computing startups arose in the decades following Illiac 4, but nothing achieved success until adequate languages and software were developed in the 1970s and 80s. Parafrase veteran Steve Chen joined Cray and led development of the parallel/vector Cray X-MP, released in 1982. The 1990s were a turning point, with many 1980s startups failing, the end of bipolar technology cost-effectiveness, and the general end of academic computer building. By the 2000s, with Intel and others manufacturing massive numbers of systems, shared memory parallelism had become ubiquitous.
CSRD and the Cedar system played key roles in advancing shared memory system effectiveness. Many CSRD innovations of the late 80s (Cedar and beyond) are in common use today, including hierarchical shared memory hardware. Cedar also had parallel Fortran extensions, a vectorizing and parallelizing compiler, and a custom Unix-based OS, which were used to develop advanced parallel algorithms and applications. These will be detailed below.
One unusually productive aspect of the Cedar design effort was the ongoing cooperation among the R&D efforts of architects, compiler writers, and application developers. Another was the substantial legacy of ideas and people from the Parafrase project in the 1970s. [ 19 ] These enabled the team to focus on several design topics quickly:
The architecture group had a decade of parallel interconnect and memory experience [ 13 ] and had already chosen a high-radix shuffle network, so after selecting Alliant as the node manufacturer, custom interfacing hardware was designed in conjunction with Alliant engineers. The compiler team started by designing Cedar Fortran for this architecture, and by modifying the Kuck & Assoc. (KAI) source-to-source translator with Cedar-specific transformations for the Alliant compiler. Having nearly two decades of parallel algorithm experience (starting from Illiac 4), the applications group chose several applications to study, based on emerging parallel algorithms. This was later extended to include some widely used applications that shared the need for the chosen algorithms. [ 20 ] Designing, building and integrating the system was then a multi-year effort, including architecture, hardware, compiler, OS and algorithm work.
The hardware design led to 3 different types of 24” printed circuit boards, with the network board using CSRD-designed crossbar gate array chips. The boards were assembled into three custom racks in a machine room in Talbot Lab using water-cooled heat exchangers. Cedar’s key architectural innovations and features included:
By 1984, Fortran was still the standard language of HPC programming, but no standard existed for parallel programming. Building on the ideas of Parafrase and emerging commercial programming methods, Cedar Fortran [ 23 ] was designed and implemented for programming Cedar and to serve as the target of the Cedar autoparallelizer.
Cedar Fortran contained a two-level parallel loop hierarchy that reflected the Cedar architecture. Each iteration of outer parallel loops made use of one cluster and a second level parallel loop made use of one of the eight processors of a cluster for each of its iterations. Cedar Fortran also contained primitives for doacross synchronization and control of critical sections. Outer-level parallel loops were initiated, scheduled and synchronized using a runtime library while inner loops relied on Alliant hardware instructions to initiate the loops, schedule and synchronize their iterations.
Global variables and arrays were allocated in global memory while those declared local to iterations of outer parallel loops were allocated within clusters. There were no caches between clusters and main memory and therefore, programmers had to explicitly copy from global memory to local memory to attain faster memory accesses. These mechanisms worked well in all cases tested and gave programmers control over processor assignment and memory allocation. As discussed in the next section, numerous applications were implemented in Cedar Fortran.
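A schematic sketch may help fix the idea. The following Python uses nested thread pools to mimic the shape of the Cedar model described in the two preceding paragraphs: an outer parallel level of clusters, an inner level of processors per cluster, and an explicit copy from "global" to "cluster-local" storage (all names and sizes here are illustrative; Cedar Fortran itself was, of course, Fortran):

```python
from multiprocessing.dummy import Pool  # thread pools stand in for clusters/processors

N_CLUSTERS, N_PROCS = 4, 8              # Cedar-like: 4 clusters of 8 processors
GLOBAL_MEM = list(range(64))            # stands in for the global shared memory

def cluster_task(c):
    # Explicit copy from "global" to "cluster" memory, mimicking the
    # programmer-managed data movement described above.
    local = GLOBAL_MEM[c * 16:(c + 1) * 16]
    # Inner-level parallel loop: one iteration per processor of the cluster.
    with Pool(N_PROCS) as inner:
        return sum(inner.map(lambda v: v * v, local))

# Outer-level parallel loop: one iteration per cluster.
with Pool(N_CLUSTERS) as outer:
    print(sum(outer.map(cluster_task, range(N_CLUSTERS))))  # sum of squares 0..63
```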
Cedar compiler work started with the development of a Fortran parallelizer for Cedar, built by extending KAP, a vectorizer contributed by KAI to CSRD. Because it was built on a vectorizer, the first modified version of KAP developed at CSRD lacked some important capabilities necessary for an effective translation for multiprocessors, such as array privatization and parallelization of outer loops. Unlike Parafrase (written in PL/1), which ran only on IBM machines, KAP (written in C) ran on many machines (the KAI customer base). To identify the missing capabilities and develop the necessary translation algorithms, a collection of Fortran programs from the Perfect Benchmarks [ 24 ] was parallelized by hand. [ 25 ] Only techniques that were considered implementable were used in the manual parallelization study. The techniques were later used for a second generation parallelizer that proved effective on collections of programs not used in the manual parallelization study. [ 26 ]
Meanwhile the algorithms/applications group was able to use Cedar Fortran to implement and test algorithms and run them on the four quadrants independently before system integration. The group was focused on developing a library of parallel algorithms and their associated kernels that mainly govern the performance of large-scale computational science and engineering (CSE) applications. Some of the CSE applications that were considered during the Cedar project included: electronic circuit and device simulation, structural mechanics and dynamics, computational fluid dynamics, and the adjustment of very large geodetic networks.
A systematic plan for performance evaluation of many CSE applications on the Cedar platform was outlined in [ 20 ] and [ 27 ] . In almost all of the above-mentioned CSE applications, dense and sparse matrix computations proved to largely govern the overall performance of these applications on the Cedar architecture. Parallel algorithms that realize high performance on the Cedar architecture were developed for:
In preparing to evaluate candidate hardware building blocks and the final Cedar system, CSRD managers began to assemble a collection of test algorithms; this was described in [ 20 ] and later evolved into the Perfect Club. [ 24 ] Before that, there were only kernels and focused algorithm approaches (Linpack, NAS benchmarks). In the following decade the idea became popular, especially as many manufacturers introduced high performance workstations, which buyers wanted to compare; SPEC became the workhorse of the field and was followed by many others. SPEC was incorporated in 1988, released its first benchmark in 1992 (Spec92), and formed a high performance group in 1994. (David Kuck and George Cybenko were early advisors, Kuck served on the BoD in the early 90s, and Rudolf Eigenmann drove the Spec HPG effort, leading to the release of a first high performance benchmark in 1996.)
In a joint effort between the CSRD groups, the Parafrase memory hierarchy loop blocking work of Abu Sufah [ 55 ] was exploited for the Cedar cache hierarchy. Several papers were published demonstrating performance enhancement for basic linear algebra algorithms on the Alliant quadrants and Cedar. A sabbatical spent at CSRD at the time by Jack Dongarra and Danny Sorensen led to this work being transferred into BLAS 3 (extending the simpler BLAS 1 and BLAS 2), a standard that is now widely used.
CSRD had many alumni who went on to important careers in computing. Some left early, others came late. Among the leaders was UIUC faculty member Dan Gajski, who was affiliated with the CSRD directors in formulating plans and proposals, but left UIUC just before CSRD actually commenced. Another was Mike Farmwald, who joined as an Associate Director for hardware/architecture when Ed Davidson left. Immediately after leaving, Farmwald co-founded Rambus, which continues as a memory design leader. David Padua became Assoc. Director for SW after Duncan Lawrie left, and continued many CSRD projects as a UIUC CS professor. Over time, CSRD researchers became CS and ECE department heads at 5 Big Ten universities.
By 1990, the Cedar system had been completed. The CSRD team was able to scale applications from single clusters to the full 4-cluster system and begin performance measurements. Despite these innovation successes, there was no follow up machine construction project. After the end of the Cedar project, the Stanford DASH/FLASH projects, and the MIT Alewife project around 1995, the era of large, multi-faculty academic machine designs had come to an end. Cedar was a preeminent part of the last wave of such projects. ISCA’s 25th Anniversary Proceedings [ 56 ] contain several retrospective papers describing some of the machines in that last wave, including one on Cedar. [ 57 ]
About 50 remaining CSRD students, academic professionals and faculty became a research group within the Coordinated Science Laboratory by 1994. For several years, they continued the work initiated in the 1980s, including experimental evaluations of Cedar [ 58 ] [ 59 ] and continuation of several lines of CSRD compiler research. [ 60 ] [ 26 ]
Beyond the core CSRD work of designing, building and using Cedar, many related topics arose. Some were directly motivated by the Cedar project. Many of these had value well beyond Cedar, were pursued well beyond the official end of CSRD, and were taken up by many academic and industrial groups. Next, the most important such topics are discussed.
In the mid 1980s, C. Polychronopoulos developed one of the most influential strategies for the scheduling of parallel loop iterations . The strategy, called Guided Self-Scheduling, [ 61 ] schedules the execution of a group of loop iterations each time a processor becomes available. The number of iterations in these groups decreases as the execution of the loop progresses in such a way that the load imbalance is reduced relative to the static or dynamic scheduling techniques used at the time. Guided Self-Scheduling influenced research and practice with numerous citations of the paper introducing the technique and the adoption of the strategy by OpenMP as one of its standard loop scheduling techniques.
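The schedule is easy to state concretely: each time a processor becomes idle it takes ceil(R/P) of the R remaining iterations, where P is the processor count. A minimal sketch (illustrative, not CSRD code):

```python
import math

def guided_self_schedule(n_iters, n_procs):
    """Chunk sizes under Guided Self-Scheduling: an idle processor grabs
    ceil(remaining / n_procs) iterations, so chunks shrink as the loop drains."""
    remaining, chunks = n_iters, []
    while remaining > 0:
        chunk = math.ceil(remaining / n_procs)
        chunks.append(chunk)
        remaining -= chunk
    return chunks

# Early chunks are large, later ones small, smoothing load imbalance near the end:
print(guided_self_schedule(100, 4))  # [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
```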
In the mid to late 1980s, the so-called “Parallel Distributed Processing” (PDP) effort [ 62 ] recast earlier generations of neural computation by demonstrating effective machine learning algorithms and neural architectures. The computing paradigm, far removed from traditional von Neumann computer architecture, demonstrated that PDP approaches and algorithms could address a variety of application problems in novel ways. However, it was not known what kinds of problems could be solved using such massively parallel neural network architectures. In 1989, CSRD researcher George Cybenko demonstrated that even the simplest nontrivial neural network had the representational power to approximate a wide variety of functions, including categorical classifiers and continuous real-valued functions. [ 63 ] That work was seminal in that it showed that, in principle, neural machines based on biological nervous systems could effectively emulate any input-output relationship that was computable by traditional machines. As a result, Cybenko’s result has often been called the “Universal Approximation Theorem” in the literature. The proof of that result relied on advanced functional analysis techniques and was not constructive. Even so, it gave rigorous justification for generations of neural network architectures, including deep learning [ 64 ] and large language models [ 65 ] in wide use in the 2020s.
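Cybenko's proof is non-constructive, but the statement is easy to illustrate numerically. The hedged sketch below fits a one-hidden-layer sigmoidal network of the form sum_i a_i * sigmoid(w_i * x + b_i) to sin(x) by drawing random hidden weights and solving a least-squares problem for the output weights (an illustration only, not the method of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)
target = np.sin(x)

n_hidden = 50
w = rng.normal(scale=3.0, size=n_hidden)             # random hidden weights
b = rng.normal(scale=3.0, size=n_hidden)             # random hidden biases
hidden = 1.0 / (1.0 + np.exp(-(np.outer(x, w) + b))) # sigmoid feature matrix
a, *_ = np.linalg.lstsq(hidden, target, rcond=None)  # fit output weights

print(np.max(np.abs(hidden @ a - target)))           # max error; typically tiny
```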
While Cybenko’s Universal Approximation Theorem addressed the capabilities of neural-based computing machines, it was silent on the ability of such architectures to effectively learn their parameter values from data. Cybenko and CSRD colleagues Sirpa Saarinen and Randall Bramley subsequently studied the numerical properties of neural networks, which are typically trained using stochastic gradient descent and its variants. They observed that neurons saturate when network parameters are very negative or very positive, leading to arbitrarily small gradients which in turn result in optimization problems that are numerically poorly conditioned. [ 66 ] This property has been called the “vanishing gradient” problem in machine learning. [ 67 ]
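The numerical observation is elementary to reproduce. For the sigmoid σ(z), the derivative σ(z)(1 − σ(z)) collapses toward zero once |z| is large, which is exactly the saturation effect described above (a minimal sketch, not the authors' analysis):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0, 20.0]:
    s = sigmoid(z)
    print(f"z = {z:5.1f}   sigmoid'(z) = {s * (1 - s):.2e}")
# The gradient falls from 2.5e-01 at z = 0 to about 2e-09 at z = 20,
# so parameters driven into saturation receive almost no learning signal.
```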
The Basic Linear Algebra Subroutines (BLAS) are among the most important mathematical software achievements. They are essential components of LINPACK and versions are used by every major vendor of computer hardware. The BLAS library was developed in three different phases: BLAS 1 provided optimized implementations for basic vector operations; BLAS 2 contributed matrix-vector capabilities to the library; BLAS 3 involves optimizations for matrix-matrix operations. The multi-cluster shared memory architecture of Cedar inspired a great deal of library optimization research involving cache locality and data reuse for matrix operations of this type. The official BLAS 3 standard was published in 1990 as [ 68 ] and was inspired, in part, by [ 34 ] . Additional CSRD research on data management for complex memory hierarchies followed, and some of the more theoretical work was published as [ 69 ] and [ 70 ] . The performance impact of these algorithms when running on Cedar is reported in [ 71 ] .
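The cache-locality idea behind BLAS 3 is loop blocking: operate on tiles small enough to stay cache-resident so each tile is reused many times. A toy Python/NumPy sketch of a blocked matrix multiply (illustrative only; real BLAS 3 implementations are far more elaborate):

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Multiply square matrices tile by tile; each bs-by-bs tile of A and B
    is reused across a whole row/column of result tiles while 'hot' in cache."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(blocked_matmul(A, B), A @ B))  # True: same result, tiled order
```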
Beyond CSRD, the many parallel startup companies of the 1980s created a profusion of ad hoc parallel programming styles, based on various process and thread models. Subsequently, many parallel language and compiler ideas were proposed, including compilers for Cray Fortran, KAI-based source-to-source optimizers, etc. Some of these tried to create product differentiation advantages, but largely went contrary to user desires for performance portability. By the late 1980s, KAI started a standardization effort that led to the ANSI X3H5 draft standard, [ 72 ] which was widely adopted.
In the 1990s, after CSRD, these ideas influenced KAI in auto-parallelization, and soon another round of standardization was begun. By 1996 KAI had SGI as a customer and they joined the effort to form the OpenMP consortium – the OpenMP Architecture Review Board incorporated in 1997 with a growing collection of manufacturers. KAI also developed parallel performance and thread checking tools, which Intel bought with its purchase of KAI in 2000. Many KAI staff members remain, and the Intel development continues, directly inherited from Parafrase and CSRD. Today, OpenMP is the industry-standard shared memory programming API for C/C++ and Fortran.
For his PhD thesis, Rauchwerger introduced [ 73 ] an important paradigm shift in the analysis of program loops for parallelization. Instead of first validating the transformation into parallel form through a priori analysis, either statically by the compiler or dynamically at runtime, the new paradigm speculatively parallelized the loop and then checked its validity. This technique, named “ speculative parallelization ", executes a loop in parallel and tests subsequently whether any data dependences could have occurred. If this validation test fails, then the loop is re-executed in a safe manner, starting from a safe state, e.g., sequentially from a previous checkpoint. This approach became known as the LRPD Test (Lazy Reduction and Privatization Doall Test). Briefly, the LRPD test instruments the shared memory references of the loop in some “shadow" structures and then, after loop execution, analyzes them for dependent patterns. This pioneering contribution has been quite influential and has been applied throughout the years by many researchers from CSRD or elsewhere.
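In outline (a deliberately simplified sketch; the real LRPD test also handles privatization and reduction recognition), the mechanism marks shadow flags during the speculative run and validates afterwards:

```python
def speculative_loop(n, body, a):
    """Run iterations 'speculatively', then test shadow flags for dependences."""
    read = [False] * len(a)
    written = [False] * len(a)
    checkpoint = a[:]                      # safe state for possible rollback
    for i in range(n):                     # imagine these running in parallel
        body(i, a, read, written)
    # Simplified validity test: an element both read and written during the
    # run may carry a cross-iteration dependence, so the result is unsafe.
    if any(r and w for r, w in zip(read, written)):
        a[:] = checkpoint                  # roll back ...
        for i in range(n):                 # ... and re-execute sequentially
            body(i, a, [False] * len(a), [False] * len(a))
        return "rolled back, re-executed sequentially"
    return "speculative parallel run validated"

def body(i, a, read, written):
    written[i] = True                      # iteration i writes only a[i]
    a[i] = i * i

print(speculative_loop(8, body, [0] * 8))  # no overlap -> run validated
```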
In 1987, Allen pioneered the use of memory traces for the detection of race conditions in parallel programs. [ 74 ] Race conditions are defects of parallel programs that manifest in different outcomes for different executions of the same program on the same input data. Because of their dynamic nature, races are difficult to detect, and the techniques introduced by Allen and expanded in [ 75 ] are the best strategy known to cope with this problem. The strategy has been highly influential, with numerous researchers working on the topic during the last decades. The technique has been incorporated into numerous experimental and commercial tools, including Intel's Inspector.
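A stripped-down version of trace-based detection (synchronization ordering, which the real techniques track, is omitted here for brevity) flags any address touched by two threads where at least one access is a write:

```python
from collections import defaultdict

def find_races(trace):
    """trace: iterable of (thread_id, address, is_write) access records."""
    by_addr = defaultdict(list)
    for thread, addr, is_write in trace:
        by_addr[addr].append((thread, is_write))
    races = set()
    for addr, accesses in by_addr.items():
        for i, (t1, w1) in enumerate(accesses):
            for t2, w2 in accesses[i + 1:]:
                if t1 != t2 and (w1 or w2):   # different threads, one writes
                    races.add(addr)
    return sorted(races)

trace = [(0, "x", True), (1, "x", False),    # threads 0 and 1 race on x
         (0, "y", True), (0, "y", False)]    # y is touched by one thread only
print(find_races(trace))                     # ['x']
```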
One of CSRD’s thrusts was to develop metrics able to evaluate both hardware and software systems using real applications. To this end, the Perfect Benchmarks [ 24 ] provided a set of computational applications, collected from various science domains, which were used to evaluate and drive the study of the Cedar system and its compilers. In 1994, members of CSRD and the Standard Performance Evaluation Corporation (SPEC) expanded on this thrust, forming the SPEC High-Performance Group. This group released a first real-application SPEC benchmark suite, SPEC HPC 96. SPEC has been continuing the development of benchmarks for high-performance computing to this date, a recent suite being SPEChpc 2021. With CSRD’s influence, the SPEC High-Performance Group also prompted a close collaboration of industrial and academic participants. A joint workshop in 2001 on Real-Application Benchmarking [ 76 ] founded a workshop series, eventually leading to the formation of the SPEC Research Group , which in turn co-initiated the now annual ACM/SPEC International Conference on Performance Engineering.
Funded by Darpa, the HPC++ project [ 77 ] [ 78 ] was led by Dennis Gannon and Allen Malony with postdocs Francois Bodin, from William Jalby’s group in Rennes, and Peter Beckman, now at Argonne National Lab. This work grew from a collaboration between Malony, Gannon and Jalby that began at CSRD. HPC++ is based on extensions to the C++ standard template library to support a number of parallel programming scenarios, including single-program-multiple-data (SPMD) and Bulk Synchronous Parallel, on both shared memory and distributed memory parallel systems. The most significant outcome of this collaboration was the development of the TAU Parallel Performance System. Originally developed for HPC++, it has become a standard for measuring, visualizing and optimizing parallel programs for nearly all programming languages and is available for all parallel computing platforms. It supports various programming interfaces such as OpenCL, DPC++/SYCL, OpenACC, and OpenMP. It can also gather performance information of GPU computations from different vendors such as Intel and NVIDIA. TAU has been used for many HPC applications and projects.
The Cedar project has strongly influenced the research activities of many of CSRD’s faculty members long after the end of the project. After the termination of the Cedar project, the first task undertaken by three members of Cedar’s Algorithm and Application group (A. Sameh, E. Gallopoulos, and B. Philippe) was documenting the parallel algorithms developed, and published in a variety of journals and conference proceedings, during the lifetime of the project. The result was a graduate textbook: “Parallelism in Matrix Computations” by E. Gallopoulos, B. Philippe, and A. Sameh, published by Springer, 2016. [ 79 ] The parallel algorithm development experience gained by one of the members of the Cedar project (A. Sameh) proved to be of great value in his research activities after leaving UIUC. He used many of these parallel algorithms in joint research projects:
• fluid-particle interaction with the late Daniel Joseph (a National Academy of Science faculty member in Aerospace Engineering at the University of Minnesota, Twin Cities),
• fluid-structure interaction with Tayfun Tezduyar (Mechanical Engineering at Rice University),
• computational nanoelectronics with Mark Lundstrom (Electrical & Computer Engineering at Purdue University).
These activities were followed, in 2020, by a Birkhauser volume (edited by A. Grama and A. Sameh) containing two parts: part I consisting of some recent advances in high performance algorithms, and part II consisting of some selected challenging computational science and engineering applications. [ 80 ]
Cache coherence is a key problem in building shared memory multiprocessors. It was traditionally implemented in hardware via coherence protocols. However, the advent of systems like Cedar allowed one to consider a compiler-assisted implementation of cache coherence for parallel programs, [ 81 ] with minimal and completely local hardware support. Where a hardware coherence protocol like MESI relies on remote invalidation of cache lines, a compiler-assisted protocol performs a local self-invalidation as directed by a compiler. CSRD researchers developed several different approaches to compiler-assisted coherence, [ 82 ] [ 83 ] [ 84 ] including a scheme with directory assistance. [ 85 ] All these schemes performed a post-invalidation at the end of a parallel region. This work has influenced research, with numerous citations across decades until today. [ 86 ] [ 87 ]
Early CSRD work on program optimization for classical parallel computers also spurred the development of languages and compilers for more specialized accelerators, such as Graphics Processing Units (GPUs). For example, in the early 2000s, CSRD researcher Rudolf Eigenmann developed translation methods for compilers that enabled programs written in the standard OpenMP programming model to be executed efficiently on GPUs. [ 88 ] [ 89 ] [ 90 ] Until then, GPUs had been programmed primarily in the specialized CUDA language. The new methods showed that high-level programming of GPUs was not only feasible for classical computational applications, but also for certain types of problems that exhibited irregular program patterns. This work incentivized further initiatives toward high-level programming models for GPUs and accelerators in general, such as OpenACC and OpenMP for accelerators. In turn, these initiatives contributed to the use of GPUs for a wide range of computational problems, including neural networks for deep learning, whose mathematical foundation was studied by Cybenko as discussed above.
|
https://en.wikipedia.org/wiki/University_of_Illinois_Center_for_Supercomputing_Research_and_Development
|
The School of the Environment at the University of Toronto is a trans-disciplinary academic unit that acts as a hub for the study of the environment , sustainability and climate change , offering undergraduate and graduate programs, along with joint programs with many disciplinary departments across the University. According to Maclean's Magazine, the School ranks second for environmental science programs in Canada. [ 1 ] The School's research focusses on knowledge mobilization on a range of environmental issues, addressing questions of how to integrate scientific knowledge with local, community-based, and Indigenous knowledge to address global environmental crises such as Climate Change . [ 2 ] The School is also home to many activist student groups advocating for environmental action. [ 3 ]
The current School of the Environment traces its history to three institutes at the University of Toronto. In 1959, the Great Lakes Institute was founded by Prof George Burwash Langford to study the impacts of pollution on the Great Lakes , [ 4 ] and the geologist Roger E. Deane served as its first director. [ 5 ] In 1971, under the directorship of physicist Don Misener , this became the Institute for Environmental Studies, [ 6 ] and offered the University's first graduate programs in environmental studies. [ 7 ] For many years, the Institute operated a field station at Baie Du Doré on the shores of Lake Huron , and a research ship, the HMCS Porte Dauphine . Independently, Innis College established an undergraduate Environmental Studies program in 1978, with courses taught by environmental activists such as NDP Leader Jack Layton and Ontario's first Environmental Commissioner, Eva Ligeti . A third unit, the undergraduate Division of the Environment, was established in 1991 by the Faculty of Arts and Science to administer degree programs in environmental studies.
In 2005, all three units were merged to form the Centre for Environment (CfE), under the directorship of the environmental philosopher Prof Ingrid Stefanovic. The Centre was then renamed as the School of the Environment in 2012. [ 8 ] The inaugural director of the School was the atmospheric physicist, Professor Kim Strong . [ 9 ]
The School offers major and minor programs in both Environmental Studies and Environmental Science, [ 10 ] as well as a range of interdisciplinary minor programs to be taken in conjunction with other majors across the Faculty of Arts and Science . It also offers a Certificate in Sustainability. [ 11 ]
The School offers two collaborative specialization programs, in Environmental Studies and in Environment and Health. These can be taken by graduate students enrolled in any program at the University of Toronto. In 2021, the School launched a 12-month thesis-based Master of Environment and Sustainability. [ 12 ]
The School's 18 faculty members mainly hold joint appointments with a variety of discipline-based departments at the University of Toronto, spanning the physical sciences, social sciences, and humanities. [ 13 ] The School also has 141 graduate faculty members who hold appointments in other departments, and contribute to teaching in the School's graduate programs.
Notable faculty include:
Research at the School spans a broad range of areas, including using the University campus itself as a Living Lab for sustainability, [ 14 ] policies needed to tackle climate change, [ 15 ] the study of persistent toxins in the environment and their impact on human health, [ 16 ] and the role of cycling in urban transportation policy. [ 17 ]
|
https://en.wikipedia.org/wiki/University_of_Toronto_School_of_the_Environment
|
In mathematics , an unordered pair or pair set is a set of the form { a , b }, i.e. a set having two elements a and b with no particular relation between them , where { a , b } = { b , a }. In contrast, an ordered pair ( a , b ) has a as its first element and b as its second element, which means ( a , b ) ≠ ( b , a ).
While the two elements of an ordered pair ( a , b ) need not be distinct, modern authors only call { a , b } an unordered pair if a ≠ b . [ 1 ] [ 2 ] [ 3 ] [ 4 ] But for a few authors a singleton is also considered an unordered pair, although today, most would say that { a , a } is a multiset . It is typical to use the term unordered pair even in the situation where the elements a and b could be equal, as long as this equality has not yet been established.
A set with precisely two elements is also called a 2-set or (rarely) a binary set .
An unordered pair is a finite set ; its cardinality (number of elements) is 2 or (if the two elements are not distinct) 1.
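A quick computational aside (not part of the original article): Python's frozenset and tuple types mirror the unordered/ordered distinction directly:

```python
a, b = 1, 2
print(frozenset({a, b}) == frozenset({b, a}))  # True: {a, b} = {b, a}
print((a, b) == (b, a))                        # False: ordered pairs differ
print(len(frozenset({a, a})))                  # 1: {a, a} collapses to a singleton
```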
In axiomatic set theory , the existence of unordered pairs is required by an axiom, the axiom of pairing .
More generally, an unordered n -tuple is a set of the form { a 1 , a 2 , ..., a n }. [ 5 ] [ 6 ] [ 7 ]
|
https://en.wikipedia.org/wiki/Unordered_pair
|
In chemistry , an unpaired electron is an electron that occupies an orbital of an atom singly, rather than as part of an electron pair . Each atomic orbital of an atom (specified by the three quantum numbers n, l and m) has a capacity to contain two electrons ( electron pair ) with opposite spins . As the formation of electron pairs is often energetically favourable, either in the form of a chemical bond or as a lone pair , unpaired electrons are relatively uncommon in chemistry, because an entity that carries an unpaired electron is usually rather reactive. In organic chemistry they typically only occur briefly during a reaction on an entity called a radical ; however, they play an important role in explaining reaction pathways.
Radicals are uncommon in s- and p-block chemistry, since the unpaired electron occupies a valence p orbital or an sp, sp 2 or sp 3 hybrid orbital . These orbitals are strongly directional and therefore overlap to form strong covalent bonds, favouring dimerisation of radicals. Radicals can be stable if dimerisation would result in a weak bond or the unpaired electrons are stabilised by delocalisation . In contrast, radicals in d- and f-block chemistry are very common. The less directional, more diffuse d and f orbitals, in which unpaired electrons reside, overlap less effectively, form weaker bonds and thus dimerisation is generally disfavoured. These d and f orbitals also have comparatively smaller radial extension, disfavouring overlap to form dimers. [ 1 ]
Relatively more stable entities with unpaired electrons do exist, e.g. the nitric oxide molecule has one. According to Hund's rule , the spins of unpaired electrons are aligned parallel and this gives these molecules paramagnetic properties.
The most stable examples of unpaired electrons are found on the atoms and ions of lanthanides and actinides . The incomplete f-shell of these entities does not interact very strongly with the environment they are in and this prevents them from being paired. The ions with the largest number of unpaired electrons are Gd 3+ and Cm 3+ with seven unpaired electrons.
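The count follows from Hund's rule by simple arithmetic: n electrons in a subshell of m orbitals first occupy orbitals singly, so the number unpaired is n when n ≤ m and 2m − n afterwards. A small illustrative sketch (an addition here, not from the article):

```python
def unpaired_electrons(n_electrons, n_orbitals):
    """Hund's rule: singly occupy all orbitals before pairing begins."""
    if n_electrons <= n_orbitals:
        return n_electrons
    return 2 * n_orbitals - n_electrons

print(unpaired_electrons(7, 7))   # Gd3+ / Cm3+ (f^7): 7 unpaired electrons
print(unpaired_electrons(10, 7))  # an f^10 ion: 4 unpaired electrons
```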
An unpaired electron has a magnetic dipole moment , while an electron pair has no dipole moment because the two electrons have opposite spins so their magnetic dipole fields are in opposite directions and cancel. Thus an atom with unpaired electrons acts as a magnetic dipole and interacts with a magnetic field . Only elements with unpaired electrons exhibit paramagnetism , ferromagnetism , and antiferromagnetism .
|
https://en.wikipedia.org/wiki/Unpaired_electron
|
In computing , an unparser is a system that constructs a set of characters or image components from a given parse tree . [ 1 ] [ 2 ]
An unparser is in effect the reverse of a traditional parser that takes a string of characters and produces a parse tree. Unparsing generally involves the application of a specific set of rules to the parse tree as a " tree walk " takes place. [ 1 ]
Given that the tree may involve both textual and graphic elements, the unparser may have two separate modules, each of which handles the relevant components. [ 2 ] In such cases the "master unparser" looks up the "master unparse table" to determine if a given nested structure should be handled by one module, or the other. [ 2 ]
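Python's standard library ships a small textual unparser, which makes the idea concrete (the graphical case described above has no analogue here):

```python
import ast  # ast.unparse requires Python 3.9+

tree = ast.parse("x = 1 + 2\nprint(x)")   # string of characters -> parse tree
print(ast.unparse(tree))                  # parse tree -> regenerated source text
```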
|
https://en.wikipedia.org/wiki/Unparser
|
In theoretical physics , unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant .
Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics" [ 1 ] and "Another Odd Thing About Unparticle Physics". [ 2 ] His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics , astrophysics , cosmology , CP violation , lepton flavour violation, muon decay , neutrino oscillations , and supersymmetry .
All particles exist in states that may be characterized by a certain energy , momentum and mass . In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons , for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons , can exist with their properties scaled equally. This immunity to scaling is called "scale invariance".
The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or equivalently energy). This stuff is unlike particles, and described as unparticle. The unparticle stuff is equivalent to particles with a continuous spectrum of mass. [ 3 ]
Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. When the Large Hadron Collider (LHC) team announced it would begin probing a higher energy frontier in 2009, some theoretical physicists began to consider the properties of unparticle stuff and how it may appear in LHC experiments. One of the great hopes for the LHC is that it might come up with discoveries that will help us update or replace our best description of the particles that make up matter and the forces that glue them together.
Unparticles would have properties in common with neutrinos , which have almost zero mass and are therefore nearly scale invariant . Neutrinos barely interact with matter – most of the time physicists can infer their presence only by calculating the "missing" energy and momentum after an interaction. By looking at the same interaction many times, a probability distribution is built up that tells more specifically how many and what sort of neutrinos are involved. They couple very weakly to ordinary matter at low energies, and the effect of the coupling increases as the energy increases.
A similar technique could be used to search for evidence of unparticles. According to scale invariance, a distribution containing unparticles would become apparent because it would resemble a distribution for a fractional number of massless particles.
This scale invariant sector would interact very weakly with the rest of the Standard Model, making it possible to observe evidence for unparticle stuff, if it exists. The unparticle theory is a high-energy theory that contains both Standard Model fields and Banks–Zaks fields , which have scale-invariant behavior at an infrared point. The two fields can interact through the interactions of ordinary particles if the energy of the interaction is sufficiently high.
These particle interactions would appear to have "missing" energy and momentum that would not be detected by the experimental apparatus. Certain distinct distributions of missing energy would signify the production of unparticle stuff. If such signatures are not observed, bounds on the model can be set and refined.
Unparticle physics has been proposed as an explanation for anomalies in superconducting cuprate materials, [ 4 ] where the charge measured by ARPES appears to exceed predictions from Luttinger's theorem for the quantity of electrons. [ 5 ]
|
https://en.wikipedia.org/wiki/Unparticle_physics
|
Unresolved complex mixture ( UCM ), or hump , is a feature frequently observed in gas chromatographic (GC) data of crude oils and extracts from organisms exposed to oil. [ 1 ]
The reason for the UCM hump appearance is that GC cannot resolve and identify a significant part of the hydrocarbons in crude oils. The resolved components appear as peaks while the UCM appears as a large background/platform. In non- biodegraded oils the UCM may comprise less than 50% of the total area of the chromatogram, while in biodegraded oils this figure can rise to over 90%. UCMs are also observed in certain refined fractions such as lubricating oils ( [ 1 ] and references therein).
In attempting to determine "the processes that regulate the fate of petroleum following release to the environment,” geochemist Christopher M. Reddy of Woods Hole Oceanographic Institution invented an application of comprehensive two-dimensional gas chromatography (GCxGC) that resolves UCMs [ 2 ] and that he patented. [ 3 ] As it degrades in a marine environment, oil undergoes complex transformations, producing residues composed of extremely complex organic mixtures that accumulate in such “protective environments” [ 2 ] as fiddler crabs [ 4 ] and marsh grass. [ 5 ] These residues form the majority of the unresolved complex mixture (UCM) resulting from the breakdown of crude oils that GC had previously been unable to resolve but which Reddy’s novel GCxGC application has made accessible, enabling determination of “the underlying processes controlling petroleum fate” as it degrades in a marine environment. [ 2 ]
The technique Reddy invented is now widely applied in the characterization of petroleum in environmental samples as well as in analyses of other complex organic mixtures, and, because of it, GCxGC has transitioned from “a niche qualitative analysis tool to a robust quantitative technique.” [ 2 ] For this innovative work, Reddy was awarded the Clair C. Patterson Award in 2014 by the Geochemical Society for "an innovative breakthrough in environmental geochemistry of fundamental significance within the last decade, particularly in service to society. To be viewed as innovative, the work must show a high degree of creativity and/or be a fundamental departure from usual practice while contributing significantly to understanding in environmental geochemistry." [ 6 ]
Reddy's first investigation into oil spills employing the new method was at the West Falmouth Harbor of Massachusetts, where the barge Florida had run aground in 1969, spilling 175,000 gallons of heating oil. Reddy and his team studied the area from 1999 to 2008, identifying chemical and biological effects that persisted even after 30 years. [ 7 ] According to geologist and biogeoscientist Timothy Eglinton , at the time Reddy received the Patterson Award, the "string of papers" he and his team members had published [ 4 ] [ 5 ] [ 8 ] "on this oil spill ... collectively represent[ed] amongst the most comprehensive, sustained and multifaceted investigations of the environmental fate of a single petroleum spill" published to date, thanks to Reddy's use of the novel GCxGC method he had pioneered. [ 2 ]
One reason why it is important to study the nature of UCMs is that some have been shown to contain toxic components, [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] but only a small range of known petrogenic toxicants, such as the USEPA list of 16 polycyclic aromatic hydrocarbons (PAHs), tend to be routinely monitored in the environment.
Analysis of the hydrocarbon fraction of crude oils by GC reveals a complex mixture containing many thousands of individual components. [ 18 ] Components that are resolved by GC have been extensively studied, e.g. [ 19 ] . However, despite the application of many analytical techniques the remaining components have, until very recently, proved difficult to separate due to the large numbers of co-eluting compounds. Gas chromatograms of mature oils have prominent n-alkane peaks which distract attention from the underlying unresolved complex mixture (UCM) of hydrocarbons often referred to as the ‘hump’. Processes such as weathering and biodegradation result in a relative enrichment of the UCM component by removal of resolved components and the creation of new compounds. [ 20 ] It has been shown that both resolved and unresolved components of oils are subject to concurrent biodegradation, [ 1 ] i.e. it is not a sequential process, but due to the recalcitrant nature of some components, the rates of biodegradation of individual compounds vary greatly. The UCM fraction often represents the major component of hydrocarbons within hydrocarbon-polluted sediments [ 12 ] (see references therein) and biota, e.g. [ 9 ] [ 10 ] [ 21 ] [ 22 ] A number of studies have now demonstrated that aqueous exposure to components within the UCM can affect the health of marine organisms, [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] including possible hormonal disruption, [ 16 ] and high concentrations of environmental UCMs have been strongly implicated in impaired health in wild populations. [ 11 ] [ 14 ] [ 23 ] [ 24 ]
Environmental UCMs result from highly degraded petroleum hydrocarbons and once formed they can stay largely unchanged in sediments for many years. For example, in 1969 a diesel oil spill contaminated saltmarsh sediment within Wild Harbor River , US; by 1973 only a baseline hump was observed, which remained largely unchanged within the anaerobic sediment for the next 30 years. [ 25 ] In a study of the potential for UCM-dominated oil to be further degraded, it was concluded that even using bacteria specifically adapted for complex UCM hydrocarbons in conjunction with nutrient enrichment, biodegradation rates would still be relatively slow. [ 26 ] Bacterial degradation of hydrocarbons is complex and will depend on environmental conditions (e.g. aerobic or anaerobic, temperature, nutrient availability, available species of bacteria etc.).
A relatively recent analytical tool that has been used for the separation of UCMs is comprehensive two-dimensional GC ( GC×GC ). This powerful technique, introduced by Liu and Phillips, [ 27 ] combines two GC columns with different separation mechanisms: typically a primary column that separates compounds based on volatility, coupled to a second short column that separates by polarity. The two columns are connected by a modulator, a device that traps, focuses and re-injects the peaks that elute from the first column into the second column. Each peak eluting from the first column (which may in fact be a number of co-eluting peaks) is further separated on the second column. The second separation is rapid, allowing the introduction of subsequent fractions from the first column without mutual interference. Dallüge et al. [ 28 ] reviewed the principles, advantages and main characteristics of this technique. One of the main advantages is the very high separation power, making the technique ideal for unravelling the composition of complex mixtures. Another important feature of GC×GC is that chemically related compounds show up as ordered structures within the chromatograms, i.e. isomers appear as distinct groups in the chromatogram as a result of their similar interaction with the second-dimension column phase. [ 29 ] The use of GC×GC for the characterization of complex petrochemical mixtures has been extensively reviewed. [ 30 ] Most research into petrochemical hydrocarbons using GC×GC has utilised flame ionisation detection (FID), but mass spectrometry (MS) is necessary to obtain the structural information needed to identify unknown compounds. Currently, only time-of-flight MS (ToF-MS) can deliver the high acquisition rates required to keep up with the fast second-dimension separations of GC×GC.
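To make the data structure concrete, the following is a minimal illustrative sketch, in Python, of how the modulation step can be viewed computationally: the detector's one-dimensional signal is cut into slices of one modulation period each and stacked into a two-dimensional array, with first-dimension retention time along one axis and second-dimension elution time along the other. The sampling rate, modulation period, and run length below are hypothetical, and no instrument vendor's API is implied.

```python
import numpy as np

def fold_to_2d(signal: np.ndarray, sample_rate_hz: float,
               modulation_period_s: float) -> np.ndarray:
    """Reshape a 1D chromatogram into a 2D (first-D x second-D) array."""
    pts_per_period = int(sample_rate_hz * modulation_period_s)
    n_periods = len(signal) // pts_per_period      # keep whole periods only
    trimmed = signal[: n_periods * pts_per_period]
    # Rows: one per modulation cycle (first-dimension retention time).
    # Columns: samples within a cycle (second-dimension elution time).
    return trimmed.reshape(n_periods, pts_per_period)

# Hypothetical run: 100 Hz detector, 4 s modulation period, 1 h analysis.
rate_hz, period_s = 100.0, 4.0
signal = np.random.default_rng(0).random(int(rate_hz * 3600))  # stand-in for an FID trace
plane = fold_to_2d(signal, rate_hz, period_s)
print(plane.shape)  # (900, 400): 900 first-D slices, 400 second-D points each
```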
There is compelling evidence that components within some UCMs are toxic to marine organisms . The clearance rate (a measure of feeding rate) of mussels was reduced by 40% following exposure to a monoaromatic UCM derived from a Norwegian crude oil. [ 17 ] The toxicity of monoaromatic UCM components was further evidenced by an elegant set of experiments using transplantations of clean and polluted mussels. [ 10 ] Recent analysis by GC×GC-ToF-MS of UCMs extracted from the mussel tissues has shown that they contain a vast array of both known and unknown compounds. [ 11 ] The comparative analysis of UCMs extracted from mussels known to possess high, moderate and low Scope for Growth (SfG), a measure of the capacity for growth and reproduction, [ 31 ] revealed that branched alkylbenzenes represented the largest structural class within the UCM of mussels with low SfG; branched isomers of alkyl tetralins , alkyl indanes and alkyl indenes were also prominent in the stressed mussels. [ 11 ] Laboratory toxicity tests using both commercially available and specially synthesised compounds revealed that such branched alkylated structures were capable of producing the observed poor health of the mussels. [ 11 ] [ 14 ] The reversible effects observed in mussels following exposure to the UCM hydrocarbons identified to date are consistent with a non-specific narcosis (also known as baseline) mode of toxic action. [ 13 ] There is no evidence that toxic UCM components can biomagnify through the food chain . Crabs ( Carcinus maenas ) that were fed a diet of mussels contaminated with environmentally realistic concentrations of branched alkylbenzenes suffered behavioural disruption, but only a small concentration of the compounds was retained in the midgut of the crabs. [ 15 ] Within marsh sediments still contaminated with high concentrations of UCM hydrocarbons from the Florida barge oil spill of 1969 (see above), the behaviour and feeding of fiddler crabs ( Uca pugnax ) were reported to be affected. [ 32 ]
Much of the past research into the composition and toxicity of UCM hydrocarbons has been conducted by the Petroleum and Environmental Geochemistry Group (PEGG) [ 33 ] at the University of Plymouth, UK. As well as the hydrocarbon UCM, oils also contain more polar compounds, such as those containing oxygen, sulphur or nitrogen. These compounds can be very soluble in water and hence bioavailable to marine and aquatic organisms. Polar UCMs are present within produced waters from oil rigs and from oil sands processing. A polar UCM fraction extracted from North Sea oil produced water was reported to elicit hormonal disruption by way of both estrogen receptor agonist and androgen receptor agonist activity. [ 16 ] Ongoing concern regarding the potential toxicity of components within Athabasca Oil Sands (Canada) tailings ponds has highlighted the need for identification of the compounds present. Individual so-called naphthenic acids from oil sands produced waters had long eluded positive identification, but research by PEGG presented at a SETAC conference in 2010 [ 34 ] revealed that, using GC×GC-ToF-MS, it was possible to resolve and identify a range of new compounds within such highly complex extracts. One group of compounds found to be present were tricyclic diamondoid acids. [ 35 ] These structures had previously not even been considered as naphthenic acids, and their presence suggests an unprecedented degree of biodegradation of some of the oil in the oil sands .
|
https://en.wikipedia.org/wiki/Unresolved_complex_mixture
|
The Unruh effect (also known as the Fulling–Davies–Unruh effect ; not to be confused with Unruhe , the German word for "unrest") is a theoretical prediction in quantum field theory that an observer who is uniformly accelerating through empty space will perceive a thermal bath . This means that even in the absence of any external heat sources, an accelerating observer will detect particles and experience a temperature. In contrast, an inertial observer in the same region of spacetime would observe no temperature. [ 1 ]
In other words, the background appears to be warm from an accelerating reference frame . In layman's terms, an accelerating thermometer in empty space (like one being waved around), without any other contribution to its temperature, will record a non-zero temperature, just from its acceleration. Heuristically, for a uniformly accelerating observer, the ground state of an inertial observer is seen as a mixed state in thermodynamic equilibrium with a non-zero temperature bath.
The Unruh effect was first described by Stephen Fulling in 1973, Paul Davies in 1975 and W. G. Unruh in 1976. [ 2 ] [ 3 ] [ 4 ] It is currently not clear whether the Unruh effect has actually been observed, since the claimed observations are disputed. There is also some doubt about whether the Unruh effect implies the existence of Unruh radiation .
The Unruh temperature , sometimes called the Davies–Unruh temperature, [ 5 ] was derived separately by Paul Davies [ 3 ] and William Unruh [ 4 ] and is the effective temperature experienced by a uniformly accelerating detector in a vacuum field . It is given by [ 6 ] $T = \frac{\hbar a}{2\pi c k_{\mathrm{B}}},$
where $\hbar$ is the reduced Planck constant , $a$ is the proper uniform acceleration, $c$ is the speed of light , and $k_{\mathrm{B}}$ is the Boltzmann constant . Thus, for example, a proper acceleration of 2.47 × 10²⁰ m⋅s⁻² corresponds approximately to a temperature of 1 K . Conversely, an acceleration of 1 m⋅s⁻² corresponds to a temperature of 4.06 × 10⁻²¹ K .
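As a quick numerical check of the formula and of the figures quoted above, the following minimal Python sketch evaluates T = ħa/(2πck_B) using standard CODATA constant values:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
K_B = 1.380649e-23      # Boltzmann constant, J/K

def unruh_temperature(a: float) -> float:
    """Temperature (K) seen by a detector with proper acceleration a (m/s^2)."""
    return HBAR * a / (2 * math.pi * C * K_B)

print(unruh_temperature(2.47e20))  # ~1.0 K, as quoted above
print(unruh_temperature(1.0))      # ~4.06e-21 K, as quoted above
print(unruh_temperature(9.81))     # ~4e-20 K for Earth-gravity acceleration
```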
The Unruh temperature has the same form as the Hawking temperature $T_{\mathrm{H}} = \frac{\hbar g}{2\pi c k_{\mathrm{B}}}$, with $g$ denoting the surface gravity of a black hole , which was derived by Stephen Hawking in 1974. [ 7 ] In the light of the equivalence principle , it is, therefore, sometimes called the Hawking–Unruh temperature. [ 8 ]
Solving the Unruh temperature for the uniform acceleration, it can be expressed as $a = \frac{2\pi c k_{\mathrm{B}}}{\hbar} T = 2\pi \frac{T}{T_{\mathrm{P}}}\, a_{\mathrm{P}},$
where $a_{\mathrm{P}}$ is the Planck acceleration and $T_{\mathrm{P}}$ is the Planck temperature .
Unruh demonstrated theoretically that the notion of vacuum depends on the path of the observer through spacetime . From the viewpoint of the accelerating observer, the vacuum of the inertial observer will look like a state containing many particles in thermal equilibrium—a warm gas. [ 9 ]
The Unruh effect would only appear to an accelerating observer. Although the Unruh effect may initially seem counter-intuitive, it makes sense if the word vacuum is interpreted in the following specific way. In quantum field theory , the concept of " vacuum " is not the same as "empty space": space is filled with the quantized fields that make up the universe . Vacuum is simply the lowest possible energy state of these fields.
The energy states of any quantized field are defined by the Hamiltonian , based on local conditions, including the time coordinate. According to special relativity , two observers moving relative to each other must use different time coordinates. If those observers are accelerating, there may be no shared coordinate system. Hence, the observers will see different quantum states and thus different vacua.
In some cases, the vacuum of one observer is not even in the space of quantum states of the other. In technical terms, this comes about because the two vacua lead to unitarily inequivalent representations of the quantum field canonical commutation relations . This is because two mutually accelerating observers may not be able to find a globally defined coordinate transformation relating their coordinate choices.
An accelerating observer will perceive an apparent event horizon forming (see Rindler spacetime ). The existence of Unruh radiation could be linked to this apparent event horizon , putting it in the same conceptual framework as Hawking radiation . On the other hand, the theory of the Unruh effect explains that the definition of what constitutes a "particle" depends on the state of motion of the observer.
The free field needs to be decomposed into positive and negative frequency components before defining the creation and annihilation operators . This can only be done in spacetimes with a timelike Killing vector field. This decomposition happens to be different in Cartesian and Rindler coordinates (although the two are related by a Bogoliubov transformation ). This explains why the "particle numbers", which are defined in terms of the creation and annihilation operators, are different in both coordinates.
The Rindler spacetime has a horizon, and locally any non-extremal black hole horizon is Rindler. So the Rindler spacetime gives the local properties of black holes and cosmological horizons . It is possible to rearrange the metric restricted to these regions to obtain the Rindler metric. [ 10 ] The Unruh effect would then be the near-horizon form of Hawking radiation .
The Unruh effect is also expected to be present in de Sitter space . [ 11 ]
It is worth stressing that the Unruh effect only says that, according to uniformly-accelerated observers, the vacuum state is a thermal state specified by its temperature, and one should resist reading too much into the thermal state or bath. Different thermal states or baths at the same temperature need not be equal, for they depend on the Hamiltonian describing the system. In particular, the thermal bath seen by accelerated observers in the vacuum state of a quantum field is not the same as a thermal state of the same field at the same temperature according to inertial observers. Furthermore, uniformly accelerated observers, static with respect to each other, can have different proper accelerations a (depending on their separation), which is a direct consequence of relativistic red-shift effects. This makes the Unruh temperature spatially inhomogeneous across the uniformly accelerated frame. [ 12 ]
In special relativity , an observer moving with uniform proper acceleration $a$ through Minkowski spacetime is conveniently described with Rindler coordinates , which are related to the standard ( Cartesian ) Minkowski coordinates by $t = \rho \sinh \sigma$ and $x = \rho \cosh \sigma$.
The line element in Rindler coordinates, i.e. Rindler space , is $ds^{2} = -\rho^{2}\, d\sigma^{2} + d\rho^{2},$
where ρ = 1 / a , and where σ is related to the observer's proper time τ by σ = aτ (here c = 1 ).
An observer moving with fixed $\rho$ traces out a hyperbola in Minkowski space, so this type of motion is called hyperbolic motion . The coordinate $\rho$ is related to the Schwarzschild spherical coordinate $r_{S}$ . [ 13 ]
An observer moving along a path of constant ρ is uniformly accelerating, and is coupled to field modes which have a definite steady frequency as a function of σ . These modes are constantly Doppler shifted relative to ordinary Minkowski time as the detector accelerates, and they change in frequency by enormous factors, even after only a short proper time.
Translation in σ is a symmetry of Minkowski space: it can be shown that it corresponds to a boost in the x, t coordinates around the origin. Any time translation in quantum mechanics is generated by the Hamiltonian operator. For a detector coupled to modes with a definite frequency in σ, we can treat σ as "time", and the boost operator is then the corresponding Hamiltonian. In Euclidean field theory, where the minus sign in front of the time in the Rindler metric is changed to a plus sign by multiplying the Rindler time by $i$ (a Wick rotation to imaginary time ), the Rindler metric is turned into a polar-coordinate-like metric. Therefore any rotations must close themselves after 2π in a Euclidean metric to avoid being singular, so the Euclidean Rindler time angle is periodic with period 2π.
A path integral with a real time coordinate is dual to a thermal partition function, related by a Wick rotation . A periodicity $\beta$ in imaginary time corresponds to a temperature $T = 1/\beta$ in thermal quantum field theory . Note that the path integral for this Hamiltonian is closed with period 2π. This means that the $H$ modes are thermally occupied with temperature $1/2\pi$. This is not an actual temperature, because $H$ is dimensionless: it is conjugate to the timelike polar angle σ, which is also dimensionless. To restore the length dimension, note that a mode of fixed frequency $f$ in σ at position ρ has a frequency determined by the square root of the (absolute value of the) metric at ρ, the redshift factor. This can be seen by transforming the time coordinate of a Rindler observer at fixed ρ to that of an inertial, co-moving observer measuring proper time . From the Rindler line element given above, this factor is just ρ. The actual inverse temperature at this point is therefore $\beta = 2\pi\rho.$
It can be shown that the acceleration of a trajectory at constant $\rho$ in Rindler coordinates is equal to $1/\rho$, so the actual inverse temperature observed is $\beta = 2\pi\rho = \frac{2\pi}{a}.$
Restoring units yields $k_{\mathrm{B}} T = \frac{\hbar a}{2\pi c}.$
The temperature of the vacuum, seen by an isolated observer accelerating at the Earth's gravitational acceleration of g = 9.81 m·s⁻², is only 4 × 10⁻²⁰ K . For an experimental test of the Unruh effect it is planned to use accelerations up to 10²⁶ m·s⁻², which would give a temperature of about 400 000 K . [ 14 ] [ 15 ]
The Rindler derivation of the Unruh effect is unsatisfactory to some [ who? ] , since the detector's path is super-deterministic . Unruh later developed the Unruh–DeWitt particle detector model to circumvent this objection.
The Unruh effect would also cause the decay rate of accelerating particles to differ from inertial particles. Stable particles like the electron could have nonzero transition rates to higher mass states when accelerating at a high enough rate. [ 16 ] [ 17 ] [ 18 ]
Although Unruh's prediction that an accelerating detector would see a thermal bath is not controversial, the interpretation of the transitions in the detector in the non-accelerating frame is. [ citation needed ] It is widely, although not universally, believed that each transition in the detector is accompanied by the emission of a particle, and that this particle will propagate to infinity and be seen as Unruh radiation .
The existence of Unruh radiation is not universally accepted. Smolyaninov claims that it has already been observed, [ 19 ] while O'Connell and Ford claim that it is not emitted at all. [ 20 ] While these skeptics accept that an accelerating object thermalizes at the Unruh temperature, they do not believe that this leads to the emission of photons, arguing that the emission and absorption rates of the accelerating particle are balanced.
Researchers claim experiments that successfully detected the Sokolov–Ternov effect [ 21 ] may also detect the Unruh effect under certain conditions. [ 22 ]
Theoretical work in 2011 suggests that accelerating detectors could be used for the direct detection of the Unruh effect with current technology. [ 23 ]
The Unruh effect may have been observed for the first time in 2019 in the high energy channeling radiation explored by the NA63 experiment at CERN. [ 24 ]
|
https://en.wikipedia.org/wiki/Unruh_effect
|
An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond.
A saturated fat has no carbon-to-carbon double bonds, so the maximum possible number of hydrogen atoms is bonded to the carbon chain, which is thus said to be "saturated" with hydrogen atoms. To form carbon-to-carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism , unsaturated fat molecules contain less energy (i.e., fewer calories ) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid), the more susceptible it becomes to lipid peroxidation ( rancidity ). Antioxidants can protect unsaturated fat from lipid peroxidation.
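The relationship between double bonds and hydrogen count can be made quantitative with the standard degree-of-unsaturation formula. The following Python sketch applies it to three common C18 fatty acids; it assumes straight-chain acids with a single carboxyl group, whose C=O accounts for one degree of unsaturation:

```python
def degrees_of_unsaturation(c: int, h: int) -> int:
    """Rings + pi bonds for a molecule C_c H_h O_x (oxygen does not change it)."""
    return (2 * c + 2 - h) // 2

def cc_double_bonds_in_fatty_acid(c: int, h: int) -> int:
    """For a straight-chain fatty acid, one degree is the carboxyl C=O;
    the remainder are carbon-carbon double bonds."""
    return degrees_of_unsaturation(c, h) - 1

print(cc_double_bonds_in_fatty_acid(18, 36))  # stearic acid C18H36O2 -> 0 (saturated)
print(cc_double_bonds_in_fatty_acid(18, 34))  # oleic acid C18H34O2 -> 1 (monounsaturated)
print(cc_double_bonds_in_fatty_acid(18, 32))  # linoleic acid C18H32O2 -> 2 (polyunsaturated)
```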
In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation using gas chromatography . [ 1 ] Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography. [ 2 ]
The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid . Linolenic acid comprises most of the triunsaturated fatty acid component.
Although polyunsaturated fats are protective against cardiac arrhythmias , a study of post- menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis , whereas monounsaturated fat is not. [ 4 ] This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation , against which vitamin E has been shown to be protective. [ 5 ]
Examples of unsaturated fatty acids are palmitoleic acid , oleic acid , myristoleic acid , linoleic acid , and arachidonic acid . Foods containing unsaturated fats include avocado , nuts , olive oils , and vegetable oils such as canola . Meat products contain both saturated and unsaturated fats.
Although unsaturated fats are conventionally regarded as 'healthier' than saturated fats, [ 6 ] the United States Food and Drug Administration (FDA) recommendation stated that the amount of unsaturated fat consumed should not exceed 30% of one's daily caloric intake. [ 7 ] Most foods contain both unsaturated and saturated fats. Marketers [ who? ] advertise only one or the other, depending on which one makes up the majority. Thus, various unsaturated fat vegetable oils, such as olive oils, also contain saturated fat. [ 8 ]
Studies on the cell membranes of mammals and reptiles found that mammalian cell membranes are composed of a higher proportion of polyunsaturated fatty acids ( DHA , an omega-3 fatty acid ) than those of reptiles . [ 9 ] Studies of bird fatty acid composition have noted similar proportions to mammals, but with about one-third fewer omega-3 fatty acids relative to omega-6 for a given body size. [ 10 ] This fatty acid composition results in a more fluid cell membrane, but also one that is more permeable to various ions (H+ and Na+), resulting in cell membranes that are more costly to maintain. This maintenance cost has been argued to be one of the key causes of the high metabolic rates and concomitant warm-bloodedness of mammals and birds. [ 9 ] However, polyunsaturation of cell membranes may also occur in response to chronic cold temperatures. In fish, increasingly cold environments lead to increasingly high cell membrane content of both monounsaturated and polyunsaturated fatty acids, which maintains greater membrane fluidity (and functionality) at lower temperatures . [ 11 ] [ 12 ]
|
https://en.wikipedia.org/wiki/Unsaturated_fat
|
The unseen species problem in ecology deals with the estimation of the number of species represented in an ecosystem that were not observed by samples. It more specifically relates to how many new species would be discovered if more samples were taken in an ecosystem. The study of the unseen species problem was started in the early 1940s, by Alexander Steven Corbet . He spent two years in British Malaya trapping butterflies and was curious how many new species he would discover if he spent another two years trapping. Many different estimation methods have been developed to determine how many new species would be discovered given more samples.
The unseen species problem also applies more broadly, as the estimators can be used to estimate any new elements of a set not previously found in samples. An example of this is determining how many words William Shakespeare knew based on all of his written works. [ 1 ]
The unseen species problem can be broken down mathematically as follows: if $n$ independent samples $X^{n} \triangleq X_{1}, \ldots, X_{n}$ are taken, and then $m$ further independent samples $X_{n+1}^{m+n} \triangleq X_{n+1}, \ldots, X_{n+m}$ are taken, the number of unseen species that will be discovered by the additional samples is given by $U \triangleq U(X^{n}, X_{n+1}^{m+n}) \triangleq \left|\{X_{n+1}^{m+n}\} \setminus \{X^{n}\}\right|.$
In the early 1940s, Alexander Steven Corbet spent two years in British Malaya trapping butterflies. [ 2 ] He kept track of how many species he observed and how many members of each species were captured. For example, there were 74 different species of which he captured exactly two individuals each.
When Corbet returned to the United Kingdom, he approached biostatistician Ronald Fisher and asked how many new species of butterflies he could expect to catch if he went trapping for another two years; [ 3 ] in essence, Corbet was asking how many species he observed zero times.
Fisher responded with a simple estimate: in an additional two years of trapping, Corbet could expect to capture 75 new species. He arrived at this using a simple alternating summation (with the data provided by Orlitsky [ 3 ] in the table of the example below): $U = \sum_{i=1}^{n}(-1)^{i+1}\varphi_{i} = 118 - 74 + 44 - 24 + \cdots - 12 + 6 = 75.$ Here $\varphi_{i}$ is the number of species that were observed exactly $i$ times. Fisher's sum was later confirmed by the Good–Toulmin estimator. [ 2 ]
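Fisher's alternating sum is simple to compute. The Python sketch below implements it as a function of the prevalences φ_i. Note that only the first four values of Corbet's table (118, 74, 44, 24) appear explicitly in the text above, so the sample input here is truncated and illustrative; it will not reproduce the full total of 75.

```python
def unseen_species(phi: list[int]) -> int:
    """Fisher's estimate U = sum_{i>=1} (-1)^(i+1) * phi_i,
    where phi[i-1] is the number of species observed exactly i times."""
    return sum((-1) ** (i + 1) * f for i, f in enumerate(phi, start=1))

phi = [118, 74, 44, 24]     # truncated: only the first four prevalences are quoted
print(unseen_species(phi))  # 64 on this partial input (75 with the full table)
```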
To estimate the number of unseen species, let $t \triangleq m/n$ be the number of future samples ($m$) divided by the number of past samples ($n$), so that $m = tn$. Let $\varphi_{i}$ be the number of species observed exactly $i$ times (for example, if there were 74 species of butterflies with exactly two observed members throughout the samples, then $\varphi_{2} = 74$).
The Good–Toulmin (GT) estimator was developed by Good and Toulmin in 1953. [ 4 ] The estimate of the unseen species based on the Good–Toulmin estimator is given by $U^{\text{GT}} \triangleq U^{\text{GT}}(X^{n}, t) \triangleq -\sum_{i=1}^{\infty}(-t)^{i}\varphi_{i}.$ The Good–Toulmin estimator has been shown to be a good estimate for values of $t \leq 1$. It also approximately satisfies $\operatorname{\mathbb{E}}(U^{\text{GT}} - U)^{2} \lesssim n t^{2}$, which means that $U^{\text{GT}}$ estimates $U$ to within $\sqrt{n} \cdot t$ as long as $t \leq 1$.
However, for $t > 1$, the Good–Toulmin estimator fails to produce accurate results. This is because, for any $i$ with $\varphi_{i} > 0$, the magnitude of the term $(-t)^{i}\varphi_{i}$ grows geometrically in $i$, so $U^{\text{GT}}$ grows super-linearly in $t$, whereas $U$ can grow at most linearly with $t$. Therefore, when $t > 1$, $U^{\text{GT}}$ grows faster than $U$ and does not approximate the true value. [ 3 ]
To compensate for this, Efron and Thisted in 1976 [ 1 ] showed that a truncated Euler transform can also be a usable estimate (the "ET" estimate): $U^{\text{ET}} \triangleq \sum_{i=1}^{n} h_{i}^{\text{ET}} \cdot \varphi_{i},$ with $h_{i}^{\text{ET}} \triangleq (-t)^{i+1} \cdot \mathbb{P}(X \geq i),$ where $X \sim \operatorname{Bin}\left(k, \frac{1}{1+t}\right)$ and $\mathbb{P}(X \geq i) = \begin{cases} \sum_{j=i}^{k} \binom{k}{j} \frac{t^{k-j}}{(1+t)^{k}} & \text{for } i \leq k, \\ 0 & \text{for } i > k, \end{cases}$ with $k$ the location chosen to truncate the Euler transform.
Similar to the approach by Efron and Thisted, Alon Orlitsky , Ananda Theertha Suresh, and Yihong Wu developed the smooth Good–Toulmin estimator. They realized that the Good–Toulmin estimator fails because of the exponential growth of its coefficients, and not because of its bias. [ 3 ] Therefore, they estimated the number of unseen species by truncating the series: $U^{l} \triangleq -\sum_{i=1}^{l}(-t)^{i}\varphi_{i}.$ Orlitsky, Suresh, and Wu also noted that, for distributions with $t > 1$, the driving term in the summation estimate is the $l$-th term, regardless of which value of $l$ is chosen. [ 2 ] To solve this, they selected a random nonnegative integer $L$, truncated the series at $L$, and then took the average over a distribution of $L$: [ 3 ] $U^{L} = \operatorname{E}_{L}\left[-\sum_{i=1}^{L}(-t)^{i}\varphi_{i}\right].$ This method was chosen because the bias of $U^{l}$ alternates in sign due to the $(-t)^{i}$ coefficient, so averaging over a distribution of $L$ reduces the bias. This means that the estimator can be written as a linear combination of the prevalences: [ 2 ] $U^{L} = \operatorname{E}_{L}\left[-\sum_{i\geq 1}(-t)^{i}\varphi_{i}\mathbf{1}_{i\leq L}\right] = -\sum_{i\geq 1}(-t)^{i}\Pr(L\geq i)\,\varphi_{i}.$ Depending on the distribution of $L$ chosen, the results will vary. With this method, estimates can be made for $t \propto \ln n$, and this is the best possible. [ 3 ]
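A minimal sketch of the smoothed estimator follows, assuming a Poisson smoothing distribution for L (one of the choices analysed by Orlitsky, Suresh, and Wu); the parameter r and the input prevalences are illustrative assumptions, not values taken from the paper.

```python
import math

def poisson_tail(i: int, r: float) -> float:
    """P(L >= i) for L ~ Poisson(r)."""
    return 1.0 - sum(math.exp(-r) * r**j / math.factorial(j) for j in range(i))

def smoothed_good_toulmin(phi: list[int], t: float, r: float) -> float:
    """U^L = -sum_i (-t)^i * P(L >= i) * phi_i, with phi[i-1] = phi_i."""
    return -sum((-t) ** i * poisson_tail(i, r) * f
                for i, f in enumerate(phi, start=1))

phi = [118, 74, 44, 24]  # truncated Corbet prevalences, for illustration only
print(smoothed_good_toulmin(phi, t=2.0, r=3.0))
```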
The species discovery curve can also be used. This curve relates the number of species found in an area to the time spent sampling. These curves can also be created by using estimators (such as the Good–Toulmin estimator) and plotting the number of unseen species at each value of $t$. [ 5 ]
A species discovery curve is always increasing, as there is never a sample that could decrease the number of discovered species. Furthermore, the species discovery curve is also decelerating – the more samples taken, the fewer unseen species are expected to be discovered. The species discovery curve will also never asymptote, as it is assumed that although the discovery rate might become infinitely slow, it will never actually stop. [ 5 ] Two common models for a species discovery curve are the logarithmic and the exponential function .
As an example, consider the data Corbet provided to Fisher in the 1940s. [ 3 ] Using the Good–Toulmin model, the number of unseen species is found using $U = -\sum_{i=1}^{\infty}(-t)^{i}\varphi_{i}.$ This can then be used to create a relationship between $t$ and $U$.
This relationship is shown in the plot below.
From the plot, it is seen that at $t = 1$, which was the value of $t$ that Corbet brought to Fisher, the resulting estimate of $U$ is 75, matching what Fisher found. This plot also acts as a species discovery curve for this ecosystem and defines how many new species will be discovered as $t$ increases (and more samples are taken).
There are numerous uses for the predictive algorithm. Knowing that the estimators are accurate, scientists can reliably extrapolate the results of polling people by a factor of two: they can predict the number of unique answers based on the number of people that have answered similarly. The method can also be used to determine the extent of someone's knowledge.
Based on the research of Shakespeare's known works done by Thisted and Efron, there are 884,647 total words. [ 1 ] The research also found a total of $N = 864$ different words that appear more than 100 times, and the total number of unique words was found to be 31,534. [ 1 ] Applying the Good–Toulmin model, if an equal number of new works by Shakespeare were discovered, then it is estimated that $U^{\text{words}} \approx 11{,}460$ unique words would be found. The goal would be to derive $U^{\text{words}}$ for $t = \infty$. Thisted and Efron estimate that $U^{\text{words}}(t \to \infty) \approx 35{,}000$, meaning that Shakespeare most likely knew over twice as many words as he actually used in all of his writings. [ 1 ]
|
https://en.wikipedia.org/wiki/Unseen_species_problem
|
A radionuclide ( radioactive nuclide , radioisotope or radioactive isotope ) is a nuclide that has excess numbers of either neutrons or protons , giving it excess nuclear energy, and making it unstable. This excess energy can be used in one of three ways: emitted from the nucleus as gamma radiation ; transferred to one of its electrons to release it as a conversion electron ; or used to create and emit a new particle ( alpha particle or beta particle ) from the nucleus. During those processes, the radionuclide is said to undergo radioactive decay . [ 1 ] These emissions are considered ionizing radiation because they are energetic enough to liberate an electron from another atom. The radioactive decay can produce a stable nuclide or will sometimes produce a new unstable radionuclide which may undergo further decay. Radioactive decay is a random process at the level of single atoms: it is impossible to predict when one particular atom will decay. [ 2 ] [ 3 ] [ 4 ] [ 5 ] However, for a collection of atoms of a single nuclide the decay rate, and thus the half-life ($t_{1/2}$) for that collection, can be calculated from their measured decay constants . The range of the half-lives of radioactive atoms has no known limits and spans a time range of over 55 orders of magnitude.
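As a small worked example of the relation between the decay constant λ and the half-life, the following Python sketch (with a hypothetical decay constant) evaluates t_1/2 = ln 2 / λ and the exponential decay law N(t) = N_0 e^(−λt):

```python
import math

def half_life(decay_constant: float) -> float:
    """Half-life t_1/2 = ln(2) / lambda, in the time units of lambda."""
    return math.log(2) / decay_constant

def remaining(n0: float, decay_constant: float, t: float) -> float:
    """Expected number of undecayed atoms after time t: N0 * exp(-lambda*t)."""
    return n0 * math.exp(-decay_constant * t)

lam = 1.0e-3                                 # hypothetical decay constant, per second
print(half_life(lam))                        # ~693.1 s
print(remaining(1e20, lam, half_life(lamly := half_life(lam)) if False else half_life(lam)))  # see note below
```

(The last line simply evaluates N at one half-life; written plainly: `print(remaining(1e20, lam, half_life(lam)))`, which gives ~5e19, i.e. half the atoms remain.)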
Radionuclides occur naturally or are artificially produced in nuclear reactors , cyclotrons , particle accelerators or radionuclide generators . There are about 730 radionuclides with half-lives longer than 60 minutes (see list of nuclides ). Thirty-two of those are primordial radionuclides that were created before the Earth was formed. At least another 60 radionuclides are detectable in nature, either as daughters of primordial radionuclides or as radionuclides produced through natural production on Earth by cosmic radiation. More than 2400 radionuclides have half-lives less than 60 minutes. Most of those are only produced artificially, and have very short half-lives. For comparison, there are 251 stable nuclides .
All chemical elements can exist as radionuclides. Even the lightest element, hydrogen , has a well-known radionuclide, tritium . Elements heavier than lead , and the elements technetium and promethium , exist only as radionuclides.
Unplanned exposure to radionuclides generally has a harmful effect on living organisms including humans, although low levels of exposure occur naturally without harm. The degree of harm will depend on the nature and extent of the radiation produced, the amount and nature of exposure (close contact, inhalation or ingestion), and the biochemical properties of the element; with increased risk of cancer the most usual consequence. However, radionuclides with suitable properties are used in nuclear medicine for both diagnosis and treatment. An imaging tracer made with radionuclides is called a radioactive tracer . A pharmaceutical drug made with radionuclides is called a radiopharmaceutical .
On Earth, naturally occurring radionuclides fall into three categories: primordial radionuclides, secondary radionuclides, and cosmogenic radionuclides.
Many of these radionuclides exist only in trace amounts in nature, including all cosmogenic nuclides. Secondary radionuclides will occur in proportion to their half-lives, so short-lived ones will be very rare. For example, polonium can be found in uranium ores at about 0.1 mg per metric ton (1 part in 10¹⁰). [ 7 ] [ 8 ] Further radionuclides may occur in nature in virtually undetectable amounts as a result of rare events such as spontaneous fission or uncommon cosmic ray interactions.
Radionuclides are produced as an unavoidable result of nuclear fission and thermonuclear explosions . The process of nuclear fission creates a wide range of fission products , most of which are radionuclides. Further radionuclides can be created from irradiation of the nuclear fuel (creating a range of actinides ) and of the surrounding structures, yielding activation products . This complex mixture of radionuclides with different chemistries and radioactivity makes handling nuclear waste and dealing with nuclear fallout particularly problematic. [ citation needed ]
Synthetic radionuclides are deliberately synthesised using nuclear reactors , particle accelerators or radionuclide generators: [ 9 ]
Radionuclides are used in two major ways: either for their radiation alone ( irradiation , nuclear batteries ) or for the combination of chemical properties and their radiation (tracers, biopharmaceuticals).
The following table lists properties of selected radionuclides illustrating the range of properties and uses.
Key: Z = atomic number ; N = neutron number ; DM = decay mode; DE = decay energy; EC = electron capture
Radionuclides are present in many homes as they are used inside the most common household smoke detectors . The radionuclide used is americium-241 , which is created by bombarding plutonium with neutrons in a nuclear reactor. It decays by emitting alpha particles and gamma radiation to become neptunium-237 . Smoke detectors use a very small quantity of 241 Am (about 0.29 micrograms per smoke detector) in the form of americium dioxide . 241 Am is used as it emits alpha particles which ionize the air in the detector's ionization chamber . A small electric voltage is applied to the ionized air which gives rise to a small electric current. In the presence of smoke, some of the ions are neutralized, thereby decreasing the current, which activates the detector's alarm. [ 14 ] [ 15 ]
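As a rough cross-check of these figures, the activity of 0.29 micrograms of americium-241 can be estimated from its half-life of about 432.2 years. This is a back-of-the-envelope Python sketch that treats the source as pure ²⁴¹Am, a simplifying assumption:

```python
import math

AVOGADRO = 6.02214076e23   # atoms per mole
YEAR_S = 3.156e7           # seconds per year, approximate

mass_g = 0.29e-6                       # grams of Am-241 per detector (quoted above)
atoms = mass_g / 241.0 * AVOGADRO      # molar mass of Am-241 is ~241 g/mol
lam = math.log(2) / (432.2 * YEAR_S)   # decay constant, per second

print(lam * atoms)  # ~3.7e4 Bq, i.e. roughly 1 microcurie per smoke detector
```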
Radionuclides that find their way into the environment may cause harmful effects as radioactive contamination . They can also cause damage if they are excessively used during treatment or in other ways exposed to living beings, by radiation poisoning . Potential health damage from exposure to radionuclides depends on a number of factors, and "can damage the functions of healthy tissue/organs. Radiation exposure can produce effects ranging from skin redness and hair loss, to radiation burns and acute radiation syndrome . Prolonged exposure can lead to cells being damaged and in turn lead to cancer. Signs of cancerous cells might not show up until years, or even decades, after exposure." [ 16 ]
Following is a summary table for the list of 989 nuclides with half-lives greater than one hour. A total of 251 nuclides have never been observed to decay, and are classically considered stable. Of these, 90 are believed to be absolutely stable except to proton decay (which has never been observed), while the rest are " observationally stable " and theoretically can undergo radioactive decay with extremely long half-lives.
The remaining tabulated radionuclides have half-lives longer than 1 hour, and are well-characterized (see list of nuclides for a complete tabulation). They include 30 nuclides with measured half-lives longer than the estimated age of the universe (13.8 billion years [ 17 ] ), and another four nuclides with half-lives long enough (> 100 million years) that they are radioactive primordial nuclides , and may be detected on Earth, having survived from their presence in interstellar dust since before the formation of the Solar System , about 4.6 billion years ago. Another 60+ short-lived nuclides can be detected naturally as daughters of longer-lived nuclides or cosmic-ray products. The remaining known nuclides are known solely from artificial nuclear transmutation .
Numbers are not exact, and may change slightly in the future, as "stable nuclides" are observed to be radioactive with very long half-lives.
This is a summary table [ 18 ] for the 989 nuclides with half-lives longer than one hour (including those that are stable), given in list of nuclides .
This list covers common isotopes, most of which are available in very small quantities to the general public in most countries. Others that are not publicly accessible are traded commercially in industrial, medical, and scientific fields and are subject to government regulation.
|
https://en.wikipedia.org/wiki/Unstable_isotope
|
Unsuccessful transfer or abortive transfer is any bacterial DNA transfer from donor cells to recipient cells that fails to be replicated during cell division (in other words, the incoming DNA does not become inherited). This may be due to the causes described in the quotation below.
As a result of the abortive transfer, among all daughter cells of the recipient cell, only one cell will be holding the transferred DNA. Genes that are located on an abortively transferred piece of DNA can still express in the recipient cell. [ 1 ] [ 2 ] [ 3 ]
Abortive transfer can happen after transduction, transformation, or conjugation, the three main types of genetic exchange in bacteria. [ 1 ] Abortive transduction is especially frequent. [ 4 ]
Rieger, Michaelis, and Green, in 1976 stated:
"' abortive transfer – any DNA transfer from bacterial donor to recipients cells that fails to establish the incoming DNA as part of the hereditary material of the recipient. A. t. has been observed following → transduction → transformation, and → conjugation. In all cases, the transmitted fragment is diluted out as the culture grows. Failure of integration of transferred DNA into the hereditary material of the recipient cell may be due to: 1. The failure of incoming DNA to form circular molecules; 2. circularization takes place, but the circular molecule fails to take up maintenance system. A. t. of the extrachromosomal elements (→ plasmids) as opposed to chromosomal fragments, is relatively uncommon elements since plasmids are genetic elements of autonomous survival in a bacterial cell. It is only when a mutation in the recipient or a resident plasmid makes the host component of the plasmid maintenance system inactive that a. t. of a plasmid occurs. Genes carried on abortive pieces of DNA may be expressed in the recipient cells. [ 1 ]
|
https://en.wikipedia.org/wiki/Unsuccessful_transfer
|
In molecular genetics , an untranslated region (or UTR ) refers to either of two sections, one on each side of a coding sequence on a strand of mRNA . If it is found on the 5' side , it is called the 5' UTR (or leader sequence ), or if it is found on the 3' side , it is called the 3' UTR (or trailer sequence ). mRNA is RNA that carries information from DNA to the ribosome , the site of protein synthesis ( translation ) within a cell. The mRNA is initially transcribed from the corresponding DNA sequence and then translated into protein. However, several regions of the mRNA are usually not translated into protein, including the 5' and 3' UTRs.
Although they are called untranslated regions and do not form the protein-coding region of the gene, upstream open reading frames (uORFs) located within the 5' UTR can be translated into peptides . [ 1 ]
The 5' UTR is upstream from the coding sequence. Within the 5' UTR is a sequence that is recognized by the ribosome which allows the ribosome to bind and initiate translation. The mechanism of translation initiation differs in prokaryotes and eukaryotes . The 3' UTR is found immediately following the translation stop codon . The 3' UTR plays a critical role in translation termination as well as post-transcriptional modification . [ 2 ]
These often long sequences were once thought to be useless or junk mRNA that has simply accumulated over evolutionary time. However, it is now known that the untranslated region of mRNA is involved in many regulatory aspects of gene expression in eukaryotic organisms. The importance of these non-coding regions is supported by evolutionary reasoning, as natural selection would have otherwise eliminated this unusable RNA.
It is important to distinguish the 5' and 3' UTRs from other non-protein-coding RNA . Within the coding sequence of pre-mRNA , there can be found sections of RNA that will not be included in the protein product. These sections of RNA are called introns . The RNA that results from RNA splicing is a sequence of exons . The reason why introns are not considered untranslated regions is that the introns are spliced out in the process of RNA splicing. The introns are not included in the mature mRNA molecule that will undergo translation and are thus considered non-protein-coding RNA.
The untranslated regions of mRNA became a subject of study as early as the late 1970s, after the first mRNA molecule was fully sequenced. In 1978, the 5' UTR of the human gamma-globin mRNA was fully sequenced. [ 3 ] In 1980, a study was conducted on the 3' UTR of the duplicated human alpha-globin genes. [ 4 ]
The untranslated region is seen in prokaryotes and eukaryotes, although the length and composition may vary. In prokaryotes, the 5' UTR is typically between 3 and 10 nucleotides long. In eukaryotes, the 5' UTR can be hundreds to thousands of nucleotides long. This is consistent with the higher complexity of the genomes of eukaryotes compared to prokaryotes. The 3' UTR varies in length as well. The poly-A tail is essential for keeping the mRNA from being degraded. Although there is variation in lengths of both the 5' and 3' UTR, it has been seen that the 5' UTR length is more highly conserved in evolution than the 3' UTR length. [ 5 ]
The 5' UTR of prokaryotes consists of the Shine–Dalgarno sequence (5'-AGGAGGU-3'). [ 6 ] This sequence is found 3-10 base pairs upstream from the initiation codon. The initiation codon is the start site of translation into protein.
The 5' UTR of eukaryotes is more complex than that of prokaryotes. It contains the Kozak consensus sequence (ACCAUGG), [ 7 ] which includes the initiation codon, the start site of translation into protein.
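As an illustration only (real initiation-site prediction is considerably more involved), a toy scan for the two ribosome-binding motifs just described might look like the following Python sketch; the example mRNA string and the simplified Kozak rule (a purine three bases upstream of the AUG and a G immediately after it) are assumptions for demonstration:

```python
import re

def find_shine_dalgarno(mrna: str) -> list[int]:
    """Return 0-based positions of the core Shine-Dalgarno motif AGGAGG."""
    return [m.start() for m in re.finditer("AGGAGG", mrna)]

def find_kozak_like(mrna: str) -> list[int]:
    """Return positions of AUG codons in a simplified Kozak-like context:
    A or G three bases upstream of the AUG, and G immediately after it."""
    return [m.start(1) for m in re.finditer(r"(?=[AG]..(AUG)G)", mrna)]

utr_and_cds = "GGAGGAGGUUUUACCAUGGCUAAA"  # hypothetical prokaryote-style mRNA
print(find_shine_dalgarno(utr_and_cds))    # [2]: motif in the 5' UTR
print(find_kozak_like(utr_and_cds))        # [15]: the AUG start codon
```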
The importance of these untranslated regions of mRNA is just beginning to be understood. Various medical studies have found connections between mutations in untranslated regions and increased risk of developing particular diseases, such as cancer. For example, associations between polymorphisms in the HLA-G 3′ UTR region and the development of colorectal cancer have been discovered. [ 8 ] Single-nucleotide polymorphisms in the 3' UTR of another gene have also been associated with susceptibility to preterm birth . [ 9 ] Mutations in the 3' UTR of the APP gene are related to the development of cerebral amyloid angiopathy . [ 10 ]
Through the recent study of untranslated regions, general information has been gathered about the nature and function of these elements. However, there is still much that is unknown about these regions of mRNA. Since the regulation of gene expression is critical in the proper function of cells, this is an area of study that needs to be investigated further. It is important to consider that mutations in 3' untranslated regions have the potential to alter the expression of several genes that may appear unrelated. [ 11 ] We are only beginning to understand the links between proper untranslated region function, and disease states of cells.
|
https://en.wikipedia.org/wiki/Untranslated_region
|
An unused drug or leftover drug is medicine which remains after the consumer has stopped using it. Individual patients may have leftover medicines at the end of their treatment. Health care organizations may keep larger amounts of drugs as part of providing care to a community, and may have unused drugs for a range of reasons. Unused drugs should be disposed of completely, to eliminate the toxic effects that improperly discarded drugs can have on flora and fauna. The improper disposal of unused drugs can contaminate surface, ground, and drinking water. [ 1 ] Discharge of unused antibiotics and disinfectants into the sewage system may harm aquatic life or contaminate drinking water.
Determining appropriate ways to dispose of unused medications can prevent many environmental contamination problems. Several studies provide evidence of the toxic effects on the environment of medications that are disposed of inappropriately. [ 2 ] [ 3 ] [ 4 ]
Various circumstances may cause a consumer to have unused drugs. The consumer might find that their medication is ineffective and quit taking it. [ 5 ] The medicine might be effective, but the consumer might not adhere to their treatment and fail to take it for any reason. [ 5 ] A patient might die, leaving their medications behind. [ 5 ] A patient might move, such as from a hospital to their home, and somehow leave their unused drugs behind with the health care provider. [ 5 ]
Some medical professional practices lead to patients having unused drugs. [ 6 ] Physicians may prescribe more than they should. [ 6 ] Physicians and patients might see each other less often than they should, and the physician might agree to prescribe medication for a longer period of time than is best. [ 6 ] The physician might neglect to review what medications a patient already has, and recommend more. [ 6 ] The medical office might have confused records about what drugs a patient has, especially for offices without full computer records. [ 6 ] Also a physician might provide drugs inappropriately in unnecessary health care . [ 6 ]
Many consumers store unused drugs. [ 6 ] Many health care organizations come to acquire large amounts of unused drugs. [ 6 ] Volunteers at health centres must understand the importance of proper drug disposal systems. The EPA and the FDA want unwanted or expired drugs disposed of completely. [ 7 ] [ 8 ]
Consumer organizations recommend that individuals be thoughtful about their unused drugs. Storing unused drugs at home can be a safety hazard. Drug disposal is often the right choice for consumers. Some regions offer government or nonprofit programs for the collection of unused drugs .
Governments and organizations can have larger stockpiles of drugs than any consumer, and a different set of concerns. World-leading organizations such as the WHO and UNICEF have recommended several appropriate and safe drug disposal options, as well as measures for preventing unnecessary drug use. With large supplies of drugs, drug pollution and the negative environmental impact of pharmaceuticals and personal care products become concerns. Drug recycling might also be a possibility.
Collection of unused drugs, also called drug return or drug take-back, is any program for individual consumers to dispose of drugs by returning their unused drugs to a collection center. One survey of consumers found that individuals like the idea of pharmacies accepting drug returns. [ 9 ]
Drug return programs can reduce the environmental impact of pharmaceuticals and personal care products . [ 10 ]
Various research projects have investigated drug return programs at pharmacies in particular regions. Studied places include the United States, [ 11 ] Britain, [ 12 ] France, [ 13 ] Switzerland, [ 14 ] Sweden, [ 15 ] [ 16 ] Serbia, [ 17 ] and Germany. [ 18 ]
People in the United States tend to store unused opioids if any remain unused after medical treatment. [ 19 ] Keeping unused opioids can be particularly dangerous because of substantial risk of their being misused. [ 20 ]
|
https://en.wikipedia.org/wiki/Unused_drug
|
An unusual mortality event (UME) is a term in United States environmental law that refers to a set of strandings, morbidities, or mortalities of marine mammals that are significant, unexpected, and demanding of an immediate response. [ 1 ] While the term is only officially defined in a statute in the US, it has been employed unofficially by cetacean conservation agencies and organizations internationally as well.
The United States Marine Mammal Protection Act (MMPA) defines an Unusual Mortality Event (UME) as "a stranding event that is unexpected, involves a significant die-off of any marine mammal population, and demands immediate response." [ 1 ] Additionally, the law sets out seven criteria that may make a mortality event "unusual." These are:
The national Working Group on Marine Mammal Unusual Mortality Events, consisting of a group of marine mammal health experts, assesses mortality events, and if it finds that one meets one or more of these criteria, it recommends that NOAA's Assistant Administrator for Fisheries declare a UME. [ 1 ]
The NOAA has declared 72 marine mammal UMEs since 1991. [ 2 ] Of these, five remained open as of November 2023.
Marine mammal mortality events fitting the criteria of a UME are not confined to waters under U.S. jurisdiction. While the concept of a UME is not officially defined in the laws of any other countries, there have been several examples of European management agencies or organizations borrowing the term to define an ongoing mortality event under their purview.
In 2013, the executive officer of the Irish Whale and Dolphin Group published an essay on the group's website asking, "Are we Experiencing an Unusual Mortality Event (UME) in Ireland?" The concerns centered around increased strandings of various species of dolphins, paralleling a declared UME relating to bottlenose dolphins in the U.S. at the time. [ 3 ]
A 2018 Advisory Committee meeting of ASCOBANS (a multilateral agreement to protect small cetaceans in the Baltic, Irish, and North Seas as well as the northeast Atlantic Ocean) included a presentation that affirmed the existence of a UME relating to beaked whales in the UK and Ireland. [ 4 ]
A best practices document jointly published by ASCOBANS and ACCOBAMS (a similar agreement covering the Black Sea, Mediterranean Sea, and contiguous Atlantic area west of the Straits of Gibraltar) included the U.S. definition of UME more or less verbatim. [ 5 ]
|
https://en.wikipedia.org/wiki/Unusual_mortality_event
|
" Up tack " is the Unicode name for a symbol ( ⊥ , \bot in LaTeX , U+22A5 in Unicode [ 1 ] ) that is also called " bottom ", [ 2 ] " falsum ", [ 3 ] " absurdum ", [ 4 ] or " the absurdity symbol ", [ 5 ] [ 6 ] depending on context. It is used to represent:
as well as
The glyph of the up tack appears as an upside-down tee symbol , and as such is sometimes called eet (the word "tee" in reverse). [ 7 ] [ 8 ] Tee plays a complementary or dual role in many of these theories.
The similar-looking perpendicular symbol ( ⟂ , \perp in LaTeX, U+27C2 in Unicode) is a binary relation symbol used to represent, for example, perpendicularity of lines in geometry and orthogonality in linear algebra; in number theory it is sometimes used to denote coprimality.
Historically, in character sets before Unicode 4.1 (March 2005), such as Unicode 4.0 [ 9 ] and JIS X 0213, the perpendicular symbol was encoded with the same code point as the up tack, specifically U+22A5 in Unicode 4.0. [ 10 ] This overlap is reflected in the fact that the HTML entities &perp; and &bot; both refer to the same code point U+22A5, as shown in the HTML entity list . In March 2005, Unicode 4.1 introduced the distinct symbol "⟂" (U+27C2 "PERPENDICULAR") with a reference back to ⊥ (U+22A5 "UP TACK") and a note that it is "typeset with additional spacing". [ 11 ]
The double tack up symbol ( ⫫ , U+2AEB in Unicode [ 1 ] ) is a binary relation symbol used most commonly to represent the independence of random variables in probability theory.
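The code-point distinctions described above can be checked directly with Python's built-in unicodedata and html modules; this is a quick sketch, not part of any cited source:

```python
import html
import unicodedata

# Print the official Unicode name of each of the three related symbols.
for ch in ("\u22A5", "\u27C2", "\u2AEB"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
# The first line printed is: U+22A5  ⊥  UP TACK

# Both HTML entities resolve to the same historical code point U+22A5.
print(html.unescape("&perp;") == html.unescape("&bot;"))  # True
```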
|
https://en.wikipedia.org/wiki/Up_tack
|
An upgrader is a facility that upgrades bitumen (extra heavy oil) into synthetic crude oil. Upgrader plants are typically located close to oil sands production, for example, the Athabasca oil sands in Alberta , Canada or the Orinoco tar sands in Venezuela .
Upgrading means using fractional distillation and/or chemical treatment to convert bitumen so that it can be handled by oil refineries. At a minimum, this means reducing its viscosity so that it can be pumped through pipelines (bitumen is roughly 1,000 times more viscous than light crude oil). However, this process often also includes separating out heavy fractions and reducing sulfur, nitrogen, and metals such as nickel and vanadium.
Upgrading may involve multiple processes, typically including distillation and conversion steps such as coking, hydrocracking, and hydrotreating.
Research into using biotechnology to perform some of these processes at lower temperatures and cost is ongoing.
|
https://en.wikipedia.org/wiki/Upgrader
|
The Upjohn Company was an American pharmaceutical manufacturing firm founded in 1886 in Kalamazoo, Michigan , by Dr. William E. Upjohn , an 1875 graduate of the University of Michigan medical school. The company was originally formed to make friable pills , specifically designed to crush easily, and thus be easier for patients to digest. Upjohn initially marketed the pills to doctors by sending them a wooden plank along with a rival's pill and one of Upjohn's, with instructions to try to hammer the pills into the plank. [ 1 ]
Upjohn developed a process for the large-scale production of cortisone . An oxygen-bearing group must be in position 11 for this steroid to function. There are, however, no known natural starting materials with an oxo-group in position 11. The only method for preparing cortisone prior to 1952 was a lengthy synthesis starting from cholic acid isolated from bile. In 1952, two Upjohn biochemists, Dury Peterson and Herb Murray, announced that they had invented a new method by fermenting the steroid progesterone with a common mold of the genus Rhizopus . Over the next several years, a group of chemists headed by John Hogg developed a process for preparing cortisone from the soybean sterol stigmasterol . The microbiological oxygenation invented by Peterson and Murray is a key step in this process. [ 2 ]
Subsequently, Upjohn (together with Schering ) biochemically converted cortisone into the more potent steroid prednisone via bacterial fermentation. [ 3 ] In chemical research , the company is known for the development of the Upjohn dihydroxylation by V. VanRheenen, R. C. Kelly, and D. Y. Cha in 1976. [ 4 ] Upjohn's best known drugs before its acquisition by Pfizer were Xanax , Halcion , Motrin , Lincocin , and Rogaine . [ citation needed ] [ when? ]
In 1995, Upjohn merged with Pharmacia AB to form Pharmacia & Upjohn . [ 5 ] The company was owned by Pfizer from 2002 until 2020.
In 2015, Pfizer resurrected the Upjohn brand name for a division which manufactures and licenses drugs with patents that have expired. As of 2019, Pfizer planned to divest itself of this business in 2020. [ 6 ]
In July 2019, Pfizer announced plans to merge Upjohn with Mylan . [ 7 ] The merger was expected to close in the first half of 2020, was delayed due to the COVID-19 pandemic , [ 8 ] and finally completed in November 2020. The resultant entity was named Viatris . [ 9 ]
|
https://en.wikipedia.org/wiki/Upjohn
|
The Upjohn dihydroxylation is an organic reaction which converts an alkene to a cis vicinal diol . It was developed by V. VanRheenen, R. C. Kelly and D. Y. Cha of the Upjohn Company in 1976. [ 1 ] It is a catalytic system that uses N -methylmorpholine N -oxide (NMO) as the stoichiometric re-oxidant for osmium tetroxide , and it is superior to previous catalytic methods.
Prior to this method, use of stoichiometric amounts of the toxic and expensive reagent osmium tetroxide was often necessary. The Upjohn dihydroxylation is still often used for the formation of cis -vicinal diols; however, it can be slow and is prone to ketone byproduct formation. One of the peculiarities of the dihydroxylation of olefins is that the standard "racemic" method (the Upjohn dihydroxylation) is slower and often lower yielding than the asymmetric method (the Sharpless asymmetric dihydroxylation ).
In response to these problems, Stuart Warren and co-workers [ 2 ] employed similar reaction conditions to the Sharpless asymmetric dihydroxylation , but replacing the chiral ligands with the achiral quinuclidine to give a racemic reaction product (assuming an achiral starting material is employed). This approach takes advantage of the fact that when using the Sharpless alkaloid ligands, the dihydroxylation of alkenes is faster and higher yielding than in their absence. This phenomenon became known as "ligand accelerated catalysis", a term coined by Barry Sharpless during the development of his asymmetric protocol.
|
https://en.wikipedia.org/wiki/Upjohn_dihydroxylation
|
In science fiction , uplift is intervention in the evolution of low-intelligence or even non-sentient species in order to increase their intelligence. [ 1 ] This is usually accomplished by cultural, technological, or evolutionary interventions such as genetic engineering . The earliest appearance of the concept is in H. G. Wells 's 1896 novel The Island of Doctor Moreau . [ 2 ] The term was popularized by David Brin in his Uplift series in the 1980s. [ 3 ]
The concept of uplift can be traced to H. G. Wells 's 1896 novel The Island of Doctor Moreau , in which the titular scientist transforms animals into horrifying parodies of humans through surgery and psychological torment. The resulting animal-people obsessively recite the Law, a series of prohibitions against a reversion to animal behaviors, with the haunting refrain of "Are we not men?". Wells's novel reflects Victorian concerns about vivisection and about the power of unrestrained scientific experimentation to do terrible harm.
Other early literary examples can be found in the following works:
David Brin has stated that his Uplift Universe was written at least in part in response to the common assumption in earlier science fiction such as Smith's work and Planet of the Apes that uplifted animals would, or even should, be treated as possessions rather than people. [ 4 ] As a result, a significant part of the conflict in the series revolves around the differing policies of Galactics and humans toward their client races. Galactic races traditionally hold their uplifted "clients" in a hundred-millennium-long indenture , during which the "patrons" have extensive rights and claims over clients' lives and labor power. In contrast, humans have given their uplifted dolphins and chimpanzees near-equal civil rights , with a few legal and economic disabilities related to their unfinished state. A key scene in Startide Rising is a discussion between a self-aware computer (the Niss) and a leading human (Gillian) about how the events during their venture (and hence the novel's plot) relate to the morality of the Galactics' system of uplift.
Some commentators, such as M. Keith Booker [ de ] , have argued that some pieces of literature have used uplift as an allegory for the white man's burden and colonialism . Booker singles out Robert Silverberg 's Downward to the Earth as a novel that mirrors Joseph Conrad 's Heart of Darkness in a science-fiction setting. [ 5 ] Other authors, by contrast, have used uplift as a narrative foil to colonialism, presenting uplift not only as benevolent but as a virtuous reversal of colonial attitudes. [ 5 ]
|
https://en.wikipedia.org/wiki/Uplift_(science_fiction)
|
In continuum mechanics , including fluid dynamics , an upper-convected time derivative or Oldroyd derivative , named after James G. Oldroyd , is the rate of change of some tensor property of a small parcel of fluid that is written in the coordinate system rotating and stretching with the fluid.
The operator is specified by the following formula:

A^∇ = DA/Dt − (∇v)ᵀ · A − A · (∇v)

where:

A^∇ is the upper-convected time derivative of a tensor field A,
D/Dt is the material derivative,
∇v is the tensor of velocity derivatives of the fluid, with components (∇v)_ij = ∂v_j/∂x_i (see below).
The formula can be rewritten in components as:

A^∇_ij = ∂A_ij/∂t + v_k ∂A_ij/∂x_k − (∂v_i/∂x_k) A_kj − A_ik (∂v_j/∂x_k).
By definition, the upper-convected time derivative of the Finger tensor is always zero.
It can be shown that the upper-convected time derivative of a spacelike vector field is just its Lie derivative by the velocity field of the continuum. [ 1 ]
The upper-convected derivative is widely used in polymer rheology for the description of the behavior of a viscoelastic fluid under large deformations.
The form in which the equation is written is not entirely standardized, due to differing definitions for ∇v . This term can be found defined as (∇v)_ij = ∂v_j/∂x_i or as its transpose (for example, see Strain-rate tensor , which contains both). Changing the definition only necessitates changes in transpose operations and is thus largely inconsequential, as long as one stays consistent. The notation used here is picked to be consistent with the literature using the upper-convected derivative.
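The sign and transpose conventions are easy to verify numerically. The following sketch is an illustration added here (not taken from the literature): it evaluates the upper-convected rate of a spatially uniform, steady tensor field (so that DA/Dt = 0) in simple shear, using the (∇v)_ij = ∂v_j/∂x_i convention adopted above.

```python
import numpy as np

def upper_convected_rate(A, grad_v, material_derivative=None):
    """Upper-convected rate DA/Dt - (grad v)^T A - A (grad v),
    with grad_v[i, j] = dv_j/dx_i as in the text."""
    DADt = np.zeros_like(A) if material_derivative is None else material_derivative
    return DADt - grad_v.T @ A - A @ grad_v

# Steady simple shear v = (gamma_dot * y, 0, 0):
# the only nonzero entry is dv_x/dy, stored at row i=y, column j=x.
gamma_dot = 1.0
grad_v = np.zeros((3, 3))
grad_v[1, 0] = gamma_dot

A = np.eye(3)  # a uniform, steady tensor field, so DA/Dt = 0
print(upper_convected_rate(A, grad_v))
# Only the xy and yx entries are nonzero (both equal -gamma_dot),
# i.e. minus twice the rate-of-strain tensor of the shear flow.
```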
For the case of simple shear, with velocity field v = (γ̇y, 0, 0), the only nonzero component of the velocity gradient is (∇v)_yx = ∂v_x/∂y = γ̇. Thus, the upper-convected rate of a tensor follows by substituting this gradient into the formula above.
In this case a material is stretched in the direction X and compressed in the directions Y and Z so as to keep the volume constant. The gradients of velocity are those of uniaxial extension, v = (ε̇x, −(ε̇/2)y, −(ε̇/2)z), so that ∇v = diag(ε̇, −ε̇/2, −ε̇/2). Thus, the upper-convected rate again follows by substitution into the defining formula.
|
https://en.wikipedia.org/wiki/Upper-convected_time_derivative
|
In mathematics , the upper half-plane , H , is the set of points ( x , y ) in the Cartesian plane with y > 0. The lower half-plane is the set of points ( x , y ) with y < 0 instead. Arbitrary oriented half-planes can be obtained via a planar rotation . Half-planes are an example of two-dimensional half-space . A half-plane can be split in two quadrants .
The affine transformations of the upper half-plane include shifts ( x , y ) ↦ ( x + c , y ), c ∈ ℝ, and dilations ( x , y ) ↦ ( λx , λy ), λ > 0.
Proposition: Let A and B be semicircles in the upper half-plane with centers on the boundary. Then there is an affine mapping that takes A to B .

Proof: First shift the center of A to (0, 0), then take λ = (diameter of B )/(diameter of A ) and dilate. Then shift (0, 0) to the center of B .
Definition: Z := { ( cos²θ , ½ sin 2θ ) : 0 < θ < π }.

Z can be recognized as the circle of radius ½ centered at ( ½ , 0 ), and as the polar plot of ρ(θ) = cos θ .

Proposition: (0, 0), ρ(θ) in Z , and (1, tan θ ) are collinear points .

In fact, Z is the inversion of the line { (1, y ) : y > 0 } in the unit circle . Indeed, the diagonal from (0, 0) to (1, tan θ ) has squared length 1 + tan²θ = sec²θ , so that ρ(θ) = cos θ is the reciprocal of that length.
The distance between any two points p and q in the upper half-plane can be consistently defined as follows: The perpendicular bisector of the segment from p to q either intersects the boundary or is parallel to it. In the latter case p and q lie on a ray perpendicular to the boundary and logarithmic measure can be used to define a distance that is invariant under dilation. In the former case p and q lie on a circle centered at the intersection of their perpendicular bisector and the boundary. By the above proposition this circle can be moved by affine motion to Z . Distances on Z can be defined using the correspondence with points on { (1, y ) : y > 0 } and logarithmic measure on this ray. In consequence, the upper half-plane becomes a metric space . The generic name of this metric space is the hyperbolic plane . In terms of the models of hyperbolic geometry , this model is frequently designated the Poincaré half-plane model .
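For computation, the resulting metric has a well-known closed form, d(p, q) = arcosh(1 + ((x₂ − x₁)² + (y₂ − y₁)²) / (2 y₁ y₂)). The sketch below is an illustration added here (the function name is ours); it evaluates the formula and checks two properties stated above: logarithmic measure on vertical rays and invariance under dilation.

```python
import math

def hyperbolic_distance(p, q):
    """Poincare half-plane distance between p = (x1, y1) and q = (x2, y2),
    both with positive second coordinate."""
    (x1, y1), (x2, y2) = p, q
    return math.acosh(1 + ((x2 - x1) ** 2 + (y2 - y1) ** 2) / (2 * y1 * y2))

# On a vertical ray the distance is logarithmic: d((0,1), (0,e)) = 1.
print(hyperbolic_distance((0.0, 1.0), (0.0, math.e)))
# Dilations (x, y) -> (lam*x, lam*y) are isometries:
print(hyperbolic_distance((1.0, 1.0), (3.0, 2.0)),
      hyperbolic_distance((2.0, 2.0), (6.0, 4.0)))  # equal values
```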
Mathematicians sometimes identify the Cartesian plane with the complex plane , and then the upper half-plane corresponds to the set of complex numbers with positive imaginary part : H := { x + iy : y > 0 }.
The term arises from a common visualization of the complex number x + iy as the point ( x , y ) in the plane endowed with Cartesian coordinates . When the y axis is oriented vertically, the "upper half-plane " corresponds to the region above the x axis and thus to complex numbers for which y > 0.

It is the domain of many functions of interest in complex analysis , especially modular forms . The lower half-plane, defined by y < 0, is equally good, but less used by convention. The open unit disk D (the set of all complex numbers of absolute value less than one) is equivalent by a conformal mapping to H (see " Poincaré metric "), meaning that it is usually possible to pass between H and D .
It also plays an important role in hyperbolic geometry , where the Poincaré half-plane model provides a way of examining hyperbolic motions . The Poincaré metric provides a hyperbolic metric on the space.
The uniformization theorem for surfaces states that the upper half-plane is the universal covering space of surfaces with constant negative Gaussian curvature .
The closed upper half-plane is the union of the upper half-plane and the real axis. It is the closure of the upper half-plane.
One natural generalization in differential geometry is hyperbolic n -space H^n , the maximally symmetric, simply connected , n -dimensional Riemannian manifold with constant sectional curvature −1. In this terminology, the upper half-plane is H² since it has real dimension 2.

In number theory , the theory of Hilbert modular forms is concerned with the study of certain functions on the direct product H^n of n copies of the upper half-plane. Yet another space interesting to number theorists is the Siegel upper half-space H_n , which is the domain of Siegel modular forms .
|
https://en.wikipedia.org/wiki/Upper_half-plane
|
In mathematics , an upper set (also called an upward closed set , an upset , or an isotone set in X ) [ 1 ] of a partially ordered set ( X , ≤) is a subset S ⊆ X with the following property: if s is in S and if x in X is larger than s (that is, if s < x ), then x is in S . In other words, any element x of X that is ≥ some element of S is necessarily also an element of S .
The term lower set (also called a downward closed set , down set , decreasing set , initial segment , or semi-ideal ) is defined similarly as being a subset S of X with the property that any element x of X that is ≤ some element of S is necessarily also an element of S .
Let ( X , ≤) be a preordered set .
An upper set in X (also called an upward closed set , an upset , or an isotone set ) [ 1 ] is a subset U ⊆ X that is "closed under going up", in the sense that if u ∈ U and u ≤ x , then x ∈ U .
The dual notion is a lower set (also called a downward closed set , down set , decreasing set , initial segment , or semi-ideal ), which is a subset L ⊆ X that is "closed under going down", in the sense that if l ∈ L and x ≤ l , then x ∈ L .
The terms order ideal or ideal are sometimes used as synonyms for lower set. [ 2 ] [ 3 ] [ 4 ] This choice of terminology fails to reflect the notion of an ideal of a lattice because a lower set of a lattice is not necessarily a sublattice. [ 2 ]
Given an element x of a partially ordered set ( X , ≤), the upper closure or upward closure of x , denoted by x^↑X , x^↑ , or ↑ x , is defined by ↑ x = { u ∈ X : x ≤ u }, while the lower closure or downward closure of x , denoted by x^↓X , x^↓ , or ↓ x , is defined by ↓ x = { l ∈ X : l ≤ x }.
The sets ↑ x and ↓ x are, respectively, the smallest upper and lower sets containing x as an element.
More generally, given a subset A ⊆ X , define the upper / upward closure and the lower / downward closure of A , denoted by A^↑X and A^↓X respectively, as A^↑ = ⋃_{a ∈ A} ↑ a and A^↓ = ⋃_{a ∈ A} ↓ a .
In this way, ↑ x = ↑{ x } and ↓ x = ↓{ x }, where upper sets and lower sets of this form are called principal . The upper closure and lower closure of a set are, respectively, the smallest upper set and lower set containing it.
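For a finite preorder these closures are directly computable. The sketch below is illustrative code added here (the helper name is ours); it forms the upper closure of a subset, using divisibility on {1, …, 12} as the example order, where a ≤ b means a divides b.

```python
from typing import Callable, Iterable, Set, TypeVar

T = TypeVar("T")

def upper_closure(A: Iterable[T], X: Iterable[T],
                  leq: Callable[[T, T], bool]) -> Set[T]:
    """Smallest upper set of (X, leq) containing A:
    every x in X with a <= x for some a in A."""
    A = list(A)
    return {x for x in X if any(leq(a, x) for a in A)}

X = range(1, 13)
divides = lambda a, b: b % a == 0   # a <= b  iff  a divides b
print(sorted(upper_closure({2, 3}, X, divides)))
# [2, 3, 4, 6, 8, 9, 10, 12] -- all multiples of 2 or 3 up to 12
```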
The upper and lower closures, when viewed as functions from the power set of X to itself, are examples of closure operators since they satisfy all of the Kuratowski closure axioms . As a result, the upper closure of a set is equal to the intersection of all upper sets containing it, and similarly for lower sets. (Indeed, this is a general phenomenon of closure operators. For example, the topological closure of a set is the intersection of all closed sets containing it; the span of a set of vectors is the intersection of all subspaces containing it; the subgroup generated by a subset of a group is the intersection of all subgroups containing it; the ideal generated by a subset of a ring is the intersection of all ideals containing it; and so on.)
An ordinal number is usually identified with the set of all smaller ordinal numbers. Thus each ordinal number forms a lower set in the class of all ordinal numbers, which are totally ordered by set inclusion.
|
https://en.wikipedia.org/wiki/Upper_set
|
In mathematics , the upper topology on a partially ordered set X is the coarsest topology in which the closure of a singleton { a } is the order section ↓ a = { x : x ≤ a } for each a ∈ X . If ≤ is a partial order, the upper topology is the least order consistent topology in which all open sets are up-sets . However, not all up-sets must necessarily be open sets. The lower topology induced by the preorder is defined similarly in terms of the down-sets . The preorder inducing the upper topology is its specialization preorder , but the specialization preorder of the lower topology is opposite to the inducing preorder.
The real upper topology is most naturally defined on the upper-extended real line (−∞, +∞] = ℝ ∪ {+∞} by the system { ( a , +∞] : a ∈ ℝ ∪ {±∞} } of open sets. Similarly, the real lower topology { [−∞, a ) : a ∈ ℝ ∪ {±∞} } is naturally defined on the lower real line [−∞, +∞) = ℝ ∪ {−∞}. A real function on a topological space is upper semi-continuous if and only if it is lower-continuous, i.e. continuous with respect to the lower topology on the lower-extended line [−∞, +∞). Similarly, a function into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. continuous with respect to the upper topology on (−∞, +∞].
|
https://en.wikipedia.org/wiki/Upper_topology
|
In computer networking , upstream refers to the direction in which data can be transferred from the client to the server ( uploading ). This differs greatly from downstream not only in theory and usage, but also in that upstream speeds are usually at a premium. [ 1 ] Whereas downstream speed is important to the average home user for purposes of downloading content, uploads are used mainly for web server applications and similar processes where the sending of data is critical. Upstream speeds are also important to users of peer-to-peer software .
ADSL and cable modems are asymmetric , with an upstream data rate much lower than the downstream rate. Symmetric connections such as Symmetric Digital Subscriber Line (SDSL) and T1 , however, offer identical upstream and downstream rates.
If a node A on the Internet is closer (fewer hops away) to the Internet backbone than a node B, then A is said to be upstream of B or conversely, B is downstream of A. Related to this is the idea of upstream providers . An upstream provider is usually a large ISP that provides Internet access to a local ISP. Hence, the word upstream also refers to the data connection between two ISPs.
|
https://en.wikipedia.org/wiki/Upstream_(networking)
|
In software development , when software has been forked or uses a chain of libraries / dependencies , upstream refers to the direction toward the original authors or maintainers of the software , and to issues that occur in that part of the chain. The term is usually used in the context of a version, a bug , or a patch .
Upstream development allows other distributions to benefit from it when they pick up the future release or merge recent (or all) upstream patches. [ 1 ] Likewise, the original authors (maintaining upstream) can benefit from contributions that originate from custom distributions, if their users send patches upstream.
The term also pertains to bugs; responsibility for a bug is said to lie upstream when it is not caused through the distribution's porting , non-upstream modification or integration efforts.
|
https://en.wikipedia.org/wiki/Upstream_(software_development)
|
An upstream activating sequence or upstream activation sequence (UAS) is a cis-acting regulatory sequence found in yeasts such as Saccharomyces cerevisiae . It is distinct from the promoter and increases the expression of a neighbouring gene . Due to its essential role in activating transcription, the upstream activating sequence is often considered to be analogous in function to the enhancer in multicellular eukaryotes. [ 1 ] Upstream activation sequences are a crucial part of induction, enhancing the expression of the protein of interest through increased transcriptional activity. [ 2 ] The upstream activation sequence is found adjacent and upstream to a minimal promoter ( TATA box ) and serves as a binding site for transactivators . If the transcriptional transactivator does not bind to the UAS in the proper orientation then transcription cannot begin. [ 3 ] To further understand the function of an upstream activation sequence, it is helpful to see its role in the cascade of events that leads to transcription activation. The pathway begins when activators bind to their target at the UAS, recruiting a mediator . A TATA-binding protein subunit of a transcription factor then binds to the TATA box, recruiting additional transcription factors. The mediator then recruits RNA polymerase II to the pre-initiation complex. Once initiated, RNA polymerase II is released from the complex and transcription begins. [ 4 ]
The property of the GAL1-GAL10 UAS to bind the GAL4 protein is utilised in the GAL4/UAS technique for controlled gene mis-expression in Drosophila. This is the most popular form of binary expression in Drosophila melanogaster , a system which has been adapted for many uses to make Drosophila melanogaster one of the most genetically tractable multicellular organisms. [ 5 ] In this technique, four related binding sites between the GAL10 and GAL1 loci in Saccharomyces cerevisiae serve as an upstream activating sequence (UAS) element through GAL4 binding. [ 6 ] Several studies have been conducted with Saccharomyces cerevisiae to explore the exact function of upstream activation sequences, often focusing on the aforementioned GAL1-GAL10 intergenic region. [ 7 ] The consensus is 5′-CGG-N 11 -CCG-3′. [ 8 ]
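Because the consensus is a fixed pattern with an 11-base spacer, scanning a sequence for candidate sites reduces to a short regular expression. The following sketch is an illustration added here: the toy sequence is invented, and a real scan would also check the reverse complement.

```python
import re

# The 5'-CGG-N11-CCG-3' consensus for the GAL4-binding UAS.
UAS_CONSENSUS = re.compile(r"CGG[ACGT]{11}CCG")

def find_uas_sites(seq: str):
    """Return (0-based start, matched site) for each hit on the given strand."""
    return [(m.start(), m.group()) for m in UAS_CONSENSUS.finditer(seq.upper())]

toy = "TTACGGAGGACTAGTCACCGATT"   # hypothetical sequence with one site
print(find_uas_sites(toy))        # [(3, 'CGGAGGACTAGTCACCG')]
```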
One study explored the galactose-responsive upstream activation sequence (UAS G ), looking at the influence of proximity to this UAS on nucleosome positioning. Proximity to the UAS was chosen because deletions of DNA flanking the UAS left the nucleosome array unaltered, indicating that nucleosome positioning was not related to sequence-specific histone-DNA interactions. The role of specific regions of UAS G was analyzed by inserting oligonucleotides with different binding properties, leading to the successful identification of a region responsible for the creation of an ordered array. The sequence identified overlapped a binding site for the GAL4 protein, a positive regulator of transcription, which coincides with the function of upstream activating sequences. [ 9 ]
Another study looked at the effect of inserting the UAS G into the promoter region of the glyceraldehyde-3-phosphate dehydrogenase gene (GPD). This hybrid promoter was then utilized to express human immune interferon, a substance toxic to yeast that results in a reduced copy number and low plasmid stability. Relative to the native promoter, expression from the hybrid promoter was induced roughly 150- to 200-fold in cultures grown on galactose, induction that was not apparent with glucose as the carbon source. When compared to the native GPD promoter, the presence of UAS G caused the transcriptional activity to remain equivalently enhanced under induced conditions. [ 10 ]
The inositol-sensitive upstream activation sequence (UAS INO ) has the consensus sequence 5'-CATGTGAAAT-3' and is present in the promoter regions of genes that encode enzymes of phospholipid biosynthesis. These enzymes are regulated by inositol and choline, both of which are phospholipid precursors. Within this consensus sequence, the first six bases are homologous with the canonical binding motif for proteins of the bHLH (basic helix-loop-helix) family. Studies have shown that Ino2p and Ino4p, two bHLH regulatory proteins from Saccharomyces cerevisiae , bind to promoter fragments containing this element of the consensus sequence. Additional studies have been designed to explore the function of UAS INO in more detail, in large part because a large number of phospholipid biosynthetic enzyme activities in the model organism Saccharomyces cerevisiae show this common pattern of expression. [ 11 ]
One study explored the interaction between Ino4p and Ino2p in more depth, examining the dimerization that takes place between the two prior to binding to the promoter of the INO 1 gene and activating transcription. By isolating 31 recessive suppressors of the ino4-8 mutant of yeast and determining that 29 were of the same locus, the researchers identified the locus as REG1 . One allele of REG1 , the suppressor mutant sia1-1 , was capable of suppressing the inositol auxotrophy, revealing a possible pathway for the repression of inositol-sensitive upstream activating sequence-containing genes of yeast. [ 12 ]
|
https://en.wikipedia.org/wiki/Upstream_activating_sequence
|
In molecular biology and genetics , upstream and downstream both refer to relative positions of genetic code in DNA or RNA . Each strand of DNA or RNA has a 5' end and a 3' end , so named for the carbon position on the deoxyribose (or ribose ) ring. By convention, upstream and downstream relate to the 5' to 3' direction in which RNA transcription takes place. [ 1 ] Upstream is toward the 5' end of the RNA molecule, and downstream is toward the 3' end. When considering double-stranded DNA, upstream is toward the 5' end of the coding strand for the gene in question and downstream is toward the 3' end. Due to the anti-parallel nature of DNA, this means the 3' end of the template strand is upstream of the gene and the 5' end is downstream.
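As a concrete illustration (added here; the helper names and the 20-nt sequence are hypothetical), the snippet below extracts the region immediately upstream of a gene annotated on the coding strand, and produces the anti-parallel template strand via reverse complementation:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def upstream_region(coding_strand: str, gene_start: int, length: int) -> str:
    """Sequence immediately 5' (upstream) of a gene on the coding strand."""
    return coding_strand[max(0, gene_start - length):gene_start]

def reverse_complement(seq: str) -> str:
    """The anti-parallel template strand, read 5' to 3'."""
    return seq.translate(COMPLEMENT)[::-1]

dna = "TTGACAGCATGGCCATTGCA"       # coding strand; gene body starts at index 8
print(upstream_region(dna, 8, 5))  # ACAGC
print(reverse_complement(dna))     # TGCAATGGCCATGCTGTCAA
```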
Some genes on the same DNA molecule may be transcribed in opposite directions. This means the upstream and downstream areas of the molecule may change depending on which gene is used as the reference.
The terms upstream and downstream are sometimes also applied to a polypeptide sequence, where upstream refers to a region N-terminal and downstream to residues C-terminal of a reference point.
|
https://en.wikipedia.org/wiki/Upstream_and_downstream_(DNA)
|
The upstream signaling pathway is triggered by the binding of a signaling molecule, a ligand , to a receiving molecule, a receptor . Receptors and ligands exist in many different forms, and only recognize/bond to particular molecules. Upstream extracellular signals transduce a variety of intracellular cascades. [ 1 ]
Receptors and ligands are common upstream signaling molecules that dictate the downstream elements of the signal pathway. A plethora of different factors affect which ligands bind to which receptors and the downstream cellular response that they initiate.
A canonical example is the binding of TGF-β ligands to the extracellular type II and type I kinase receptors. Transforming growth factor-β (TGF-β) is a superfamily of cytokines that play a significant upstream role in regulating morphogenesis , homeostasis , cell proliferation, and differentiation. [ 2 ] The significance of TGF-β is apparent from the human diseases that occur when TGF-β processes are disrupted, such as cancer, and skeletal, intestinal and cardiovascular diseases. [ 3 ] [ 4 ] TGF-β is pleiotropic and multifunctional, meaning it is able to act on a wide variety of cell types. [ 5 ]
The effects of transforming growth factor-β (TGF-β) are determined by cellular context. There are three kinds of contextual factors that shape the TGF-β response: the signal transduction components, the transcriptional cofactors and the epigenetic state of the cell. The particular TGF-β ligands and receptors involved are significant as well in the composition of the signal transduction pathway. [ 2 ]
The type II receptors phosphorylate the type I receptors; the type I receptors are then enabled to phosphorylate cytoplasmic R-Smads, which then act as transcriptional regulators. [ 6 ] [ 2 ] Signaling is initiated by the binding of TGF-β to its serine/threonine receptors, the type II and type I receptors on the cell membrane. Binding of a TGF-β family member induces assembly of a heterotetrameric complex of two type I and two type II receptors at the plasma membrane . [ 6 ] Individual members of the TGF-β family bind to a characteristic combination of these type I and type II receptors. [ 7 ] The type I receptors can be divided into two groups, depending on the cytoplasmic R-Smads that they bind and phosphorylate. The first group of type I receptors (Alk1/2/3/6) binds and activates the R-Smads Smad1/5/8. The second group of type I receptors (Alk4/5/7) acts on the R-Smads Smad2/3. The phosphorylated R-Smads then form complexes, and the signals are funneled through two regulatory Smad (R-Smad) channels (Smad1/5/8 or Smad2/3). [ 6 ] [ 2 ] After the ligand-receptor complexes phosphorylate the cytoplasmic R-Smads, the signal is sent through Smad1/5/8 or Smad2/3. This leads to the downstream signal cascade and cellular gene targeting. [ 6 ] [ 5 ]
TGF-β regulates multiple downstream processes and cellular functions, and the pathway is highly variable based on cellular context. The TGF-β downstream signaling cascade includes regulation of cell growth, cell proliferation , cell differentiation , and apoptosis . [ 8 ]
|
https://en.wikipedia.org/wiki/Upstream_and_downstream_(transduction)
|
Upstream contamination by floating particles is a counterintuitive phenomenon in fluid dynamics . When pouring water from a higher container to a lower one, particles floating in the latter can climb upstream into the upper container. A definitive explanation is still lacking: experimental and computational evidence indicates that the contamination is chiefly driven by surface tension gradients; however, the phenomenon is also affected by the dynamics of swirling flows that remain to be fully investigated.
The phenomenon was observed in 2008 by the Argentine Sebastian Bianchini during mate tea preparation, while he was studying physics at the University of Havana .

It rapidly attracted the interest of professor Alejandro Lage-Castellanos, who performed, with Bianchini, a series of controlled experiments. Later on professor Ernesto Altshuler completed the trio in Havana , which resulted in Bianchini's Diploma thesis and a short original paper posted on the arXiv [ 1 ] and mentioned as a surprising fact in some online journals. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Bianchini's Diploma thesis showed that the phenomenon could be reproduced in a controlled laboratory setting using mate leaves or chalk powder as contaminants, and that temperature gradients (hot in the top, cold in the bottom) were not necessary to generate the effect. The research also showed that surface tension was key to the explanation through the Marangoni effect . This was suggested by two facts: (a) both mate and chalk lowered the surface tension of water, and (b) if an industrial surfactant was added on the upper reservoir, the upstream motion of particles would stop.
This interpretation was challenged in 2024 by a claim that, under certain conditions, the phenomenon occurs even without the Marangoni effect : particles moved upstream even when the surface tension of the fluid in the lower container was increased by the addition of calcium chloride . [ 6 ]
After a talk by Lage-Castellanos at the First Workshop on Complex Matter Physics in Havana (MarchCOMeeting'2012), professor Troy Shinbrot of Rutgers University became interested in the subject. Together with student Theo Siu, Cuban results were confirmed and expanded with new experiments and numerical simulations at Rutgers, which resulted in a joint peer-reviewed paper. [ 7 ]
|
https://en.wikipedia.org/wiki/Upstream_contamination
|
An upstream open reading frame ( uORF ) is an open reading frame (ORF) within the 5' untranslated region (5'UTR) of an mRNA . uORFs can regulate eukaryotic gene expression . [ 1 ] [ 2 ] Translation of the uORF typically inhibits downstream expression of the primary ORF. However, in some genes such as yeast GCN4, translation of specific uORFs may increase translation of the main ORF. [ 3 ]
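A minimal uORF scan follows directly from the definition: find an ATG in the 5'UTR with an in-frame stop codon before the leader ends. The sketch below is an illustration added here (DNA alphabet, coding-strand sequence; the example leader is invented):

```python
import re

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_uorfs(utr5: str):
    """Candidate uORFs in a 5'UTR as 0-based [start, end) intervals:
    an ATG followed by an in-frame stop codon within the leader."""
    utr5 = utr5.upper()
    uorfs = []
    for m in re.finditer("ATG", utr5):
        start = m.start()
        for i in range(start + 3, len(utr5) - 2, 3):
            if utr5[i:i + 3] in STOP_CODONS:
                uorfs.append((start, i + 3))
                break
    return uorfs

print(find_uorfs("GCCATGGCTTAAACGGGTCCCGGGTATATA"))  # [(3, 12)]
```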
Approximately 50% of human genes contain uORFs in their 5'UTR, and when present, these cause reductions in protein expression. [ 4 ] Human peptides derived from translated uORFs can be detected from cellular material with a mass spectrometer . [ 5 ] uORFs were found in two thirds of proto-oncogenes and related proteins. [ 6 ]
In bacteria , uORFs are called leader peptides and were originally discovered on the basis of their impact on the regulation of genes involved in the synthesis or transport of amino acids .
|
https://en.wikipedia.org/wiki/Upstream_open_reading_frame
|
Uptime is a measure of system reliability, expressed as the period of time a machine , typically a computer , has been continuously working and available. Uptime is the opposite of downtime .
It is often used as a measure of computer operating system reliability or stability, in that this time represents the time a computer can be left unattended without crashing or needing to be rebooted for administrative or maintenance purposes.
Conversely, long uptime may indicate negligence, because some critical updates can require reboots on some platforms. [ 1 ]
In 2005, Novell reported a server with a 6-year uptime. [ 2 ] [ 3 ] This level of uptime is common for servers maintained in an industrial context that host critical applications such as banking systems.
Netcraft maintains the uptime records for many thousands of web hosting computers.
A server running Novell NetWare has been reported to have been shut down after 16 years of uptime due to a failing hard disk. [ 4 ] [ 5 ]
A Cisco router had been reported to have been running continuously for 21 years as of 2018. [ 6 ] As of April 11, 2023, the uptime had increased to 26 years, 25 weeks, 1 day, 1 hour, and 8 minutes until the router was later decommissioned and the final report of the uptime was 26 years, 28 weeks, 2 days, and 6 minutes. [ 7 ] [ 8 ]
Some versions of Microsoft Windows include an uptime field in Windows Task Manager , under the "Performance" tab. The format is D:HH:MM:SS (days, hours, minutes, seconds).
The output of the systeminfo command includes a "System Up Time" [ 9 ] or "System Boot Time" field.
The exact text and format are dependent on the language and locale. The time given by systeminfo is not reliable. It does not take into account time spent in sleep or hibernation . Thus, the boot time will drift forward every time the computer sleeps or hibernates.
The NET command with its STATISTICS sub-command provides the date and time the computer started, for both the NET STATISTICS WORKSTATION and NET STATISTICS SERVER variants. The command NET STATS SRV is shorthand for NET STATISTICS SERVER . [ 10 ] The exact text and date format is dependent on the configured language and locale.
Uptime can be determined via Windows Management Instrumentation (WMI), by querying the LastBootUpTime property of the Win32_OperatingSystem class. [ 11 ] At the command prompt , this can be done using the wmic command:
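A representative invocation is shown below; the output line is reconstructed here to match the example discussed next, and the fractional seconds and time-zone suffix are illustrative:

```
C:\> wmic os get lastbootuptime
LastBootUpTime
20110508161751.822108+060
```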
The timestamp uses the format yyyymmddhhmmss.nnn , so in the above example, the computer last booted up on 8 May 2011 at 16:17:51.822. The text "LastBootUpTime" and the timestamp format do not vary with language or locale. WMI can also be queried using a variety of application programming interfaces , including VBScript or PowerShell . [ 12 ] [ 13 ]
Microsoft formerly provided a downloadable utility called Uptime.exe , which reports elapsed time in days, hours, minutes, and seconds. [ 14 ]
The time given by Uptime.exe is not reliable. It does not take into account time spent in sleep or hibernation . Thus, the boot time will drift forward every time the computer sleeps or hibernates.
The uptime command is also available for FreeDOS . The version was developed by M. Aitchison. [ 15 ]
Users of Linux systems can use the BSD uptime utility, which also displays the system load averages for the past 1, 5, and 15-minute intervals:
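Representative output (the figures shown are illustrative):

```
$ uptime
 18:17:07 up 68 days,  3:57,  6 users,  load average: 0.16, 0.07, 0.06
```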
On Linux, reading the pseudo-file /proc/uptime shows how long the system has been on since it was last restarted:
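A representative read (the values are illustrative):

```
$ cat /proc/uptime
350735.47 234388.90
```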
The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds. [ 16 ] On multi-core systems (and some Linux versions) the second number is the sum of the idle time accumulated by each CPU. [ 17 ]
BSD -based operating systems such as FreeBSD and Mac OS X , as well as SysVr4 systems, have the uptime command (See uptime(1) – FreeBSD General Commands Manual ).
The uptime program on BSD is a hard link to the w program. [ 18 ] The w program is based on the RSTS/E , TOPS-10 , and TOPS-20 SYSTAT program. [ 19 ]
There is also a method of using sysctl to call the system's last boot time: [ 20 ]
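A representative query (the boot timestamp shown is illustrative):

```
$ sysctl kern.boottime
kern.boottime: { sec = 1271934886, usec = 667779 } Thu Apr 22 12:14:46 2010
```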
On OpenVMS systems, the show system command can be used at the DCL command prompt to obtain the system uptime. The first line of the resulting display includes the system's uptime, displayed as days followed by hours:minutes:seconds. In the following example, the command qualifier /noprocess suppresses the display of per-process detail lines of information. [ 21 ]
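The first line of output might look as follows; this sample is reconstructed here to match the figures quoted below, and the OpenVMS version string is illustrative:

```
$ show system /noprocess
OpenVMS V7.3-2  on node JACK  29-JAN-2008 16:32:04.67  Uptime  894 22:28:52
```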
The command output above shows that node JACK on 29 January 2008 at 16:32:04.67 has an uptime of 894 days 22 hours 28 minutes and 52 seconds.
|
https://en.wikipedia.org/wiki/Uptime
|
Uranium hydride , also called uranium trihydride (UH 3 ), is an inorganic compound and a hydride of uranium .
Uranium hydride is a brownish black pyrophoric powder. Its density at 20 °C is 10.95 g cm −3 , much lower than that of uranium (19.1 g cm −3 ). It has a metallic conductivity, is slightly soluble in hydrochloric acid and decomposes in nitric acid .
Two crystal modifications of uranium hydride exist, both cubic: an α form that is obtained at low temperatures and a β form that is grown when the formation temperature is above 250 °C. [ 5 ] After growth, both forms are metastable at room temperature and below, but the α form slowly converts to the β form upon heating to 100 °C. [ 3 ] Both α- and β-UH 3 are ferromagnetic at temperatures below ~180 K. Above 180 K, they are paramagnetic. [ 6 ]
Exposure of uranium metal to hydrogen at 250 °C gives the trihydride:

2 U + 3 H 2 → 2 UH 3
Bulk uranium metal crumbles into a fine powder during the course of the reaction. [ 7 ] [ 2 ]
The process is reminiscent of hydrogen embrittlement but uranium hydride is not an interstitial compound . Instead, according to X-ray crystallography , each uranium atom is surrounded by 12 atoms of hydrogen ( defect perovskite structure ). Each hydrogen atom occupies a large tetrahedral hole in the lattice. [ 8 ] The density of hydrogen in uranium hydride is approximately the same as in liquid water or in liquid hydrogen . [ 9 ] The U-H-U linkage through a hydrogen atom is present in the structure. [ 10 ]
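The water/liquid-hydrogen comparison is a short back-of-the-envelope calculation. The sketch below is an illustration added here; the only inputs besides the UH 3 density quoted above are standard handbook values:

```python
# Hydrogen mass density (g/cm^3) in beta-UH3, water, and liquid hydrogen.
M_U, M_H = 238.03, 1.008             # molar masses, g/mol
rho_UH3 = 10.95                      # density of UH3 given above, g/cm^3

w_H = 3 * M_H / (M_U + 3 * M_H)      # hydrogen mass fraction of UH3
print("H in UH3:   %.3f" % (rho_UH3 * w_H))            # ~0.137
print("H in water: %.3f" % (0.998 * 2 * M_H / 18.015)) # ~0.112
print("liquid H2:  %.3f" % 0.0708)                     # handbook value
```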
Uranium hydride forms when uranium metal (e.g. in Magnox fuel with corroded cladding ) becomes exposed to water or steam, with uranium dioxide as byproduct: [ 8 ]

7 U + 6 H 2 O → 3 UO 2 + 4 UH 3
The resulting uranium hydride is pyrophoric; if the metal (e.g. a damaged fuel rod ) is exposed to air afterwards, excessive heat may be generated and the bulk uranium metal itself can ignite. [ 11 ] Hydride-contaminated uranium can be passivated by exposure to a gaseous mixture of 98% helium with 2% oxygen . [ 12 ] Condensed moisture on uranium metal promotes formation of hydrogen and uranium hydride; a pyrophoric surface may be formed in absence of oxygen. [ 13 ] This poses a problem with underwater storage of very special spent nuclear fuel in spent fuel ponds (nuclear fuel from commercial nuclear plants does not contain any uranium metal). Depending on the size and distribution on the hydride particles, self-ignition can occur after an indeterminate length of exposure to air. [ 14 ] Such exposure poses risk of self-ignition of fuel debris in radioactive waste storage vaults. [ 15 ]
Uranium hydride exposed to water evolves hydrogen. In contact with strong oxidizers this may cause fire and explosions. Contact with halocarbons may cause a violent reaction. [ 16 ]
UH 3 releases hydrogen upon heating to near 400 °C. In this way bulk uranium can be transformed to a powder with high surface area. The resulting powder is extremely reactive toward H 2 even at -80 °C. [ 17 ]
Hydrogen, deuterium , and tritium can be purified by reacting with uranium, then thermally decomposing the resulting hydride/deuteride/tritide. [ 18 ] Extremely pure hydrogen has been prepared from beds of uranium hydride for decades. [ 19 ] Heating uranium hydride is a convenient way to introduce hydrogen into a vacuum system. [ 20 ] Uranium tritide (UT) is used for the safe and efficient storage of tritium, since gaseous tritium is harder to contain and work with. UT is formed by combining tritium and uranium at room temperature. The tritium can be later extracted by heating the UT. Tritium and its decay product 3 He are extracted at different temperatures. [ 21 ]
On heating with diborane , uranium hydride produces uranium boride . [ 22 ] With bromine at 300 °C, uranium(IV) bromide is produced. With chlorine at 250 °C, uranium(IV) chloride is produced. Hydrogen fluoride at 20 °C produces uranium(IV) fluoride . Hydrogen chloride at 300 °C produces uranium(III) chloride . Hydrogen bromide at 300 °C produces uranium(III) bromide . Hydrogen iodide at 300 °C produces uranium(III) iodide . Ammonia at 250 °C produces uranium(III) nitride . Hydrogen sulfide at 400 °C produces uranium(IV) sulfide . Oxygen at 20 °C produces triuranium octoxide . Water at 350 °C produces uranium dioxide . [ 23 ]
Polystyrene -impregnated uranium hydride powder is non-pyrophoric and can be pressed; however, its hydrogen-to-carbon ratio is unfavorable. Hydrogenated polystyrene was introduced in 1944 instead. [ 24 ]
Uranium hydride enriched to about 5% uranium-235 has been proposed as a combined nuclear fuel / neutron moderator for the Hydrogen Moderated Self-regulating Nuclear Power Module . According to the aforementioned patent application, the reactor design in question begins producing power when hydrogen gas at a sufficient temperature and pressure is admitted to the core (made up of granulated uranium metal) and reacts with the uranium metal to form uranium hydride. [ 25 ] Uranium hydride is both a nuclear fuel and a neutron moderator ; apparently it, like other neutron moderators, will slow neutrons sufficiently to allow for fission reactions to take place; the uranium-235 atoms within the hydride also serve as the nuclear fuel. Once the nuclear reaction has started, it will continue until it reaches a certain temperature, approximately 800 °C (1,500 °F), where, due to the chemical properties of uranium hydride, it chemically decomposes and turns into hydrogen gas and uranium metal. The loss of neutron moderation due to the chemical decomposition of the uranium hydride will consequently slow — and eventually halt — the reaction. When temperature returns to an acceptable level, the hydrogen will again combine with the uranium metal, forming uranium hydride, restoring moderation and the nuclear reaction will start again. [ 25 ]
Uranium hydride ion may interfere with some mass spectrometry measurements, appearing as a peak at mass 239, creating false increase of signal for plutonium-239. [ 26 ]
Uranium hydride slugs were used in the " tickling the dragon's tail " series of experiments to determine the critical mass of uranium. [ 27 ]
Uranium hydride and uranium deuteride were suggested as a fissile material for a uranium hydride bomb . The tests with uranium hydride and uranium deuteride during Operation Upshot–Knothole were disappointing, however. During the early phases of the Manhattan Project , in 1943, uranium hydride was investigated as a promising bomb material; it was abandoned by early 1944 as it turned out that such a design would be inefficient. [ 28 ]
|
https://en.wikipedia.org/wiki/Uranium(III)_hydride
|
The Uranium Information Centre (UIC) was an Australian organisation primarily concerned with increasing the public understanding of uranium mining and nuclear electricity generation .
Founded in 1978, the Centre worked for many years to provide information about the development of the Australian uranium industry, the contribution it can make to world energy supplies and the benefits it can bring to Australia. It was a broker of information on all aspects of the mining and processing of uranium , the nuclear fuel cycle , and the role of nuclear energy in helping to meet world electricity demand.
The Centre was funded by companies involved in uranium exploration, mining and export in Australia.
In 1995 Ian Hore-Lacy assumed the role of General Manager of the UIC, a position he held until 2001. The UIC's website was established in the year of his appointment. After leaving the UIC, Ian Hore-Lacy went on to work for the World Nuclear Association (WNA) as Director of Public Information for 12 years and as of 2015 he continues to work there as a Senior Research Analyst. In the late 2000s, the UIC's main information-providing function was assumed by the WNA and World Nuclear News (WNN), based in London, UK.
In 2008 the UIC's purely domestic function was taken over by the Australian Uranium Association , and was subsequently absorbed by the Minerals Council of Australia 's uranium portfolio in 2013.
|
https://en.wikipedia.org/wiki/Uranium_Information_Centre
|
The Uranium Medical Research Centre ( UMRC ) is an independent non-profit organization founded in 1997 to provide objective and expert scientific and medical research on the effects of uranium and of radionuclides produced by the process of radioactive decay and fission . UMRC is also a registered charity in the United States and Canada . The founder of UMRC, Asaf Durakovic , claimed on CNN [ 1 ] that: "Inhalation of uranium dust is harmful.... Even in the amount of one atom".
UMRC states on its website that its vision for the world "is a full awareness of the risks of using nuclear products and by-products AND to contain the still reversible alterations of the earth's biosphere since the advent of nuclear events and the resulting contamination".

It goes on to state: "There needs to be an appreciation of the enormous effects and damage of uranium on the environment and human health. Governments, scientific communities, and the general public need to understand the many forms of contamination and specific effects. Continued abuses of uranium and radioisotopes will only lead to the steady degradation and eventual end of meaningful life on earth." ( www.UMRC.net )
|
https://en.wikipedia.org/wiki/Uranium_Medical_Research_Centre
|
Uranium dioxide or uranium(IV) oxide ( UO 2 ) , also known as urania or uranous oxide , is an oxide of uranium , and is a black, radioactive , crystalline powder that naturally occurs in the mineral uraninite . It is used in nuclear fuel rods in nuclear reactors . A mixture of uranium and plutonium dioxides is used as MOX fuel . It has been used as an orange, yellow, green, and black color in ceramic glazes and glass .
Uranium dioxide is produced by reducing uranium trioxide with hydrogen :

UO 3 + H 2 → UO 2 + H 2 O

This reaction often creates triuranium octoxide as an intermediate. [ 3 ] [ 4 ] [ 5 ]
This reaction plays an important part in the creation of nuclear fuel through nuclear reprocessing and uranium enrichment . [ 5 ]
The solid is isostructural with (has the same structure as) fluorite ( calcium fluoride ), where each U is surrounded by eight O nearest neighbors in a cubic arrangement. In addition, the dioxides of cerium , thorium , and the transuranic elements from neptunium through californium have the same structures. [ 6 ] No other elemental dioxides have the fluorite structure. Upon melting, the measured average U-O coordination reduces from 8 in the crystalline solid (UO 8 cubes), down to 6.7±0.5 (at 3270 K) in the melt. [ 7 ] Models consistent with these measurements show the melt to consist mainly of UO 6 and UO 7 polyhedral units, where roughly 2 ⁄ 3 of the connections between polyhedra are corner sharing and 1 ⁄ 3 are edge sharing. [ 7 ]
Uranium dioxide is oxidized in contact with oxygen to form triuranium octoxide : [ 8 ]

3 UO 2 + O 2 → U 3 O 8
The electrochemistry of uranium dioxide has been investigated in detail as the galvanic corrosion of uranium dioxide controls the rate at which used nuclear fuel dissolves. See spent nuclear fuel for further details. Water increases the oxidation rate of plutonium and uranium metals. [ 9 ]
Uranium dioxide reacts with carbon at high temperatures, forming uranium carbide and carbon monoxide : [ 10 ]

UO 2 + 3 C → UC + 2 CO
This process must be done under an inert gas as uranium carbide is easily oxidized back into uranium oxide .
UO 2 is used mainly as nuclear fuel , specifically as UO 2 or as a mixture of UO 2 and PuO 2 ( plutonium dioxide ) called a mixed oxide ( MOX fuel ), in the form of fuel rods in nuclear reactors . [ 11 ]
The thermal conductivity of uranium dioxide is very low when compared with elemental uranium , uranium nitride , uranium carbide and zircaloy cladding material as well as most uranium-based alloys. [ 12 ] [ 13 ] [ 14 ] This low thermal conductivity can result in localised overheating in the centres of fuel pellets. [ 15 ]
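The consequence for pellet centreline temperature can be estimated from the classic result for a solid cylinder with uniform volumetric heating, ΔT = q′ / (4πk), where q′ is the linear heat rate and k the thermal conductivity. The sketch below is an illustration added here; the linear heat rate and the conductivities are rough assumed values, not figures from this article:

```python
import math

def centerline_temperature_rise(q_linear_w_per_m: float, k_w_per_m_k: float) -> float:
    """Steady-state temperature rise, pellet centre over surface,
    for a solid cylinder with uniform heating: dT = q' / (4*pi*k)."""
    return q_linear_w_per_m / (4 * math.pi * k_w_per_m_k)

q_lin = 20e3  # W/m, a typical light-water-reactor linear heat rate (assumed)
for name, k in [("UO2", 3.0), ("UN", 15.0), ("U metal", 27.0)]:  # rough W/(m*K)
    print(f"{name:8s} dT ~ {centerline_temperature_rise(q_lin, k):4.0f} K")
```

With these assumed numbers the UO 2 pellet runs several hundred kelvin hotter at its centre than at its surface, roughly an order of magnitude more than the metal, which is the localised overheating referred to above.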
For fuels compared at the same thermal power density and the same pellet diameter, the resulting temperature gradient across the pellet therefore differs markedly between fuel compounds.
Uranium oxide (urania) was used to color glass and ceramics prior to World War II, and until the applications of radioactivity were discovered this was its main use. In 1958 the military in both the US and Europe allowed its commercial use again as depleted uranium, and its use began again on a more limited scale. Urania-based ceramic glazes are dark green or black when fired in a reduction or when UO 2 is used; more commonly it is used in oxidation to produce bright yellow, orange and red glazes. [ 16 ] Orange-colored Fiestaware is a well-known example of a product with a urania-colored glaze. [ 17 ] Uranium glass is pale green to yellow and often has strong fluorescent properties. [ 18 ] Urania has also been used in formulations of enamel and porcelain . [ 19 ] It is possible to determine with a Geiger counter if a glaze or glass produced before 1958 contains urania.
Prior to the realisation of the harmfulness of radiation, uranium was included in false teeth and dentures, as its slight fluorescence made the dentures appear more like real teeth in a variety of lighting conditions. [ 20 ]
Depleted UO 2 (DUO 2 ) can be used as a material for radiation shielding . For example, DUCRETE is a "heavy concrete " material where gravel is replaced with uranium dioxide aggregate; this material is investigated for use for casks for radioactive waste . [ 21 ] Casks can be also made of DUO 2 - steel cermet , a composite material made of an aggregate of uranium dioxide serving as radiation shielding, graphite and/or silicon carbide serving as neutron radiation absorber and moderator, and steel as the matrix, whose high thermal conductivity allows easy removal of decay heat. [ 22 ]
Depleted uranium dioxide can be also used as a catalyst , e.g. for degradation of volatile organic compounds in gaseous phase, oxidation of methane to methanol , and removal of sulfur from petroleum . It has high efficiency and long-term stability when used to destroy VOCs when compared with some of the commercial catalysts , such as precious metals , TiO 2 , and Co 3 O 4 catalysts. Much research is being done in this area, DU being favoured for the uranium component due to its low radioactivity. [ 23 ]
The use of uranium dioxide as a material for rechargeable batteries is being investigated. [ 24 ] The batteries could have a high power density and a reduction potential of -4.7 V per cell. [ 25 ] Another investigated application is in photoelectrochemical cells for solar-assisted hydrogen production where UO 2 is used as a photoanode . In earlier times, uranium dioxide was also used as a heat conductor for current limitation (URDOX-resistor), which was the first use of its semiconductor properties.
Uranium dioxide displays strong piezomagnetism in the antiferromagnetic state, observed at cryogenic temperatures below 30 kelvins . Accordingly, the linear magnetostriction found in UO 2 changes sign with the applied magnetic field and exhibits magnetoelastic memory switching phenomena at record high switch-fields of 180,000 Oe. [ 26 ] The microscopic origin of the material's magnetic properties lies in the face-centered-cubic crystal lattice symmetry of uranium atoms, and its response to applied magnetic fields. [ 27 ]
The band gap of uranium dioxide is comparable to those of silicon and gallium arsenide , near the optimum of the efficiency-versus-band-gap curve for absorption of solar radiation, suggesting its possible use for very efficient solar cells based on the Schottky diode structure; it also absorbs at five different wavelengths, including infrared, further enhancing its efficiency. Its intrinsic conductivity at room temperature is about the same as that of single-crystal silicon. [ 28 ]
The dielectric constant of uranium dioxide is about 21.5, [ 29 ] almost twice that of silicon (11.7) [ 30 ] and GaAs (12.4). [ 31 ] This is an advantage over Si and GaAs in the construction of integrated circuits , as it may allow higher-density integration with higher breakdown voltages and with lower susceptibility to CMOS tunnelling breakdown. [ 32 ]
The Seebeck coefficient of uranium dioxide at room temperature is about -750 μV/K, a value significantly higher than the -270 μV/K of thallium tin telluride (Tl 2 SnTe 5 ) and thallium germanium telluride (Tl 2 GeTe 5 ) [ 32 ] and the −170 μV/K ( n-type ) / 160 μV/K ( p-type ) of bismuth telluride , [ 33 ] other materials promising for thermoelectric power generation applications [ 32 ] and Peltier elements .
The radioactive decay impact of 235 U and 238 U on its semiconducting properties had not been measured as of 2005. Due to the slow decay rate of these isotopes, it should not meaningfully influence the properties of uranium dioxide solar cells and thermoelectric devices, but it may become an important factor for high-performance integrated circuits . Use of depleted uranium oxide is necessary for this reason. The capture of alpha particles emitted during radioactive decay as helium atoms in the crystal lattice may also cause gradual long-term changes in its properties. [ 32 ]
The stoichiometry of the material dramatically influences its electrical properties. For example, the electrical conductivity of UO 1.994 is orders of magnitude lower at higher temperatures than the conductivity of UO 2.001 . [ 32 ]
Uranium dioxide, like U 3 O 8 , is a ceramic material capable of withstanding high temperatures (about 2300 °C, in comparison with at most 200 °C for silicon or GaAs), [ 32 ] making it suitable for high-temperature applications like thermophotovoltaic devices.
Uranium dioxide is also resistant to radiation damage, [ 32 ] making it useful for rad-hard devices for special military and aerospace applications. [ 32 ]
A Schottky diode of U 3 O 8 and a p-n-p transistor of UO 2 were successfully manufactured in a laboratory. [ 34 ]
Uranium dioxide is known to be absorbed by phagocytosis in the lungs. [ 35 ]
|
https://en.wikipedia.org/wiki/Uranium_dioxide
|
Uranium ditelluride is an inorganic compound with the formula UTe 2 . It was discovered to be an unconventional superconductor in 2018. [ 1 ]
Superconductivity in UTe 2 appears to be a consequence of spin-triplet electron pairing . [ 2 ] The material acts as a topological superconductor , stably conducting electricity without resistance even in high magnetic fields . [ 1 ] With recent crystal-growth techniques, a superconducting transition temperature of 2.10 K had been reached as of 2025. [ 3 ]
Charge density waves (CDW) [ 4 ] and pair density waves (PDW) [ 5 ] [ 6 ] [ 7 ] have been described in UTe 2 , the latter being the first time a pair density wave has been described in a p-wave superconductor.
|
https://en.wikipedia.org/wiki/Uranium_ditelluride
|
Uranium hexoxide is an unusual, theoretically possible compound of uranium in which the uranium atom would be attached to six oxygen atoms. [ 1 ] [ 2 ] Some sources claimed it would be an unprecedented example of an element in the +12 oxidation state; [ 1 ] for comparison, the highest known oxidation state is +9, for iridium in the cation IrO 4 + . [ 3 ] [ 4 ] This oxidation state assignment requires participation of the 6p electrons of uranium as valence electrons. The assertion was disputed by a later paper, [ 2 ] which formulates the octahedral species as O(−I) and U(VI), although it does acknowledge that the question of valence shell expansion of uranium and other actinoids is complex and that the "semi-core" 6p electrons of uranium are involved to a non-negligible extent in the bonding of structures such as octahedral UO 6 .
Uranium hexoxide is predicted to have octahedral symmetry; however, other forms have been studied. In the singlet octahedral (¹O h ) form the oxygen atoms are oxide ions (O 2− ). In the ¹D 3 form there are three peroxide ions (O 2 2− ). The triplet ³D 2h form has two oxo oxygens and two pairs of superoxide ions (O 2 − ). The octahedral form was calculated to be less energetically favorable than the other geometries, though still predicted to be a local energy minimum. [ 2 ]
|
https://en.wikipedia.org/wiki/Uranium_hexoxide
|
Uranium in the environment is a global health concern, and comes from both natural and man-made sources. Beyond naturally occurring uranium, mining, phosphates in agriculture , weapons manufacturing, and nuclear power are anthropogenic sources of uranium in the environment. [ 1 ]
In the natural environment, radioactivity of uranium is generally low, [ 1 ] but uranium is a toxic metal that can disrupt normal functioning of the kidney, brain, liver, heart, and numerous other systems. [ 2 ] Chemical toxicity can cause public health issues when uranium is present in groundwater, especially if concentrations in food and water are increased by mining activity. [ 1 ] The biological half-life (the average time it takes for the human body to eliminate half the amount in the body) for uranium is about 15 days. [ 3 ]
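A biological half-life of about 15 days implies first-order elimination, so the retained fraction falls by half every 15 days. A minimal sketch of that decline (the 15-day figure is from the text above; the time points are arbitrary):

```python
# First-order biological elimination of uranium with a half-life of
# ~15 days (value from the text above).
T_BIO_DAYS = 15.0

def fraction_remaining(t_days: float) -> float:
    """Fraction of the initial body burden remaining after t_days."""
    return 0.5 ** (t_days / T_BIO_DAYS)

for t in (15, 30, 60, 90):
    print(f"after {t:3d} days: {fraction_remaining(t) * 100:5.1f}% remains")
# ~50% at 15 days, ~25% at 30 days, ~6% at 60 days, ~2% at 90 days
```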
Uranium's radioactivity can present health and environmental issues in the case of nuclear waste produced by nuclear power plants or weapons manufacturing.
Uranium is weakly radioactive and remains so because of its long physical half-life (4.468 billion years for uranium-238 ). The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects. [ 4 ] [ 5 ]
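To see quantitatively what "weakly radioactive" means, the specific activity of uranium-238 can be estimated from its half-life using standard constants (Avogadro's number and a molar mass of 238 g/mol; only the half-life comes from the text):

```python
import math

# Specific activity of U-238: A = lambda * N, with lambda = ln(2) / T_half.
T_HALF_YEARS = 4.468e9        # half-life of U-238 (from the text)
SECONDS_PER_YEAR = 3.156e7
AVOGADRO = 6.022e23
MOLAR_MASS_G = 238.0          # g/mol for U-238

decay_constant = math.log(2) / (T_HALF_YEARS * SECONDS_PER_YEAR)  # 1/s
atoms_per_gram = AVOGADRO / MOLAR_MASS_G
activity_bq_per_gram = decay_constant * atoms_per_gram

print(f"~{activity_bq_per_gram:.3g} Bq per gram")  # ~1.24e4 Bq/g
# Roughly 12,400 decays per second per gram; for comparison, radium-226
# has a specific activity of about 3.7e10 Bq/g.
```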
Uranium is a naturally occurring element found at low levels within all rock, soil, and water. It is the highest-numbered element found naturally in significant quantities on Earth. According to the United Nations Scientific Committee on the Effects of Atomic Radiation, the normal concentration of uranium in soil is 300 μg/kg to 11.7 mg/kg. [ 6 ]
It is considered to be more plentiful than antimony, beryllium, cadmium, gold, mercury, silver, or tungsten, and is about as abundant as tin, arsenic, or molybdenum. It is found in many minerals including uraninite (the most common uranium ore), autunite, uranophane, torbernite, and coffinite. [ 7 ] There are significant concentrations of uranium in some substances, such as phosphate rock deposits, and in minerals such as lignite and monazite sands, from which it is recovered commercially where the ores are sufficiently uranium-rich. Coal fly ash from uranium-bearing coal is particularly rich in uranium, and there have been several proposals to "mine" this waste product for its uranium content. [ 8 ] [ 9 ] Because some of the ash produced in a coal power plant escapes through the smokestack, the radioactive contamination released by coal power plants in normal operation is actually higher than that of nuclear power plants. [ 10 ] [ 11 ]
Seawater contains about 3.3 parts per billion (3.3 μg/kg of uranium by weight or 3.3 micrograms per liter ). [ 12 ]
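At 3.3 μg/kg, the total dissolved uranium inventory of the oceans is enormous. A back-of-the-envelope sketch, assuming a total seawater mass of roughly 1.4 × 10²¹ kg (an order-of-magnitude figure not stated in the text):

```python
# Rough ocean uranium inventory: concentration (from the text) times an
# assumed total seawater mass of ~1.4e21 kg.
URANIUM_FRACTION_BY_WEIGHT = 3.3e-9  # 3.3 micrograms per kilogram
SEAWATER_MASS_KG = 1.4e21            # assumed, order-of-magnitude figure

uranium_kg = URANIUM_FRACTION_BY_WEIGHT * SEAWATER_MASS_KG
print(f"~{uranium_kg / 1e3:.2g} tonnes of uranium")  # ~4.6e9 tonnes
```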
Mining is the largest source of uranium contamination in the environment. [ 1 ] Uranium milling creates radioactive waste in the form of tailings , which contain uranium, radium, and polonium. Consequently, uranium mining results in "the unavoidable radioactive contamination of the environment by solid, liquid and gaseous wastes". [ 13 ]
Seventy percent of global uranium resources are on or adjacent to traditional lands belonging to Indigenous people, and perceived environmental risks associated with uranium mining have resulted in environmental conflicts involving multiple actors, in which local campaigns have become national or international debates. [ 14 ]
Some of these environmental conflicts have limited uranium exploration. Incidents at Ranger Uranium Mine in the Northern Territory of Australia and disputes over Indigenous land rights led to increased opposition to development of the nearby Jabiluka deposits and suspension of that project in the early 2000s. Similarly, environmental damage from uranium mining on traditional Navajo lands in the southwestern United States resulted in restrictions on additional mining in Navajo lands in 2005. [ 14 ]
The radiation hazards of uranium mining and milling were not appreciated in the early years, resulting in workers being exposed to high levels of radiation. Inhalation of radon gas caused sharp increases in lung cancers among underground uranium miners employed in the 1940s and 1950s. [ 15 ]
Military activity is a source of uranium, especially at nuclear or munitions testing sites. Depleted uranium (DU) is a byproduct of uranium enrichment that is used for defensive armor plating and armor-piercing projectiles . Uranium contamination has been found at testing sites in the UK, in Kazakhstan, and in several countries as a result of DU munitions used in the Gulf War and the Yugoslav wars . [ 1 ] During a three-week period of conflict in 2003 in Iraq , 1,000 to 2,000 tonnes of DU munitions were used. [ 16 ]
Combustion and impact of DU munitions can produce aerosols that disperse uranium metal into the air and water where it can be inhaled or ingested by humans. [ 17 ] A United Nations Environment Programme (UNEP) study has expressed concerns about groundwater contamination from these munitions. [ 18 ] Studies of DU aerosol exposure suggest that uranium particles would quickly settle out of the air, [ 19 ] and thus should not affect populations more than a few kilometres from target areas. [ 17 ]
The nuclear power industry is also a source of uranium in the environment in the form of radioactive waste or through nuclear accidents such as Three Mile Island or the Chernobyl disaster . [ 14 ] Perceived risks of contamination associated with this industry contribute to the anti-nuclear movement . [ 14 ]
In 2020, there were over 250,000 metric tons of high-level radioactive waste being stored globally in temporary containers. This waste is produced by nuclear power plants and weapons facilities, and is a serious human health and environmental issue. There are plans to permanently dispose of high-level waste in deep geological repositories , but none of these are operational. Corrosion of aging temporary containers has caused some waste to leak into the environment. [ 20 ]
As spent uranium dioxide fuel is very insoluble in water, it is likely to release uranium (and fission products ) even more slowly than borosilicate glass when in contact with water. [ 21 ]
Soluble uranium salts are toxic, though less so than those of other heavy metals such as lead or mercury . The organ which is most affected is the kidney . Soluble uranium salts are readily excreted in the urine , although some accumulation in the kidneys does occur in the case of chronic exposure. The World Health Organization has established a daily "tolerated intake" of soluble uranium salts for the general public of 0.5 μg/kg body weight (or 35 μg for a 70 kg adult): exposure at this level is not thought to lead to any significant kidney damage. [ 22 ] [ 23 ]
Tiron may be used to remove uranium from the human body in a form of chelation therapy. [ 24 ] Bicarbonate may also be used, as uranium(VI) forms complexes with the carbonate ion.
Uranium mining produces toxic tailings that are radioactive and may contain other toxic elements such as radon . Dust and water leaving tailing sites may carry long-lived radioactive elements that enter water sources and the soil, increase background radiation , and eventually be ingested by humans and animals. A 2013 analysis in a medical journal found that, "The effects of all these sources of contamination on human health will be subtle and widespread, and therefore difficult to detect both clinically and epidemiologically." [ 25 ] A 2019 analysis of the global uranium industry said that the industry was shifting mining activities toward the Global South where environmental regulations are typically less stringent; and that people in impacted communities would "surely experience adverse environmental consequences" and public health issues arising from mining activities carried out by powerful multi-national corporations or mining companies based in foreign countries. [ 26 ]
In 1950, the US Public Health Service began a comprehensive study of uranium miners, leading to the first publication of a statistical correlation between cancer and uranium mining, released in 1962. [ 27 ] The federal government eventually regulated the standard amount of radon in mines, setting the level at 0.3 WL (working level) on January 1, 1969. [ 28 ]
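The "working level" (WL) is a unit of radon-progeny concentration: 1 WL corresponds to any combination of short-lived radon decay products in one liter of air that releases 1.3 × 10⁵ MeV of potential alpha-particle energy, and cumulative exposure is tracked in working level months (WLM), one WLM being exposure to 1 WL for a 170-hour working month. A sketch of the conversion (the WL and WLM definitions are standard; the example exposure is hypothetical):

```python
# Working level (WL) and working level month (WLM) conversions.
# 1 WL = 1.3e5 MeV of potential alpha energy per liter of air;
# 1 WLM = exposure to 1 WL for 170 working hours.
MEV_PER_LITER_PER_WL = 1.3e5
HOURS_PER_WORKING_MONTH = 170.0

def wlm(concentration_wl: float, hours: float) -> float:
    """Cumulative exposure in working level months."""
    return concentration_wl * hours / HOURS_PER_WORKING_MONTH

# Hypothetical example: a full work year (~2000 h) at the 0.3 WL limit.
print(f"{wlm(0.3, 2000):.1f} WLM per year")  # ~3.5 WLM
```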
Out of 69 present and former uranium milling sites in 12 states, 24 have been abandoned, and are the responsibility of the US Department of Energy . [ 29 ] Accidental releases from uranium mills include the 1979 Church Rock uranium mill spill in New Mexico, called the largest accident of nuclear-related waste in US history, and the 1986 Sequoyah Corporation Fuels Release in Oklahoma. [ 30 ]
In 1990, Congress passed the Radiation Exposure Compensation Act (RECA), granting reparations for those affected by mining, with amendments passed in 2000 to address criticisms of the original act. [ 27 ]
The use of depleted uranium (DU) in munitions is controversial because of questions about potential long-term health effects. [ 4 ] [ 5 ] [ 31 ] Normal functioning of the kidney, brain, liver, heart, and numerous other systems can be affected by uranium exposure, because uranium is a toxic metal. [ 2 ] Some people have raised concerns about the use of DU munitions because of its mutagenicity, [ 32 ] teratogenicity in mice, [ 33 ] [ 34 ] neurotoxicity, [ 35 ] and its suspected carcinogenic potential. Additional concerns address unexploded DU munitions leaching into groundwater over time. [ 36 ]
The toxicity of DU is a point of medical controversy. Multiple studies using cultured cells and laboratory rodents suggest the possibility of leukemogenic, genetic, reproductive, and neurological effects from chronic exposure. [ 4 ] A 2005 epidemiology review concluded: "In aggregate the human epidemiological evidence is consistent with increased risk of birth defects in offspring of persons exposed to DU." [ 37 ] The World Health Organization states that no risk of reproductive, developmental, or carcinogenic effects has been reported in humans due to DU exposure. [ 38 ] [ 39 ] This report has been criticized by Dr. Keith Baverstock for not including possible long-term effects. [ 40 ]
Most scientific studies have found no link between uranium and birth defects, but some report statistical differences in reproductive abnormalities between soldiers who were exposed to DU and those who were not.
One study found epidemiological evidence for increased risk of birth defects in the offspring of persons exposed to DU. [ 37 ] Several sources have attributed an increased rate of birth defects in the children of Gulf War veterans and in Iraqis to inhalation of depleted uranium. [ 34 ] [ 41 ] A 2001 study of 15,000 Gulf War combat veterans and 15,000 control veterans found that the Gulf War veterans were 1.8 (fathers) to 2.8 (mothers) times more likely to have children with birth defects. [ 42 ] A study of Gulf War veterans from the UK found a 50% increase in reported malformed pregnancies among men relative to non-Gulf War veterans. The study did not find correlations between Gulf War deployment and other adverse outcomes such as stillbirth, chromosomal malformations, or congenital syndromes. The father's service in the Gulf War was associated with an increased rate of miscarriage, but the mother's service was not. [ 43 ]
Uranium causes reproductive defects and other health problems in rodents , frogs and other animals. Uranium was also shown to have cytotoxic, genotoxic and carcinogenic effects in animals. [ 44 ] [ 45 ] It has been shown in rodents and frogs that water-soluble forms of uranium are teratogenic . [ 37 ] [ 33 ] [ 34 ]
Bacteria, including Pseudomonadota such as Geobacter and Burkholderia fungorum (strain Rifle), can reduce and fix uranium in soil and groundwater. [ 46 ] [ 47 ] [ 48 ] These bacteria change soluble U(VI) into the highly insoluble, complex-forming U(IV) ion, hence stopping chemical leaching.
It has been suggested that it is possible to form a reactive barrier by adding an amendment to the soil that causes the uranium to become fixed. One method is to use a mineral (apatite); [ 49 ] a second is to add a substrate such as acetate to the soil, which enables bacteria to reduce the uranium(VI) to uranium(IV), which is much less soluble. In peat-like soils, the uranium will tend to bind to humic acids, which tends to fix it in the soil. [ 50 ]
|
https://en.wikipedia.org/wiki/Uranium_in_the_environment
|
In materials science and materials engineering , uranium metallurgy is the study of the physical and chemical behavior of uranium and its alloys . [ 1 ]
Commercial-grade uranium can be produced through the reduction of uranium halides with alkali or alkaline earth metals. Uranium metal can also be made through electrolysis of KUF 5 or UF 4 dissolved in molten CaCl 2 and NaCl. Very pure uranium can be produced through the thermal decomposition of uranium halides on a hot filament.
The uranium isotope 235 U is used as the fuel for nuclear reactors and nuclear weapons. It is the only isotope existing in nature to any appreciable extent that is fissile, that is, fissionable by thermal neutrons. The isotope 238 U is also important because it absorbs neutrons to produce a radioactive isotope that subsequently decays to 239 Pu (plutonium), which is also fissile. Uranium in its natural state comprises just 0.71% 235 U and 99.3% 238 U, so a main focus of uranium metallurgy is the enrichment of uranium through isotope separation.
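Enrichment effort is conventionally measured in separative work units (SWU), computed from the feed, product, and tails assays with the standard value function V(x) = (2x − 1) ln(x / (1 − x)). A minimal sketch for a typical case; the 0.71% feed assay is from the text, while the 4.5% product and 0.3% tails assays are illustrative assumptions:

```python
import math

def value(x: float) -> float:
    """Standard separative value function V(x) = (2x - 1) ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(xp: float, xf: float, xt: float) -> tuple[float, float]:
    """Feed mass and separative work needed per kg of enriched product."""
    feed = (xp - xt) / (xf - xt)  # mass balance on the U-235 content
    tails = feed - 1.0
    work = value(xp) + tails * value(xt) - feed * value(xf)
    return feed, work

# Illustrative: 0.71% natural feed enriched to 4.5% product with 0.3% tails.
feed, work = swu_per_kg_product(xp=0.045, xf=0.0071, xt=0.003)
print(f"{feed:.1f} kg natural uranium and {work:.1f} SWU per kg of product")
# Roughly 10 kg of feed and ~6 SWU per kg of reactor-grade product.
```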
|
https://en.wikipedia.org/wiki/Uranium_metallurgy
|
The relationship between uranium mining and the Navajo people began in 1944 in northeastern Arizona , northwestern New Mexico , and southeastern Utah .
In the 1950s, the Navajo Nation was situated directly in the uranium mining belt that experienced a boom in production, and many residents found work in the mines. Prior to 1962, the risks of lung cancer due to uranium mining were unknown to the workers, and the lack of a word for radiation in the Navajo language left the miners unaware of the associated health hazards. [ 1 ] Because water holds deep cultural significance for the Navajo people, environmental damage to both the land and livestock has inhibited the ability of the Navajo people to practice their culture. [ 2 ]
The Navajo Nation was affected by the United States' largest radioactive accident during the Church Rock uranium mill spill in 1979 when a tailings pond upstream from Navajo County breached its dam and sent radioactive waste down the Puerco River, injuring people and killing livestock. [ 3 ]
On the Navajo Nation , approximately 15% of people do not have access to running water . [ 4 ] Navajo Nation residents are often forced to resort to unregulated water sources that are susceptible to bacteria , fecal matter , and uranium. Extensive uranium mining in the region during the mid-20th century is a contemporary concern because of contamination of these commonly used sources, in addition to the lingering health effects of exposure from mining.
Water on the Navajo Nation currently averages 90 micrograms of uranium per liter, with some areas reaching upwards of 700 micrograms per liter. [ 5 ] In contrast, the Environmental Protection Agency (EPA) sets 30 micrograms per liter as the maximum safe level of uranium in water sources. [ 6 ] Health impacts of uranium consumption include kidney damage and failure, as kidneys are unable to filter uranium out of the bloodstream. [ 7 ] The average rate of end-stage renal disease in the Navajo Nation is 0.63%, significantly higher than the national average of 0.19%. [ 8 ]
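For scale, the reported concentrations can be compared directly against the EPA limit; a trivial sketch (all figures are taken from the text above):

```python
# Compare reported uranium concentrations against the EPA limit of 30 ug/L.
EPA_LIMIT_UG_PER_L = 30.0

reported = {"Navajo Nation average": 90.0, "worst-affected areas": 700.0}
for site, conc in reported.items():
    factor = conc / EPA_LIMIT_UG_PER_L
    print(f"{site}: {conc:.0f} ug/L = {factor:.0f}x the EPA limit")
# average ~3x the limit; worst-affected areas ~23x
```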
The U.S. Environmental Protection Agency (EPA) has been cleaning up uranium mines on the Navajo Nation as part of settlements through the Superfund program since 1994. The Abandoned Mine Land program and the Contaminated Structures Program have facilitated the cleanup of mines and the demolition of structures built with radioactive materials. [ 9 ] Criticisms of unfair, inefficient treatment have been made repeatedly of the EPA by Navajos and journalists. [ 10 ] [ 11 ] [ 12 ]
In October 2021, the Inter-American Commission on Human Rights agreed to hear a case filed by the Eastern Navajo Diné Against Uranium Mining, which accused the United States government of violating the human rights of Navajo Nation members. [ 13 ] Environmental journalist Cody Nelson explains that "the US government and its Nuclear Regulatory Commission (NRC) have violated their human rights by licensing uranium mines in their communities" (Nelson, "'Ignored for 70 Years': Human Rights Group to Investigate Uranium Contamination on Navajo Nation"). Nelson also writes: "There is no moral value in having an international human rights body lay bare the abuses of the nuclear industry and the US government's complicity in those abuses." [ 14 ]
In 1944, uranium mining under the U.S. military's Manhattan Project began on Navajo Nation lands and on Lakota Nation lands. On August 1, 1946, responsibility for atomic science and technology was transferred from the military to the United States Atomic Energy Commission. In a paper archived by the National Library of Medicine, Dr. Doug Brugge and Dr. Rob Goble explain: "After its initial dependence on foreign sources, the US Atomic Energy Commission (AEC) announced in 1948 that it would guarantee a price for and purchase all uranium ore mined in the United States. This initiated a mining "boom" on the Colorado Plateau in New Mexico, Utah, Colorado, and Arizona that replaced a more limited mining industry centered first on radium and then vanadium, which are found in the same easy-to-mine, soft sandstone ore. The US government remained, by law, the sole purchaser of uranium in the United States until 1971, but private companies operated the mines" (Brugge and Goble, "The History of Uranium Mining and the Navajo People"). [ 1 ] Widespread uranium mining took place on Navajo and Lakota lands amid the nuclear arms race with the Soviet Union throughout the Cold War.
Large uranium deposits were mined on and near the Navajo Reservation in the Southwest , and these were developed through the 20th century. Absent much environmental regulation prior to the founding of the Environmental Protection Agency in 1970 and passage of related laws, the mining endangered thousands of Navajo workers, as well as producing contamination that has persisted in adversely affecting air and water quality, and contaminating Navajo lands.
Private companies hired thousands of Navajo men to work the uranium mines. Disregarding the known health risks of exposure to uranium, the private companies and the United States Atomic Energy Commission failed to inform the Navajo workers about the dangers and to regulate the mining to minimize contamination. As more data was collected, they were slow to take appropriate action for the workers.
In 1951, the U.S. Public Health Service began a human testing experiment on Navajo miners, without their informed consent, as part of the federal government's study of the long-term health effects of radiation poisoning. Navajo pathologist Phillida A. Charley states that "The Navajo miners were never told about the health or environmental effects of mining uranium" and that "Some miners took rocks from the mines to build their homes or chimneys" (Charley, "Walking in Beauty: A Navajo scientist confronts the legacy of uranium mining"). The Navajo miners continued to work, unaware of either the experiment or the significant health impacts. [ 15 ] The USPHS had begun an earlier human testing experiment, on African-American men, in its Tuskegee syphilis experiment in 1932. The experiment on Navajo mine workers and their families documented high rates of cancers (including xeroderma pigmentosum ) [ 16 ] and other diseases which manifested from uranium mining and milling contamination. For decades, industry and the government failed to regulate or improve conditions, or to inform workers of the dangers. As high rates of illness began to occur, workers were often unsuccessful in court cases seeking compensation, and the states at first did not officially recognize radon-induced illness. In 1990, the US Congress passed the Radiation Exposure Compensation Act to address cases of uranium poisoning and provide needed compensation, but Navajo Nation applicants have provided evidence that RECA requirements prevent access to necessary compensation. Congressional modifications to RECA application requirements were made in 2000, and further modifications were introduced in 2017 and 2018. [ 17 ]
Since 1988, the Navajo Nation's Abandoned Mine Lands program [ 18 ] has reclaimed mines and cleaned mining sites, but significant problems from the legacy of uranium mining and milling persist today on the Navajo Nation and in the states of Utah, Colorado, New Mexico, and Arizona. More than a thousand abandoned mines have not been contained and cleaned up, and these present environmental and health risks in Navajo communities. [ 19 ] The Environmental Protection Agency estimates that there are 4,000 mines with documented uranium production, and another 15,000 locations with uranium occurrences, in 14 western states. [ 20 ] Most are located in the Four Corners area and Wyoming. [ 21 ]
The Uranium Mill Tailings Radiation Control Act (1978) is a United States environmental law that amended the Atomic Energy Act of 1954 and authorized the Environmental Protection Agency to establish health and environmental standards for the stabilization, restoration , and disposal of uranium mill waste . [ 22 ] Cleanup has continued to be difficult, and EPA administers several Superfund sites located on the Navajo Nation.
On April 29, 2005, Navajo Nation President Joe Shirley Jr. signed the Diné Natural Resources Protection Act of 2005 that outlaws uranium mining and processing on Navajo Nation lands.
Pressure for uranium mining increased in the postwar years, when the United States developed resources to compete with the Soviet Union in the Cold War . In 1948, the United States Atomic Energy Commission (AEC) announced it would be the sole purchaser of any uranium mined in the United States, to cut off dependence on imported uranium. The AEC would not mine the uranium; it contracted with private mining companies for the product. [ 23 ] The subsequent mining boom led to the creation of thousands of mines; 92% of all western mines were located on the Colorado Plateau because of regional resources. [ 24 ]
The Navajo Nation encompasses portions of Arizona , New Mexico , and Utah , and their reservation was a key area for uranium mining. More than 1000 mines were established by leases in the reservation. [ 24 ] From 1944 to 1986, an estimated 3,000 to 5,000 Navajo people worked in the uranium mines on their land. [ 25 ] Other work was scarce on and near the reservation, and many Navajo men traveled miles to work in the mines, sometimes taking their families with them. [ 23 ] Between 1944 and 1989, 3.9 million tons of uranium ore were mined from the mountains and plains. [ 26 ]
In 1951, the US Public Health Service began a massive human medical experiment on approximately 4,000 Navajo uranium miners, without their informed consent. Neither the miners nor their families were warned of the risks of nuclear radiation and contamination as the USPHS continued its experiment. In 1955, the USPHS took over Native American medical health services from the Bureau of Indian Affairs, and the experiments on nuclear radiation continued. In 1962 it published the first report to show a statistical correlation between cancer and uranium mining. [ 24 ] The federal government finally regulated the standard amount of radon in mines, setting the level at 0.3 working level (WL) on January 1, 1969, [ 23 ] but Navajo people attending mining schools before working in the mines were still not informed of the health risks of uranium poisoning in 1971. Reports continued to be published from the USPHS's non-consensual medical experiments at least until 1998. The Environmental Protection Agency was established on December 2, 1970, but environmental regulation could not repair the damage already suffered. Navajo miners contracted a variety of cancers, including lung cancer, at much higher rates than the rest of the U.S. population, and they have suffered higher rates of other lung diseases caused by breathing in radon. [ 23 ]
Private companies resisted regulation through lobbying Congress and state legislatures. In 1990, the United States Congress finally passed the Radiation Exposure Compensation Act (RECA), granting reparations for those affected by the radiation . The act was amended in 2000 to address criticisms and problems with the original legislation. [ 24 ]
The tribal council and Navajo delegates remained in control of mining decisions before the adverse health effects of mining were identified. [ 27 ] No one fully understood the effect of radon exposure for miners , as there was insufficient data before the expansion of mining. [ 28 ] [ 29 ]
On July 16, 1979, the tailings pond at United Nuclear Corporation's uranium mill in Church Rock, New Mexico, breached its dam. More than 1,000 tons of radioactive mill waste and 93 million gallons of acidic, radioactive tailings solution and mine effluent flowed into the Puerco River, and contaminants traveled 80 miles (130 km) downstream to Navajo County, Arizona. [ 3 ] The flood backed up sewers, affected nearby aquifers and left stagnating, contaminated pools on the riverside. [ 30 ] [ 31 ] [ 32 ] Public health professor Dr. Doug Brugge explains that "residents in proximity to the mine site were almost entirely Navajo and relied on nearby Puerco River as a watering source for their livestock. In addition, local medicine men derived remedies from the native plants that grew along the riverbank, and children played in the river during hot summer months" (Brugge, "The Sequoyah Corporation Fuels Releases and the Church Rock Spill: Unpublicized Nuclear Releases in American Indian Communities"). The affected aquifers were primarily used by the Navajo Nation, so the spill had severe impacts on their health and way of life. [ 30 ]
More radioactivity was released in the spill than in the Three Mile Island accident that occurred four months earlier. [ 33 ] It has been reported as the largest radioactive accident in U.S. history.
The state contingency plan relied on English-only notification of the largely Navajo populace affected by the spill, so local residents did not learn immediately of the toxic danger. [ 33 ] The locals were accustomed to using the riverside for recreation and herb gathering. Residents who waded in the acidic water went to the hospital complaining of burning feet and were misdiagnosed with heat stroke. Sheep and cattle died en masse. [ 31 ] Brugge states that "In August 1979, the chairman of the Navajo Tribal Council's Emergency Services Coordinating Committee sent a telegram to the Governor of New Mexico requesting that he declare a state of emergency and that McKinley County be declared a disaster area. The request was denied. It was the first of many denials for assistance, which resulted in significant downplay of a nuclear release" (Brugge, "The Sequoyah Corporation Fuels Releases and the Church Rock Spill: Unpublicized Nuclear Releases in American Indian Communities"). This further limited the amount of disaster relief the Navajo Nation received. [ 30 ]
For nearly two years, the state and federal government trucked in water to the reservation, but ended the program in 1981. Farmers had little choice but to resume use of the river for watering livestock and crops. [ 34 ]
Concerned over the adverse health consequences which Europeans had experienced from uranium mines, William Bale and John Harley conducted an independent study. Their work led the US Public Health Service (USPHS) to begin its study of uranium mine workers. Bale and Harley's studies focused on identifying the level of radon in mines and assessing any correlation with disease, specifically lung cancer. Radon, they found, can attach to mine dust, which is inhaled and subsequently concentrated in the lung tissue. Because of this, workers' lungs were exposed to radon decay products at concentrations up to 100 times higher than measurements of radon gas alone indicated. [ 24 ] The USPHS study was subsequently launched in 1951, with two goals: to identify uranium mine environment exposures, and to conduct a medical evaluation of the miners. [ 24 ]
The USPHS study raised ethical concerns. The Navajo workers were rarely notified of the possible dangers which the USPHS was studying. [ 23 ] As late as 1960, the USPHS medical consent form failed to inform miners about the possible health risks of working in the mine. [ 24 ] The Advisory Committee on Human Radiation Experiments, created in 1994 to explore the treatment of the workers, said: "Had they been better informed, they could have sought help in publicizing the fact that working conditions in the mines were extremely hazardous, which might have resulted in some mines being ventilated earlier than they were." [ 24 ] The USPHS failed to abide by a centerpiece of the Nuremberg Code (1947) by failing to obtain informed consent from the subjects of a research study. [ 23 ]
In 1952, the USPHS issued two reports describing exceptionally high concentrations of radon in these uranium mines, even higher than those found in European mines years before. [ 24 ] Medically, little evidence of sickness was found at that point, but the latency from exposure to disease, also seen among the European cases, explains why there were few medical effects observed at this early stage. [ 24 ] In a private meeting between the AEC and the USPHS, the AEC informed the USPHS scientists that not only could the high radon levels eventually cause cancer, but proper ventilation of the mines could avoid the problem. [ 23 ] The government failed to take any action on this finding. [ 23 ]
The USPHS continued to study the uranium miners, eventually including 4,000 American Indian and non-Indian underground uranium miners. They added miners in 1951, 1953, 1954, 1957 and 1960. [ 24 ] In 1962, the USPHS published the first account of the effects of radon exposure. It found a significant correlation between radon exposure and cancer. [ 23 ] Additional studies were published in 1968, 1973, 1976, 1981, 1987, 1995 and 1997; these demonstrated linear relationships between radon exposure and lung cancer, a latency period of about 20 years between radon exposure and health effects, and noted that, while smoking tobacco caused a shorter latency period for the development of cancer, it did not fully explain the relationship between radon and cancer. [ 24 ] Similar reports found instances of other diseases such as pneumoconiosis , tuberculosis , chronic obstructive pulmonary disease (COPD), as well as diseases of the blood . [ 24 ] A 2000 study of the number of cancer cases among Navajo uranium mine workers concluded that the miners were 28.6 times more likely to contract the disease than the study's control group. [ 35 ]
Many miners died from radiation-related illnesses. A 1995 report published by American Public Health Association found:
excess mortality rates for lung cancer, pneumoconioses and other respiratory diseases, and tuberculosis for Navajo uranium miners. Increasing duration of exposure to underground uranium mining was associated with increased mortality risk for all three diseases… The most important long-term mortality risks for the Navajo uranium miners continue to be lung cancer and pneumoconioses and other nonmalignant respiratory diseases. [ 36 ]
Over the decades, Navajo miners extracted some four million tons of uranium ore, which was used by the U.S. government primarily to make nuclear weapons. Some miners, unaware of the adverse health effects, carried contaminated rocks and tailings from local mines to build their family homes. These were found to be contaminated, with the family at risk. In 2009, those homes began to be demolished and rebuilt under a new government program, which involved temporarily relocating occupants until the homes could be rebuilt. [ 37 ]
Dr. Leon Gottlieb, a pulmonary specialist, was the first physician to note an increase in lung disorders among the Navajo uranium miners. In a 1982 study he reported that, of 17 Navajos being observed for lung disorders (in this case lung cancer), 16 were uranium miners. [ 38 ] Along with studies on the correlation between uranium mining and lung cancer, other studies suggest that miscarriages, birth defects, reproductive, bone, and gastric cancers, and deaths from heart disease are also health effects related to uranium mining (Churchill 1986, Gofman 1981, McLeod 1985). [ 38 ] Even living near a uranium mill or mining area has been linked to birth defects among babies whose mothers live close to the mill, as well as lung cancer, leukemia, cell damage, renal cancer, and stomach cancer. A study comparing residents living near the mining areas with those living farther away found that the nearby residents suffered disproportionately from such conditions.
Dr. Joseph Wagoner, a health expert who had collected data on the health effects of uranium for the US Public Health Service since 1960, reported that from 1960 to 1974 there were 144 cancer deaths among 3,500 miners, 700 to 800 of whom were Navajo. [ 38 ] Statistically, approximately 30 deaths would have been expected instead of the 144 which were discovered (Bergman 1982). Apart from respiratory diseases and other significant health problems, the American Indian communities experienced psycho-social problems such as depression and anxiety. [ 40 ] Residents near the uranium mills reported increased levels of anxiety due to their proximity to the mills and the health hazards of their living conditions, along with the lack of awareness among workers who brought contaminated rocks back to their homes.
A study conducted at the National University of General San Martín reviewed the cellular consequences of the inhalation of uranium compounds. Examining the accumulation of both insoluble and soluble uranium in macrophages (macrophages being among the main cells to respond to internalized metallic particles), it demonstrated that inhalation exposure to both uranium compounds resulted in the breakage of DNA strands along with increased production of inflammatory cytokines and hydroperoxides. [ 41 ] This review covered the molecular impacts of uranium contamination that could result in respiratory diseases (neoplasia and fibrosis). [ 41 ]
Following the publication of the reports in the early 1950s, some private contractors attempted to properly ventilate their mines. The states of Colorado, New Mexico, and Utah established minimum standards for radon concentrations (Dawson and Madsen 2007). But the AEC was lax in enforcement of the rules, and AEC commissioners did not establish national radon standards at the time the studies were released. [ 24 ] The AEC said it had no authority to regulate uranium, even though it regulated beryllium. The health and activist communities have criticized the AEC for its failure to act on the scientific reports, which the agency suppressed. [ 24 ]
Government and uranium industry personnel were privy to the information, but it was not until the 1960s that workers were informed of the environmental dangers. [ 24 ] The government response continued to be slow. Regulation of the uranium industry was first debated in Congress in 1966, but little progress was made. Journalists began to publish stories detailing the illnesses of uranium miners, giving them public attention. [ 23 ] In 1969, Congress set the standard radon level for mines at 0.3 WL. [ 23 ]
Navajo miners began to file lawsuits seeking compensation for health damages, but often lost in court. Still, the publicity, presentation of evidence of harm, and victim testimony gave support to their cause. [ 24 ] Ted Kennedy (D-MA) was the first senator to propose a Radiation Compensation bill, with the goal of avoiding lawsuits and compensating victims fully, though it was defeated in 1979. Orrin Hatch 's (R-UT) 1981 compensation bill was met with a similar fate, and his attempt in 1983 did not reach the Senate floor. [ 24 ]
In 1989, Orrin Hatch, supported by fellow Utah Representative Wayne Owens (D-UT), sponsored the Radiation Exposure Compensation Act (RECA), which was signed into law by President George H. W. Bush on October 15, 1990. [ 24 ] The Radiation Exposure Compensation Act (RECA): "Offers an apology and monetary compensation to individuals who contracted certain cancers and other serious diseases following their exposure to radiation released during above-ground atmospheric nuclear weapons tests or, following their occupational exposure to radiation while employed in the uranium industry during the build-up to the Cold War." [ 42 ] The United States Department of Justice established regulations for implementing the act, related to individuals eligible for payment, and guidelines for identification, including marriage licenses , birth certificates and official documents, some of which the Navajo did not possess. In some cases, the government did not recognize individuals' documentation as legitimate. [ 24 ]
With additional data from the studies by the Public Health Service (PHS), in 2000 the act was amended to correct shortcomings: "The RECA Amendments of 2000 broadened the scope of eligibility for benefits to include two new occupationally exposed claimant categories (uranium mill workers and uranium ore transporters), expanding both the time periods and geographic areas covered, and adding compensable diseases, thus allowing more individuals to be eligible to qualify." [ 43 ] As of November 17, 2009, the government has paid claims of 21,810 people, denied 8,789, and paid $1,455,257,096 in reparations. [ 44 ]
A ban was also placed on uranium mining, specifying that companies have no right to mine or process uranium in Navajo Indian Country. Dr. Tommy Rock, a member of the Navajo Nation from Monument Valley, Utah, and a doctor of earth science and environmental sustainability, states: "Whether drilling on Navajo Trust land, fee simple land, or federal land, mining companies have no right to be drilling in Navajo Indian Country" (Rock, "Guest Column: Navajo Nation, Take Action Now to Stop New Uranium Mining"). [ 45 ]
The Navajo Nation Abandoned Mine Lands (NN AML) are numerous United States Environmental Protection Agency -designated "AML sites" on lands of the Navajo people which were used for mining (e.g., of uranium).
"During the late 1990s, portions...were closed by the Navajo Nation Abandoned Mine Land program". [ 47 ]
EPA (Environmental Protection Agency) maintains a partnership with the Navajo Nation . Since 1994, the Superfund Program has provided technical assistance and funding to assess potentially contaminated sites and to develop a response. The EPA has entered into enforcement agreements and settlements valued at over $1.7 billion to reduce the highest risks of radiation exposure to the Navajo people from AUMs (Abandoned Uranium Mines). As a result, funds are available to begin the assessment and cleanup process at 219 of the 523 abandoned uranium mines as of May 2019. [ 49 ]
The Abandoned Uranium Mine Settlement fact sheet provides information on the enforcement agreements and settlements that address abandoned uranium mines on the Navajo Nation. Uranium mining took place on the Navajo Nation from 1944 to 1986, and some local residents used materials from uranium mines when building their homes. Use of these mining materials can potentially lead to exposure exceeding background (naturally occurring) levels. The materials include ore and waste rock used for foundations, walls, or fireplaces; mine tailings mixed into cement used for foundations, floors, and cinder block walls; and other contaminated building materials (wood, metal, etc.) that may have been salvaged from the abandoned mine areas. [ 50 ]
The EPA and the Navajo Nation Environmental Protection Agency (NNEPA) run the Contaminated Structures Program, which evaluates structures on the Navajo Nation that may have been constructed using abandoned mine materials or built on or near abandoned mines. The program is responsible for evaluating potentially contaminated structures, yards, and materials, as well as removing and cleaning up contaminated structures and materials where there is an exposure risk. The program is for Navajo residents living close to mines or who know their home was built with contaminated materials. Participation in the program is voluntary and at no cost to the resident. The US EPA and NNEPA have completed over 1,100 assessments on the Navajo Nation since the program began in 2007. [ 51 ]
This specific Superfund site for the AUMs on Navajo land has been in existence since 1994, following many years of research on the health effects of uranium mining, which eventually led to the Radiation Exposure Compensation Act in 1990. Since its acceptance as a Superfund site, many federal, tribal, and grassroots organizations have come together to assess and remediate contamination sites on the Navajo Nation. Because there are hundreds of contaminated sites, there have been a few big successes and many communities stuck in limbo. The following is a history of this Superfund site, the organizations that have collaborated on this environmental remediation, and recent criticisms of the handling of this large and complicated problem.
The Abandoned Uranium Mines on the Navajo Nation were established as a Superfund site in 1994 in response to a Congressional hearing brought by the Navajo Nation on November 4, 1993. This hearing included the Environmental Protection Agency (EPA), the Department of Energy (DOE), and the Bureau of Indian Affairs (BIA). Superfund status stems from the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) which allows the United States federal government to assign funds for environmental remediation of uncontrolled hazardous waste sites. [ 52 ] The Navajo Nation is located in Region 9 (Pacific Southwest) of the Superfund which serves Arizona , California , Hawaii , Nevada , the Pacific Islands , and Tribal Nations. The site's official EPA # is NNN000906087 and it is located in Congressional District 4. According to the EPA's Superfund site overview, other names for the AUMs may include "Navajo Abandoned Uranium Mines" or "Northeast Church Rock Mine." Church Rock Mine is one of the EPA's most successful clean-up sites among over 500 sites spanning the 27,000 square mile Navajo Nation. [ 53 ]
Nearly four years after the initial Congressional hearing, the EPA announced its first helicopter survey for the AUMs in September 1997. Conducted in the Oljato area of southeastern Utah near the Utah-Arizona border, this was the first of several helicopter surveys that aimed to measure "naturally occurring radiation ( gamma radiation ) coming from abandoned uranium mining areas." The stated purpose of these surveys was to "determine if these sites pose a risk to the people in the area and if so, what measures should be taken to minimize that risk." [ 54 ]
Over ten years later, on June 9, 2008, the EPA announced its five-year plan for the clean-up of uranium contamination on the Navajo Nation. [ 55 ] This five-year plan contained nine specific objectives for 2008–2012: assess up to 500 contaminated structures and remediate those that pose a health risk; assess up to 70 potentially contaminated water sources and assist those affected by it; assess and require cleanup of AUMs via a tiered ranking system of high priority mines; clean Church Rock Mine, the highest-priority mine; remediate groundwater of abandoned uranium milling sites; assess the Highway 160 site; assess and clean Tuba City Dump; assess and treat health conditions for populations near AUMs; and lastly to summarize the action of the Nuclear Regulatory Commission (NRC) in its assistance to the Navajo Nation's cleanup efforts. Since the introduction of the five-year plan, the EPA has released a progress report (available online) each consecutive year. As of August 2011, the EPA lists its accomplishments as: screening 683 structures, sampling 250 unregulated water sources and shutting down 3 such contaminated sources, provision of public outreach and educational programs for safe water practices, instituting a 2.6 million dollar water hauling feasibility project, and providing up to 386 homes with clean drinking water through a 20 million dollar project with Indian Health Services. For 2012, the EPA has listed its next steps as replacing 6 contaminated structures, demolishing other contaminated structures, and continuing screening of these structures for referral to the EPA's Response Program. The 2011 progress report also lists Church Rock, the Oljato Mesa, and the Mariano Lake Mine as sites of current or proposed remediation. [ 56 ]
According to the EPA's website, the AUM Superfund site is not on the National Priorities List (NPL) and has no proposals to be put on this list. The NPL is the list of hazardous Superfund sites that are deemed eligible for long-term environmental remediation. The EPA suggests that although NPL listing is a possibility, it is "not likely" for the abandoned uranium mines on the Navajo Nation. NPL status guides the EPA in its decisions on sites to further investigate, [ 57 ] a process that has been criticized for the handling of these mines. With over 500 uranium sites and only a few slated for full-scale remediation plans, the prioritization process has been called into question by The New York Times (see Recent Press).
Superfund works with many agencies from both the federal government and the Navajo Nation in order to properly assess and direct funding to mining sites. These agencies include: the Navajo Nation Environmental Protection Agency (NNEPA), the Indian Health Services (IHS), the Diné Network for Environmental Health (DiNEH), the Navajo Nation Department of Water Resources (NNDWR), the Department of Energy (DOE), and US Nuclear Regulatory Commission (NRC).
The NNEPA was established in 1972 and officially recognized through legislation as a separate regulatory branch of the Navajo Nation in 1995. With the official acceptance of the NNEPA also came the adoption of the Navajo Nation Environmental Policy Act. According to the NNEPA website, its mission is: "With respect to Diné values, to protect human health, land, air and water by developing, implementing and enforcing environmental laws and regulations with a commitment to public participation, sustainability, partnership, and restoration." [ 58 ] (Diné is the word for Navajo in the traditional Navajo language.) NNEPA consults with the US EPA on site assessments (the US EPA is the lead agency for the Site Assessment Project). NNEPA helps the EPA assess and decide which contaminated structures should be demolished and which water sources should be deemed a human health risk. The two also collaborate on community outreach for the Navajo people whose lives are affected by the uranium mining. The Centers for Disease Control and Prevention and the DiNEH Project are also integral players in the assessment of water quality and community outreach. The Navajo Nation Department of Water Resources, with funding from the EPA, assists Navajo residents by hauling water for residents near 4 contaminated water sources, a 2.6 million dollar project. Indian Health Services helped fund the 20 million dollar drinking water project started in 2011; this project serves 386 homes near 10 contaminated water sources. The NNEPA, IHS, NNDWR, and the DiNEH Project have been the main partners of the US EPA in water hauling projects.
Despite the EPA's claims of a "strong partnership with the Navajo Nation," recent articles have been published that call into question the equitability and efficiency of the EPA's action on the abandoned uranium mines. On March 31, 2012, The New York Times published an article entitled "Uranium Mines Dot Navajo Land, Neglected and Still Perilous" [ 59 ] by Leslie MacMillan. The article suggests that politics and money are influencing the prioritization of mine clean-up efforts. David Shafer, an environmental manager at the United States Department of Energy, has said that questions of whether current uranium problems are due to past mining or to the naturally occurring mineral are delaying the process of cleaning up. Similar concerns are common in environmental remediation projects for victims of industrial pollution.
While the EPA does prioritize mines that are nearest to people's homes, MacMillan highlights some remote locations where people do live and yet have been neglected by the EPA. Cameron, Arizona is one such site which has a population of nearly 1000. Rancher Larry Gordy stumbled across an abandoned uranium mine on his grazing land for his cattle near Cameron in the summer of 2010. There are still no warning signs in the town of Cameron to alert people of potential contamination. On December 30, 2010 Scientific American published an article entitled "Abandoned Uranium Mines: An 'Overwhelming Problem' in the Navajo Nation" [ 60 ] by Francie Diep. Diep told Gordy's story and reported that the EPA assessed his site on November 9, 2010. Diep suggested that this date was moved up due to publicity of Gordy's story; originally the EPA had promised to visit within six months of his original discovery of the uranium mine.
Similar allegations of prioritization due to negative publicity for the EPA were made regarding the Skyline Mine on the Oljato Mesa. Elsie Begay, a 71-year-old Navajo woman from the Oljato region, was the topic of a series of articles in The Los Angeles Times in 2006. [ 61 ] These articles were written by Judy Pasternak, whose work on them led to her book Yellow Dirt: An American Story of a Poisoned Land and a People Betrayed (2010). One EPA representative, Jason Musante, stated this publicity "might have bumped the site up the priority list."
Over a year after Gordy stumbled across the mine in his cattle's grazing land, MacMillan reported that the site at Cameron had yet to be given a priority by the EPA. When EPA officials were asked to accompany a reporter to the Cameron site, the officials declined and instead offered to visit the newly cleaned site in Oljato. MacMillan spoke with a Navajo hotel manager near the Skyline Mine who expressed hesitation about the EPA's remediation, stating, "That's what they want you to see: something that's all nice and cleaned up." MacMillan drew attention to the fact that cows are grazing on contaminated land and people are eating these cattle. Taylor McKinnon, a director at the Center for Biological Diversity, went so far as to say the site was the "worst he had seen in the Southwest." Although the locally grown beef is tested, standard tests for meat do not include checking for radioactive substances like uranium. The EPA has put an emphasis on health effects throughout its five-year plan, so the lack of attention to this matter has raised eyebrows.
In addition to the questioning of political bias in the prioritization of mining sites, there is criticism of the EPA's decision to revisit a 1989 permit proposing to mine for uranium near Church Rock. New Mexico 's KUNM radio station reported on May 9, 2012 that Uranium Resources Incorporated has expressed interest in starting production near Church Rock by the end of 2013. [ 62 ] An online petition has already gained nearly 10,000 signatures against this new mining initiative.
In the 1960s, uranium miners began to become ill with cancer at increasing rates. [ 23 ] The state of Utah did not recognize radiation exposure at the time as a category of illness, making workers' compensation unattainable for many of the sick Navajo (Dawson and Madsen 2007). Private industry's treatment of the Navajo workers was poor by recent standards: companies failed to educate workers on precautionary measures, did not install sufficient engineering controls such as adequate ventilation, and did not provide sufficient safety equipment to protect workers from the known dangers of the mines. [ 63 ] The Navajo were never told of the radiation effects, and did not have a word for radiation in their language. Many Navajo did not speak English and trusted the uranium companies to have their interests in mind. [ 63 ] Navajo workers and residents have felt betrayed as the results of the studies became known, and by the long delays by companies and the US government in preventing the damage and paying compensation. [ 63 ] Lung cancer became so prevalent among the Navajo people that uranium mining was banned on Navajo lands in 2005. [ 25 ]
Following the Gold King Mine spill in 2015, farmers lost 75% of their crops due to the lack of clean water. [ 64 ] The EPA provided the Navajo with water, but it was contaminated with oil, poisoning the land and killing livestock. [ 65 ] Duane Yazzie, a Navajo tribe member, spoke about the spiritual and cultural importance of agriculture in Navajo culture and how both the oil and uranium contamination infringed upon their ability to practice their culture. [ 64 ] In the case of environmental hazards such as the Gold King Mine spill, the EPA offers Standard Form 95, with which claims of economic damages, unemployment, loss of income, or damage to property resulting from an environmental incident can be filed. [ 66 ] Standard Form 95 is also a form of environmental racism, according to Jade Begay, director of policy and advocacy for the Indigenous-led organization NDN Collective, who explains: "The President of the Navajo Nation, Russell Begaye, has announced that he intends to take legal action against the EPA, which has taken full responsibility for this spill. Mr. Begaye has also warned Diné people NOT to use or sign Form 95 for Damage, Injury or Death as a result of Gold King Mine Release" (Begay, "Tó Éí Ííńá (Water Is Life): The Impact of the Gold Mine Spill on the Navajo Nation"). [ 67 ] Ethel Branch, the Navajo Nation attorney general, also said this form contained backhanded, offensive language that would diminish one's ability to get full financial compensation and restrict one's ability to file additional, future claims. [ 65 ]
Working conditions also differed by race: Navajo workers were forced to enter the mine directly after a detonation, while it was still filled with dust and smoke, whereas white workers were able to stay behind. [ 63 ] Navajo miners were paid less than miners from off-reservation, well below minimum wage. [ 68 ] [ 69 ] Until radon exposure safety standards were imposed by Secretary of Labor Willard Wirtz in June 1967, over the objections of the Atomic Energy Commission and the uranium mining industry, [ 70 ] [ 71 ] mines lacked ventilation, exposing workers to radon .
Widows of mine workers met to discuss their grief; they started a grassroots movement that eventually reached the Congressional floor. [ 23 ]
The Church Rock uranium mill spill raised claims that race was a factor in the federal government's paying little attention to the disaster:
When there was a relatively minor problem at Three Mile Island in Pennsylvania, the entire attention of the Nation was focused on this location and the Federal and State assistance brought to bear to deal with it was extraordinary. When the largest release of radioactive material in the history of the United States occurs in Navajo country, however, the attention paid to it by the Federal and State authorities is minimal at best. [ 72 ]
Implicit racism affects not only the Navajo but Indigenous people generally. Crystal Echo Hawk, an Indigenous leader and dual citizen of the U.S. and the Pawnee Nation of Oklahoma, states that "Being inclusive of Native Americans in philanthropy does more than address injustice; it also recognizes that Native Americans and tribes are an equally important part of American society as other groups and can be partners to achieve social change across a range of communities and sectors" (Hawk, "Implicit Bias and Native Americans: Philanthropy's Hidden Minority"). [1]
Forgotten People [ 73 ] (FP) is a grassroots organization incorporated on the Navajo Nation which represents the health and well-being of the residents of the Navajo Nation in Arizona. The full name of the organization is Forgotten People Diné Bé Iina' na' hil naa, meaning Diné Rebuilding Communities. Forgotten People began as a political organization dedicated to advocacy for the Navajo people against forced relocation plans, which spanned 1974 to 2007. When the forced relocation programs ended in 2007, the organization shifted focus to a broader variety of issues, with an emphasis on environmental remediation. In 2009, Forgotten People received the Environmental Excellence Award from the NNEPA. Forgotten People was integral to the Black Falls water project, a collaboration with the US EPA to provide clean drinking water and educational outreach for the Black Falls community, which was affected by uranium mining. FP attributes the success of Black Falls to the community's evolution "from a needs-based or dependency approach to the agencies into an assumption of full responsibility for their own development." The Black Falls community was able to decide upon its own solutions to its water problems, with efforts coordinated by FP and funded by the US EPA. Forgotten People represents an evolving grassroots community, moving from simply organizing to actually empowering residents to take their development into their own hands. [ 74 ]
Forgotten People also gathers and displays pertinent public records for a variety of issues facing the Navajo on its website. For its campaigns against uranium mining, the website displays official responses to US attempts at relaxing uranium restrictions on Navajo territory. FP also preserves the response of the President of the Navajo Nation to proposals for uranium mining near the Grand Canyon . In 2005, the President of the Navajo Nation, Joe Shirley, Jr. , signed the Diné Natural Resources Protection Act, which banned uranium mining and processing on Navajo land. After signing the law, President Shirley stated, "As long as there are no answers to cancer, we shouldn't have uranium mining on the Navajo Nation. I believe the powers that be committed genocide on Navajo land by allowing uranium mining." [ 75 ] [ 76 ]
Diné Citizens Against Ruining our Environment ( Diné CARE ), established in 1988 as a grassroots organization, aims to give citizens of the Navajo Nation a voice to protect their environment, culture, and community. The organization's expansion over the years has allowed individuals within the Navajo community to share their experiences and build a network of people dedicated to the preservation of Navajo land and resources. Membership is free and involves being an active advocate for the community the member lives in. Projects and campaigns that Diné CARE works on are financed by grant money. [ 77 ] One of the projects Diné CARE works on is the Navajo Radiation Victims Project, which helps regions impacted by nuclear waste from uranium mining by visiting communities and gathering first-hand accounts from victims. Earl Tulley, now Vice President of Diné CARE, believes the project helps all victims of uranium radiation exposure, native or non-native, get the compensation and help that they need. [ 77 ] The organization fights to clean up impacted areas and prevent any future mining on Navajo land. The project's most notable success was the amendment of the Radiation Exposure Compensation Act (RECA) in 2000. [ 78 ] Diné CARE helped create the Western States RECA Reform Coalition to expand the scope of compensation for victims, not only by extending the geographic regions and time periods covered, but also by adding two new classes of occupational claimants and compensable diseases. [ 79 ]
Many residents of the Navajo Nation have anxiety and concerns about the future because large amounts of radioactive waste remain. One Navajo elder explains: "We, the elderlies, that resides around here don't know what was good and worst about the uranium. There were several deaths in this area that was affected by radiation or cancers. We need help. I lost my wife last year [to cancer] and now I am 87 years. My wife would have been 70 years old which made a lot of difference. I am lonely and can't get anywhere without her help. I was hurted and miserable." [ 63 ] The number of cancer cases has continued to rise under these conditions, as water, air, and ground have generally been affected. In areas near uranium mills, residents suffer stomach cancer at rates 15 times the national level; in some areas, the frequency is as high as 200 times the national average. [ 26 ] Hundreds of abandoned uranium mines with exposed tailings remain unremediated in the Navajo Nation area, posing a contamination hazard. [ 80 ] Near the former uranium mills, contamination of water and of the rocks many residents used to build their houses continues to be a problem. [ 81 ]
A 1995 report published by the American Public Health Association found "excess mortality rates for lung cancer, pneumoconioses and other respiratory diseases, and tuberculosis for Navajo uranium miners. Increasing duration of exposure to underground uranium mining was associated with increased mortality risk for all three diseases… The most important long-term mortality risks for the Navajo uranium miners continue to be lung cancer and pneumoconioses and other nonmalignant respiratory diseases." Notably, the report did not list stomach cancer, which the Navajo people experience at a higher rate than the national US average. [ 36 ] The descendants of mining families continue to have extremely high rates of ovarian and testicular cancer. [ 82 ]
The enduring effects of uranium mining continue to contaminate the soil and endanger the survival of wild plants. Livestock, meanwhile, depend on clean food and water sources that are slowly being lost and may not recover, casting uncertainty on the continuity of the Navajo's pastoral lifestyle. [ 83 ]
Scientific consensus has not been reached about the gravity of the public health threat caused by uranium contamination of groundwater on the Navajo Nation. [ 84 ] However, uranium is present in a substantial portion of the unregulated groundwater sources used for human consumption. [ 85 ] [ 86 ] The lack of consensus on the risk that this poses to the Navajo may indicate a research deficit that is also seen in other Native American communities. [ 84 ] A significant connection has nevertheless been established between proximity to an abandoned uranium mine and the presence of uranium (and arsenic) in groundwater wells, whether the elements occur naturally or result from the mining process. [ 85 ] Studies have shown an autoimmune response to uranium mine waste in some Navajos that may indicate a risk for people with autoimmune diseases, which are more prevalent in Native Americans. [ 84 ] Chronic lack of access to regulated water sources means that many Navajo people may be drinking uranium- and arsenic-contaminated water.
Since 1994, the Environmental Protection Agency (EPA), along with the Navajo Nation Environmental Protection Agency, has been mapping areas affected by radioactivity. In 2007, they compiled an atlas of the abandoned uranium mills in order to rid the area of nuclear waste . [ 87 ] In 2008, the EPA implemented a five-year cleanup plan, focusing on the most pressing issues: contaminated water and structures. The EPA estimates that 30% of all Navajo people lack access to uncontaminated drinking water. [ 87 ]
The EPA is targeting 500 abandoned uranium mills as another part of its five-year cleanup plan, with the goal of ridding the area of nuclear waste. [ 87 ] Its priority was identification of contaminated water sources and structures; many of the latter have been destroyed and removed. In 2011, it completed a multi-year project of removing 20,000 cubic yards of contaminated earth from the reservation, near the Skyline Mine, to controlled storage on the plateau. [ 88 ]
In 2017, a $600 million settlement was reached to clean up 94 abandoned uranium mines. [ 89 ]
The EPA and NNEPA prioritized 46 mines (called priority mines) based on gamma radiation levels, proximity to homes, and potential for water contamination, as identified in preliminary assessments documented in the EPA Site Screen Reports. Detailed cleanup investigations were to be conducted at these mines by the end of 2019. [ 90 ]
All 46 priority mines are in the assessment phase, which includes biological and cultural surveys, radiation scanning, and soil and water sampling. These assessments help to determine the extent of contamination. The assessment work at the 46 priority mines will be documented in Removal Site Evaluation reports, to be completed by the end of 2019 and shared with the affected communities. [ 91 ]
The federal government has sought proposals from businesses to clean up abandoned uranium mines on the Navajo Nation, with $220 million available to small businesses. The funding comes from a $1.7 billion settlement with Tronox, the successor of Kerr-McGee, a company that mined the region. During the Cold War, companies extracted nearly 30 million tons of uranium from Navajo land. The EPA says it has funding to assess and clean up 220 of the 520 abandoned mines. The Request for Proposal was posted at www.fedconnect.net in the "Public Opportunities" section under Reference Number 68HE0918R0014, with contract proposals accepted through May 28, 2019. [ 92 ]
Residents of the Red Water Pond Road area have requested relocation to a new, off-grid village to be located on Standing Black Tree Mesa while cleanup progresses on the Northeast Church Rock Mine Superfund site , as an alternative to the EPA-proposed relocation of residents to Gallup . [ 93 ]
While many mines currently remain closed, renewed interest in nuclear energy may lead to their reopening. One mine that has already been reopened is the Pinyon Plain Mine, which sits near the Baaj Nwaavjo I'tah Kukveni – Ancestral Footprints of the Grand Canyon National Monument, where the Havasupai people come from. The mine was reopened in 2022. According to Keeler, "when the mine leaks or introduces radionuclides into land and water through normal mining processes, as other nearby mines like the Pinenut Mine and Orphan mine have done, it will contaminate the Havasupai's water source, the Redwall-Muav aquifer, which the Havasupai have a responsibility to protect" (Keeler, "Nuclear Injustice: Why a Nuclear Renaissance is the Same Old Colonial Story"). The opening of uranium mines has significant effects on Indigenous communities and their ways of life. The Inflation Reduction Act also directs $30 billion toward nuclear power, which encourages the opening of new mines and the reopening of old ones, many of which are located on Native land. [ 94 ]
In the 2020s, reggae rock band Tha 'Yoties (led by Hopi / Tewa edutainer Ed Kabotie) performed and released music about uranium mining on Navajo lands during their national tours. [ 95 ]
|
https://en.wikipedia.org/wiki/Uranium_mining_and_the_Navajo_people
|
Uranium mining in the Elliot Lake area (prior to 1955, more commonly known as the Blind River area ) represents one of two major uranium-producing areas in Ontario , [ 1 ] and one of seven in Canada. [ 2 ]
In the mid-1950s, the influx of people to Elliot Lake seeking uranium was described by engineer A. S. Bayne in a 1977 report as the "greatest uranium prospecting rush in the world". [ 3 ]
Mining activities peaked around 1959 and 1960 to respond to US military demand for uranium during the Cold War .
By 1958, Canada had become one of the world's leading producers of uranium and the $274 million of uranium exports that year represented Canada's most significant mineral export. [ 4 ] : 1 By 1963, the federal government had purchased more than $1.5 billion of uranium from Canadian producers for export. [ 4 ] The opening of the mines and the workers they attracted led to the creation of the planned town of Elliot Lake.
US demand slumped in the early 1960s, but the increasing use of nuclear power for electricity-generation, in Canada and abroad, prompted some mines back into action.
Production declined until it ceased in the 1990s. The Elliot Lake area now has ten decommissioned mines and 102 million tons of uranium tailings. Former miners have been left with a twofold increase in lung cancer incidence and mortality. [ 5 ] : iii
The 200 square mile area north of Lake Huron that was Canada's largest uranium producing area has been referred to by various names as time passed, specifically Algoma , Blind River and Elliot Lake . [ 4 ]
Algoma is the name of a wider district that includes this area. Blind River was initially the nearest human settlement, located 12 miles west of the nearest mine, until Elliot Lake was created, which is close to most of the mines. [ 4 ]
The only road access to the town of Elliot Lake is via Ontario Highway 108 . [ 6 ]
Towards the end of the Wisconsin glaciation period, ice flowed approximately south (predominantly at 190°) across the area now known as Elliot Lake. Geologists believe that as the ice sheet retreated back north, it left a large proglacial lake just north of Elliot Lake, probably as part of the main Lake Algonquin . Today's features were created from sediments that sank while the area lay beneath the 335 m deep lake. As the ice retreated, about 10,800 years ago, the ice holding back the lake melted, causing the sand and gravel sediments to spill into the valleys. [ 6 ]
Microscopic grains of uranium occur in ores of uraninite , brannerite and monazite amongst pyritic sheets of quartz-pebble rock. [ 6 ]
The area is the traditional territory of the Serpent River First Nation and also part of the Huron Robinson Treaty land. [ 7 ]
In 2021, the Serpent River nation representatives described community consultation about mining activities as "minor." [ 8 ]
Known at the time as the Blind River area, the Elliot Lake area is situated between the Sudbury nickel mining area and the abandoned Bruce Mines , and was prospected for gold and copper during the 19th century. [ 4 ]
Uranium was first discovered in Canada by John Lawrence LeConte in 1847, who named the new mineral coracite . [ 4 ] The exact location of his discovery was unclear, but was understood to be approximately 70 miles north of Sault Ste. Marie on the shore of Lake Superior . [ 4 ] The lack of an exact location and the absence of radioactivity detectors meant that surveyors and prospectors failed to repeat his find. [ 4 ]
In 1948, Karl Gunterman, financed by Aime Breton, used a Geiger counter to discover radioactive conglomerate near Lauzon Lake in Long Township, Ontario. [ 4 ] [ 9 ] Their discovery was investigated by geologist Franc R. Joubin , who in 1952 found a uranium deposit in Spragge . [ 9 ] [ 10 ]
In 1953, Joubin persuaded Joseph H. Hirshhorn to finance exploratory drilling, and Hirshhorn signed a contract with Eldorado Mining and Refining Ltd , the Canadian Crown Corporation that bought all uranium in Canada; together they quickly started the Pronto Mine . [ 10 ] [ 9 ] News of the mine and the 1,400 stakes claimed by Joubin and Hirshhorn resulted in a rush of prospectors to the area, who filed 8,000 claims that summer. [ 10 ] The uptick in uranium staking was known as the Backdoor Staking Bee. [ 7 ] Mapping by W. H. Collins of the Geological Survey of Canada led to the discovery of more uranium around Quirke Lake and Elliot Lake (the lake proper, not the town of the same name). [ 9 ] By 1958, Eldorado Mining and Refining Ltd estimated that the area had 320 million tons of uranium ore, with on average 2.38 pounds of uranium oxide per ton. [ 10 ]
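For context, reserve grades quoted in pounds of uranium oxide per short ton convert directly to mass percentages:

$$\frac{2.38\ \text{lb U}_3\text{O}_8}{2000\ \text{lb ore}} \times 100\% \approx 0.12\%,$$

which is consistent with the 0.1% to 0.2% ore grades reported for the mined conglomerate below.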
Throughout the 1950s, the majority of the world's uranium [ 7 ] came from Elliot Lake, which became known as the "Uranium Capital of the World". [ 9 ]
1957 saw bustling activity as contractors blasted paths through rock to make roads, sank shafts, and built uranium processing mills. According to the University of Waterloo 's Earth Sciences Museum, "Never before in the history of Canada has so much money been spent so quickly in one place." [ 9 ]
Throughout the 1950s, the people of the Anishinaabek First Nation of the Serpent River were systematically excluded from all decisions about resource extraction in their area. [ 11 ]
1958 was the first full year of mining production and saw $200 million of uranium sales, making uranium Canada's number one metal export [ 9 ] and Elliot Lake Canada's largest producer. [ 6 ]
From 1959 to 1960, the organized town of Elliot Lake was created and other mines were constructed to meet the growing US demand for uranium. [ 9 ]
In November 1959, the US announced its plans to stop stockpiling uranium [ 9 ] and to cease procurement after 1962, [ 7 ] resulting in the closure of five mines in 1960. [ 9 ]
However, by 1966 the global demand for uranium for energy purposes prompted increased production in the area; by 1970 the area had produced $1.3 billion of uranium oxide. [ 9 ] Mining companies funded the creation of a Nuclear Museum. [ 9 ]
The mines all started producing between 1955 and 1958, [ 2 ] supplying US military needs. [ 12 ]
When the United States Atomic Energy Commission declared in 1959 that it would no longer stockpile uranium, and not renew procurement contracts beyond 1963, seven of the remaining nine mines closed. [ 12 ] The other two mines, Denison and Nordic , remained open to supply Canadian federal uranium stockpiling needs while Pronto switched activities to supporting the nearby Pater copper mine. [ 12 ] At the same time, Rio Algom Limited was created and became the owner of the seven closed mines, plus the Nordic and Pronto mines. [ 12 ]
The mine closures resulted in the population of the town of Elliot Lake dropping from about 24,877 [ 10 ] to 6,000 residents, which had an immediate negative impact on the local economy. [ 7 ]
Rio Algom later became a subsidiary of BHP . [ 12 ]
In early 1972, Australia, France, South Africa, and Rio Tinto Zinc formed a cartel to control the supply and pricing of uranium, using price fixing and bid rigging. [ 13 ] This continued until the cartel was exposed by Friends of the Earth Australia in 1976. [ 13 ]
The growing demand for uranium for the nuclear power stations being built in the 1970s prompted Rio Algom to increase production at Quirke Mine and reopen Panel Mine in 1979 and later Stanleigh Mine (1983). [ 12 ]
Decommissioning started in 1992 [ 2 ] and concluded in 2001 when vegetation was added to Pronto Mine. [ 14 ] All mines are now fully decommissioned, meaning that mine openings are closed up, all buildings are removed, and the sites have been revegetated . [ 2 ]
Ontario Hydro cancelled its contract to buy uranium from Rio Algom in 1990 and from Denison Mines in 1992, although Stanleigh Mine continued production until June 1996. [ 12 ]
Currently, Rio Algom owns nine of the mines (Stanleigh, Quirke, Panel, Spanish, American, Milliken, Lacnor, Buckles and Pronto) and Denison Mines owns the others. [ 14 ] [ 2 ]
As of 1980, Elliot Lake supplied 90% of the uranium used in Ontario. [ 15 ]
Mined ore consisted of pyritized quartz conglomerate with 0.1% to 0.2% uranium. [ 12 ] The ore was acid leached to extract the uranium [ 12 ] using sulphuric acid. [ 8 ]
Tailings were neutralized before being deposited; however, exposed tailings released acid and radium-226 before barium chloride and lime treatment was started in the 1970s. [ 12 ]
Buckles mine is located on the south of the Quirke Lake syncline , close to the Nordic Mine . [ 4 ] In 1955, Spanish American Mines Limited bought the mine from the original owner of the claim, Buckles Algoma Uranium Mines Limited. [ 4 ]
The uranium ore was reported to be 486,500 tons, at 0.124% U 3 O 8 , located in a ten-feet-thick zone, 75 feet below the surface. [ 4 ]
From 1958 onwards, ore from the mine was transported to the Spanish American mine, where it was treated at a rate of approximately 500 tons per day. [ 4 ] The mine closed in 1958 after all the ore had been extracted. [ 10 ]
Twelve Mt of tailings remain in the tailings management area shared with Nordic Mine , under vegetative cover. [ 12 ]
Can-Met's location was first staked by Carl Mattaini, who sold it to Can-Met Explorations Limited. [ 4 ] A 1958 report indicated 8,362,069 tons of ore, which included 6,642,380 tons of uranium ore, with a partly proven average uranium grade of 1.832 pounds of uranium oxide per ton, after dilution. [ 4 ] The mine is located on the south shore of Quirke Lake , 15 miles from Elliot Lake. [ 4 ]
The mine had two shafts, sunk to 2,127 and 2,395 feet. A processing plant capable of handling 3,000 tons of ore per day was built in October 1957. [ 4 ] Tailings were deposited in the natural basin south of the mill. [ 16 ]
Denison Mine (also known as Consolidated Denison Mine) is located 10 miles north of Elliot Lake. It is just south of the Quirke Mine, just west of the Panel and Can-Met Mines, and just north of the Spanish American and Stanrock Mines. [ 4 ] Following the successful staking of the Pronto Mine property, mining claims were staked in the summer of 1953 by F. H. Jowsey, A. W. Stollerty and Associates. These stakes were purchased by Consolidated Denison Mines Limited in 1954. Denison undertook geological surveys and diamond drilling. [ 4 ]
The mine started in September 1957, with a mill on site able to process 6,000 tons per day. Average production was 2,676 tons per day, and the ore milled had an average of 2.63 pounds of uranium oxide per ton. 1957 estimates of ore reserves were 136,787,400 tons, above another zone 100 feet lower.
63 million tons [ 14 ] of tailings were deposited in Williams Lake, Bear Cub Lake, and Long Lakes. [ 16 ] The mine was decommissioned by Denison Mines in 1997. [ 14 ] [ 17 ]
Lacnor Mine (also known as Lake Nordic Mine) is located on the south limb of the Quirke Lake syncline , four miles from Elliot Lake. [ 4 ] It is located just north of Nordic Mine , and just east of Miliken Mine and just south of Stanleigh Mine . [ 4 ] It was purchased by Northspan Uranium Mines, a subsidiary of Rio Tinto . [ 4 ]
Diamond drilling, which started in 1954, found ore. Two shafts were sunk and a processing plant with a 3,800-tons-per-day capacity was constructed. [ 4 ] 1957 reports indicated an ore reserve of 8,289,207 tons with an average grade of 0.101% uranium oxide. [ 4 ] Tailings were deposited in the valley east of the mill. [ 16 ]
The mine closed in 1960 and was decommissioned from 1997 to 2000. [ 12 ] 2.7 Mt of tailings remain on site. [ 12 ]
Milliken Lake Mine is located approximately one mile from Elliot Lake. [ 4 ] The site is bounded on the west and south by Nordic Mine, on the north by Stanleigh Mine, and on the east by Lake Nordic Mine. [ 4 ] The property was first staked in 1953 and purchased by Milliken Lake Uranium Mines Ltd in 1954, before being sold to Rio Tinto in 1956. [ 4 ]
Production started in 1958; a 3,000-ton-per-day ore processing mill was constructed on site. [ 4 ] A 1957 report indicated 7,269,846 tons of ore on site with an average grade of 0.098% uranium oxide, with possibly an extra 14 to 18 million tons more. [ 4 ]
Tailings were deposited in Crotch Lake [ 16 ] and Sherriff Creek. [ 18 ] The mine closed in 1964 [ 10 ] and was decommissioned from 1997 to 2000. [ 12 ] 0.08 Mt of tailings remains on site underwater. [ 12 ]
Nordic Mine is located 3 miles east of Elliot Lake and is bounded by the Quirke Mine to the north. [ 4 ] It was first staked in 1953 by prospectors working for two companies: Technical Mine Consultants and Preston East Dome. Once uranium was discovered, the Algom Uranium Mines Company was formed, which had control over the Nordic Mine and Quirke Mine properties. [ 4 ]
A mining shaft was sunk in 1955 and production started in January 1957. A processing plant with a 3,000-tons-per-day capacity was built on site. [ 4 ] The mine was bought by Rio Tinto. [ 4 ] 1958 estimates of ore reserves on site were 11,258,000 tons with an average uranium oxide grade of 2.65 pounds per ton. [ 4 ]
Tailings were deposited in the swamp and in the valley north of the mill, [ 16 ] where they remain with the tailings from Buckles Mine , covering 115.6 hectares. [ 12 ] The mine closed in 1968. [ 12 ]
The Panel Mine is located 13 miles north of Elliot Lake, on the north limb of the Quirke Lake syncline . The site is bordered to the west by the Quirke Mine and Denison Mine and on the south by Can-Met Mine. [ 4 ] The site was staked in 1953 by Emerald Glacier Mines Ltd and purchased by Panel Consolidated Uranium Mines Ltd in 1955, before being sold to Northspan Uranium Mines Limited, a Rio Tinto subsidiary. [ 4 ]
Two shafts were sunk on site to depths of 1,102 and 1,250 feet, and a processing plant with a 3,000-tons-per-day capacity was built on site. Production started in 1958. [ 4 ] A 1956 estimate of ore reserves on site was 6,033,000 tons with an average grade of 2.12 pounds of uranium oxide per ton. [ 4 ]
Tailings were deposited in the nearby swamp and in the south west corner of Strike Lake. [ 16 ]
The mine closed in 1961, but reopened in 1979 and operated until 1990. [ 12 ] It was decommissioned from 1992 until 1996. [ 12 ] 16 Mt of tailings remain on site underwater. [ 12 ] The spillways of the dams that hold back the tailings have been modified since closure. [ 12 ]
Pronto Mine was the original mine in the Elliot Lake/Blind River area. [ 4 ] Pronto Mine is located in Long Township, 11 miles east of Blind River , close to Ontario Highway 17 and the Canadian Pacific Railway . [ 4 ]
Its main shaft was deepened in 1958, and its ore processing plant, originally of 1,250-tons-per-day capacity, was upgraded in 1958 to 1,500 tons per day. [ 4 ] Tailings were deposited in the nearby valley and swamp north of the mill. [ 16 ]
When the demand for uranium subsided, the mine switched to copper production, closing in 1970. [ 12 ] 4.4 Mt of tailings remain on site, covering 44.7 hectares; the tailings have vegetated cover. [ 12 ]
Quirke Mine was owned by Algom Uranium Mines Limited and is located 9 miles north of Elliot Lake, about 2.5 miles west of the northwest edge of Quirke Lake. [ 4 ] The property was first staked in 1953, and trenching and sampling were done the same year. An 864-feet-deep shaft was started in 1954 and finished in 1955, and a processing mill with a 3,000-tons-per-day capacity was built on site. Production started in 1956. [ 4 ]
The company's 1957 annual report indicates 17,942,000 tons of ore reserves, of which 1,409,000 tons had an average grade of 2.31 pounds of uranium oxide per ton. [ 4 ]
Tailings were deposited in Manred Lake, west of the mill. [ 16 ]
The mine closed in 1961, but reopened in 1968 and operated until 1990. [ 12 ] It was decommissioned from 1992 until 1996. [ 12 ] The spillways of the dams that hold back the tailings have been modified since closure. [ 12 ] 46 Mt of tailings remain on site, in tiered underwater cells, covering an area of 183.5 hectares. [ 12 ]
The Spanish American Mine is located 9 miles northeast of Elliot Lake, on the north limb of the Quirke Lake trough. It is bounded on the east by Stanrock Mine and on the north by Denison Mine. [ 4 ] The location was first staked by P. Westerfield, who sold the stake to Spanish American Mines Limited, who subsequently sold it to Northspan Uranium Mines Ltd, a Rio Tinto subsidiary. [ 4 ]
The site had two shafts that are 3,200 and 3,400 feet deep and an ore processing plant with 2,000 tons per day capacity. Production started in May 1958. A 1957 report estimated 6,251,726 tons of ore with an average grade of 0.097% uranium oxide. [ 4 ] Tailings were deposited in Northspan Lake. [ 16 ]
The mine closed in 1959 due to water ingress, after only 79,000 tons of ore were extracted. [ 10 ] It was decommissioned from 1992 to 1996. 0.5 Mt of tailings remain on site underwater, covering 13.2 hectares. [ 12 ]
Stanleigh Mine is located 2 miles northeast of Elliot Lake and was first staked by H. S. Strouth, the chief of mining of Standard Ore and Alloys Corporation, later Stanleigh Uranium Mining Corporation. Ownership was subsequently transferred to Miliken Lake Uranium Mines and Northspan Uranium Mines Limited (who owned Lacnor Mine). [ 4 ]
Two shafts were started in April 1956, reaching depths of 3,415 and 3,690 feet, the deepest of all shafts in the Elliot Lake group of mines. [ 4 ] Tailings were deposited in Crotch Lake. [ 16 ]
The mine closed in 1960, but reopened from 1983 until 1996. [ 12 ] In August 1993, a power failure resulted in a 2-million-liter spill of contaminated water from the mine into McCabe Lake; the Atomic Energy Control Board laid two charges against Rio Algom. [ 19 ] The mine was decommissioned from 1997 until 2000. [ 12 ] 20.5 Mt of tailings remain on site under water, covering an area of 376.5 hectares. [ 12 ]
In 2017, the Canadian Nuclear Safety Commission found owner Rio Algom to be operating the mine "below expectations" due to radium releases from the decommissioned mine's effluent treatment plant that exceeded the allowable limits specified in the operator's licence. [ 14 ]
Stanrock Mine is located 14 miles from Elliot Lake on the south side of Quirke Lake. [ 4 ] The site is adjacent to the Can-Met Mine to the east, the Spanish-American Mine to the west, and Denison Mine to the north. [ 4 ] The site was initially known as the Z-7 group and owned by Zenmac Metal Mines Ltd, who sold it to the US Stancan Uranium Mines Limited in 1954. In 1955 and 1956 the new owners found uranium via diamond drilling and created a processing plant with a 3,300-tons-per-day capacity. 1956 estimates of ore reserves were 5,077,800 tons with a grade of 0.109% uranium oxide, with probably 4 million additional tons unconfirmed. [ 4 ]
Tailings were deposited in the naturally occurring basin south of the mill, along with the tailings of Can-Met mine. [ 16 ] Six million tons of tailings remain on site. [ 14 ] The mine was decommissioned by Denison Mines in 1999. [ 14 ]
The health of the watershed in the area deteriorated as mining started. Trout from nearby lakes released an odour when cooked and female fish stopped releasing eggs. [ 20 ] Fishing remained permitted at both Quirke Lake and Whiskey Lake, despite the radioactivity in them exceeding levels deemed tolerable by the Ontario Waterways Commission. [ 20 ] Terry Jacobs, an elder of the Serpent River First Nation, told Anishinabek News in 2022 that pollution from the mines reduced the number of animals in the area. Other community members reported sulphur fires, dangerous sulphuric dust burning roofs, breathing difficulties, and skin rashes on children who swam in the rivers. [ 20 ] By 1976, 20 years after the start of mining, Health and Welfare Canada advised local residents to stop drinking water from local rivers. [ 20 ] In 1987, band member Gertrude Lewis requested action from the Government of Canada to clean up the pollution, but the request was rejected. [ 20 ]
Just before Canada Day 1988, the Serpent River nation transported waste from the mines to the TransCanada Highway. On July 20, 1988, the Government of Canada agreed to construct a treatment plant. [ 20 ]
The 2022 book Serpent River Resurgence by Lianne C. Leddy documents the impacts of uranium mining on Serpent River First Nation. [ 21 ]
102 million tonnes of tailings remain at eight decommissioned mines, covering an area of 920 hectares. [ 12 ] Rio Algom (a BHP subsidiary) and Denison Mines are both licensed by the Canadian Nuclear Safety Commission to operate the decommissioned mines. [ 22 ]
Results from 2015 and 2018 independent environmental monitoring , commissioned by the Canadian Nuclear Safety Commission, report no expected environmental impacts. [ 22 ] 2021 reports from the Serpent River First Nation describe the environmental damage as ongoing, with members unable to use their land or eat local fish. [ 8 ]
There are twelve decommissioned uranium mines around Elliot Lake, ten of which have tailings on site. [ 2 ]
[Table: the decommissioned mines, listing coordinates (decimal), tailings remaining (megatonnes), tailings area (hectares), and operating periods, including second operating periods of 1979–1990 (Panel), 1968–1990 (Quirke) and 1983–1996 (Stanleigh); *combined total for Nordic and Buckles; **unknown or unclear]
According to a 2012 study published in Nature , there is a "positive exposure-response between silica and lung cancer". [ 24 ]
Uranium mining around Elliot Lake produced silica-laden dust at a free silica rate of 60–70%. [ 25 ] : 36
By the early 1970s, miners were unionized via the United Steelworkers and were growing increasingly concerned about the prevalence of cancers and poor support for sick workers by mine owners. [ 26 ]
In 1974, union representatives learned about a paper presented by the Ontario Ministry of Health that contained details about cancer risks to uranium miners which had not been shared with the miners. [ 26 ]
Approximately 1,000 miners who worked at Denison Mine went on a wildcat strike on 18 April 1974. Ten days later, Denison Mines agreed to improve conditions and the Ontario Premier commissioned James Milton Ham to lead a Royal Commission on the Health and Safety of Workers in Mines. [ 27 ]
The same year, the Ontario Workmen's Compensation Board studied 15,094 people who had worked in the uranium mines around Elliot Lake and Bancroft for at least one month between 1955 and 1974. Among those 15,094 people, 94 cases of silicosis had been found by 1974, of which 93 were attributed to working in an Elliot Lake mine. [ 25 ] : 43, 62, 108 According to the Committee on Uranium Mining in Virginia, mines produce radon gas, which can increase lung cancer risks. [ 28 ] Miners' exposure to radiation was not measured before 1958, and exposure limits were not enacted until 1968. Risks to miners were investigated, and the official report of that investigation quotes an Elliot Lake miner: [ 25 ]
"We have been led to believe through the years that the working environment in these mines was safe for us to work in. We have been deceived." [ 25 ] : 77
The aforementioned 1974 study of 15,094 Ontario uranium miners found 81 former miners who had died of lung cancer. [ 25 ] : 79 Factoring in the predicted lung cancer rate for men in Ontario led to the conclusion that by 1974 there were 36 more deaths than expected, attributable to both the Elliot Lake and Bancroft mines, [ 25 ] : 80 with the additional risk appearing to be twice as high for Bancroft miners as for Elliot Lake miners. [ 25 ] : 348
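Taken together, these figures imply an expected count of about 45 deaths, and hence a standardized mortality ratio of roughly

$$\mathrm{SMR} \approx \frac{81}{81 - 36} = \frac{81}{45} = 1.8,$$

in line with the roughly twofold excess reported in the later cohort studies described below.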
A study report for the CNSC undertaken by the Occupational Cancer Research Centre at Cancer Care Ontario tracked the health of 28,959 former uranium miners over 21 years and found a twofold increase in lung cancer mortality and incidence. [ 29 ] : 35 table 4 The BMJ (journal of the British Medical Association) reported an increased lung cancer risk: miners who worked at least 100 months in uranium mines have a twofold increased risk of developing lung cancer. [ 30 ] The study is to be updated in 2023. [ 31 ]
Between the mines' opening and 1980, there were 77 fatal workplace safety incidents in the Elliot Lake mines. [ 15 ]
|
https://en.wikipedia.org/wiki/Uranium_mining_in_the_Elliot_Lake_area
|
Uranium rhodium germanium ( URhGe ) is the first metal discovered to become superconducting in the presence of an extremely strong magnetic field . Unlike most superconducting materials, which lose their superconductivity in strong magnetic fields, uranium rhodium germanium regains its superconductivity at about 8 teslas .
URhGe's critical temperature ( T c ) is normally about 280 millikelvins .
The Grenoble team in France , headed by Andrew D. Huxley, first cooled the sample below its critical temperature and raised the magnetic field to 2 T. As expected, the sample's superconducting properties vanished. However, when the team raised the magnetic field to 8 T, the superconducting behavior returned, and the critical temperature at that field strength increased to about 400 millikelvins. The sample retained the superconducting state up to 13 T. They also found that at 12 T the URhGe sample undergoes a magnetic phase transition .
|
https://en.wikipedia.org/wiki/Uranium_rhodium_germanium
|
Uranium ruthenium silicide ( URu 2 Si 2 ) is a heavy fermion alloy composed of uranium , ruthenium , and silicon . URu 2 Si 2 has the same '122' tetragonal crystal structure as many other compounds of current condensed matter research. URu 2 Si 2 enters a so-called hidden order (HO) phase below a temperature of 17.5 K . [ 1 ] [ 2 ] Below this temperature it is magnetic, and below about 1.5 K it superconducts. [ 3 ] However, the nature of the ordered phase below 17.5 K is still under debate, despite the wide variety of scenarios that have been proposed to explain it.
|
https://en.wikipedia.org/wiki/Uranium_ruthenium_silicide
|
Uranium tailings or uranium tails are a radioactive waste byproduct ( tailings ) of conventional uranium mining and uranium enrichment . They contain the radioactive decay products from the uranium decay chains , mainly the U-238 chain, and heavy metals. Long-term storage or disposal of tailings may pose a danger for public health and safety.
Uranium mill tailings are primarily the sandy process waste material from a conventional uranium mill. [ 1 ] Milling is the first step in making fuel for nuclear reactors from natural uranium ore. The uranium extract is transformed into yellowcake . [ 2 ]
The raw uranium ore is brought to the surface and crushed into a fine sand. The valuable uranium -bearing minerals are then removed via heap leaching with the use of acids or bases , and the remaining radioactive sludge, called "uranium tailings", is stored in huge impoundments. A short ton (907 kg) of ore yields one to five pounds (0.45 to 2.3 kg) of uranium depending on the uranium content of the mineral. [ 3 ] Uranium tailings can retain up to 85% of the ore's original radioactivity. [ 4 ]
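As a rough illustration of these figures, here is a minimal Python sketch using only the numbers quoted above; the tailings estimate simply subtracts the extracted uranium from the ore mass, ignoring process water and reagents:

```python
# Ore-grade arithmetic for uranium milling: a short ton (2,000 lb) of ore
# yielding 1-5 lb of uranium corresponds to a grade of 0.05-0.25%.
SHORT_TON_LB = 2000.0

def grade_percent(yield_lb_per_ton: float) -> float:
    """Uranium grade in percent by mass for a given yield per short ton of ore."""
    return 100.0 * yield_lb_per_ton / SHORT_TON_LB

for yield_lb in (1.0, 5.0):
    # Nearly all of the ore mass remains behind as tailings.
    tailings_lb = SHORT_TON_LB - yield_lb
    print(f"{yield_lb:.0f} lb/ton -> grade {grade_percent(yield_lb):.2f}%, "
          f"about {tailings_lb:.0f} lb of solids left as tailings")
```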
The tailings contain mainly decay products from the decay chain involving Uranium-238 . [ 1 ] Uranium tailings contain over a dozen radioactive nuclides, which are the primary hazard posed by the tailings. The most important of these are thorium-230 , radium-226 , radon-222 (radon gas) and the daughter isotopes of radon decay, including polonium-210 . All of those are naturally occurring radioactive materials or "NORM".
Tailings contain heavy metals and radioactive radium , which decays over thousands of years, producing radioactive radon gas. Tailings are kept in piles for long-term storage or disposal and need to be maintained and monitored for leaks over the long term. [ 2 ]
If uranium tailings are stored aboveground and allowed to dry out, the radioactive sand can be carried great distances by the wind, entering the food chain and bodies of water. The danger posed by such dispersal is uncertain, given its dilution effect. The majority of the tailings mass is inert rock, just as it was in the raw ore before the extraction of the uranium, but physically altered: ground up, mixed with large amounts of water, and exposed to atmospheric oxygen, which can substantially alter its chemical behaviour.
An EPA estimate of risk based on uranium tailings deposits existing in the United States in 1983 gave the figure of 500 lung cancer deaths per century if no countermeasures are taken. [ 5 ]
|
https://en.wikipedia.org/wiki/Uranium_tailings
|
Uranium tetrafluoride is the inorganic compound with the formula UF 4 . It is a green solid with an insignificant vapor pressure and low solubility in water . Uranium in its tetravalent ( uranous ) state is important in various technological processes. In the uranium refining industry it is known as green salt . [ 1 ]
UF 4 is prepared from UO 2 in a fluidized bed by reaction with hydrogen fluoride . The UO 2 is derived from mining operations. Around 60,000 tonnes are prepared in this way annually. A common impurity is UO 2 F 2 . UF 4 is susceptible to hydrolysis as well. [ 1 ]
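In terms of a balanced equation, this hydrofluorination is:

$$\mathrm{UO_2 + 4\,HF \longrightarrow UF_4 + 2\,H_2O}$$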
UF 4 is formed by the reaction of UF 6 with hydrogen gas in a vertical tube-type reactor.
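The net reduction is:

$$\mathrm{UF_6 + H_2 \longrightarrow UF_4 + 2\,HF}$$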
The bulk density of UF 4 varies from about 2.0 g/cm 3 to about 4.5 g/cm 3 depending on the production process and the properties of the starting uranium compounds.
A molten salt reactor design, a type of nuclear reactor where the working fluid is a molten salt , would use UF 4 as the core material. UF 4 is generally chosen over related compounds because of the usefulness of the elements without isotope separation , better neutron economy and moderating efficiency, lower vapor pressure and better chemical stability.
Uranium tetrafluoride reacts stepwise with fluorine, first to give uranium pentafluoride and then volatile UF 6 :
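$$\mathrm{2\,UF_4 + F_2 \longrightarrow 2\,UF_5}$$

$$\mathrm{2\,UF_5 + F_2 \longrightarrow 2\,UF_6}$$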
UF 4 is reduced by magnesium to give the metal: [ 2 ]
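$$\mathrm{UF_4 + 2\,Mg \longrightarrow U + 2\,MgF_2}$$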
UF 4 reacts slowly with moisture at ambient temperature, forming UO 2 and HF.
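The corresponding equation is:

$$\mathrm{UF_4 + 2\,H_2O \longrightarrow UO_2 + 4\,HF}$$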
Like most binary metal fluorides , UF 4 is a dense highly crosslinked inorganic polymer . As established by X-ray crystallography , the U centres are eight-coordinate with square antiprismatic coordination spheres. The fluoride centres are doubly bridging . [ 2 ] [ 3 ]
Like all uranium salts, UF 4 is toxic and thus harmful by inhalation, ingestion, and through skin contact.
|
https://en.wikipedia.org/wiki/Uranium_tetrafluoride
|
Planetary symbols are used in astrology and traditionally in astronomy to represent a classical planet (which includes the Sun and the Moon) or one of the modern planets. The classical symbols were also used in alchemy for the seven metals known to the ancients , which were associated with the planets , and in calendars for the seven days of the week associated with the seven planets. The original symbols date to Greco-Roman astronomy ; their modern forms developed in the 16th century, and additional symbols would be created later for newly discovered planets.
The seven classical planets, their symbols, days and most commonly associated planetary metals are:
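| Planet | Symbol | Day of the week | Metal |
|---|---|---|---|
| Sun | ☉ | Sunday | gold |
| Moon | ☽ | Monday | silver |
| Mars | ♂ | Tuesday | iron |
| Mercury | ☿ | Wednesday | quicksilver (mercury) |
| Jupiter | ♃ | Thursday | tin |
| Venus | ♀ | Friday | copper |
| Saturn | ♄ | Saturday | lead |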
The International Astronomical Union (IAU) discourages the use of these symbols in modern journal articles, and their style manual proposes one- and two-letter abbreviations for the names of the planets for cases where planetary symbols might be used, such as in the headings of tables. [ 1 ] The modern planets with their traditional symbols and IAU abbreviations are:
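| Planet | Symbol | IAU abbreviation |
|---|---|---|
| Mercury | ☿ | Me |
| Venus | ♀ | V |
| Earth | 🜨 or ♁ | E |
| Mars | ♂ | Ma |
| Jupiter | ♃ | J |
| Saturn | ♄ | S |
| Uranus | ⛢ or ♅ | U |
| Neptune | ♆ | N |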
The symbols of Venus and Mars are also used to represent female and male in biology following a convention introduced by Carl Linnaeus in the 1750s.
The origins of the planetary symbols can be found in the attributes given to classical deities. The Roman planisphere of Bianchini (2nd century, currently in the Louvre , inv. Ma 540) [ 2 ] shows the seven planets represented by portraits of the seven corresponding gods, each a bust with a halo and an iconic object or dress, as follows: Mercury has a caduceus and a winged cap; Venus has a necklace and a shining mirror; Mars has a war-helmet and a spear; Jupiter has a laurel crown and a staff; Saturn has a conical headdress and a scythe; the Sun has rays emanating from his head; and the Moon has a crescent atop her head.
The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Greek papyri. [ 3 ] [ b ]
Early forms are also found in medieval Byzantine codices which preserve horoscopes. [ 4 ]
A diagram in the astronomical compendium by Johannes Kamateros (12th century) closely resembles the 11th-century forms shown above, with the Sun represented by a circle with a single ray, Jupiter by the letter zeta (the initial of Zeus , Jupiter's counterpart in Greek mythology), Mars by a round shield in front of a diagonal spear, and the remaining classical planets by symbols resembling the modern ones, though without the crosses seen in modern versions of Mercury, Venus, Jupiter and Saturn. [ citation needed ] These crosses first appear in the late 15th or early 16th century. According to Maunder, the addition of crosses appears to be "an attempt to give a savour of Christianity to the symbols of the old pagan gods." [ 5 ] The modern forms of the classical planetary symbols are found in a woodcut of the seven planets in a Latin translation of Abu Ma'shar al-Balkhi 's De Magnis Coniunctionibus printed at Venice in 1506, represented as the corresponding gods riding chariots. [ 6 ]
Earth is not one of the classical planets, as "planets" by definition were "wandering stars" as seen from Earth's surface.
Earth's status as a planet is a consequence of the heliocentrism of the 16th century.
Nonetheless, there is a pre-heliocentric symbol for the world, now used as a planetary symbol for the Earth: a circle crossed by two lines, horizontal and vertical, representing the world divided by four rivers into the four quarters of the world (often translated as the four "corners" of the world). A variant, now obsolete, had only the horizontal line. [ 7 ]
A medieval European symbol for the world – the globus cruciger , (the globe surmounted by a Christian cross ) – is also used as a planetary symbol; it resembles an inverted symbol for Venus.
The planetary symbols for Earth are encoded in Unicode at U+1F728 🜨 ALCHEMICAL SYMBOL FOR VERDIGRIS and U+2641 ♁ EARTH .
The crescent shape has been used to represent the Moon since antiquity. In classical antiquity, it is worn by lunar deities ( Selene/Luna , Artemis/Diana , Men , etc.) either on the head or behind the shoulders, with its horns pointing upward.
The representation of the moon as a simple crescent with the horns pointing to the side (as a heraldic crescent increscent or crescent decrescent ) is attested from late Classical times.
The same symbol can be used in a different context, not for the Moon itself but for a lunar phase , as part of a sequence of four symbols for "new moon" (U+1F311 🌑︎), "waxing" (U+263D ☽︎), "full moon" (U+1F315 🌕︎) and "waning" (U+263E ☾︎).
The symbol ☿ for Mercury is a caduceus (a staff intertwined with two serpents), a symbol associated with Mercury / Hermes throughout antiquity. Some time after the 11th century, a cross was added to the bottom of the staff to make it seem more Christian. [ 3 ]
The ☿ symbol has also been used to indicate intersex , transgender , or non-binary gender . [ 8 ] A related usage is for the 'worker' or 'neuter' sex among social insects that is neither male nor (due to its lack of reproductive capacity) fully female, such as worker bees . [ 9 ] It was also once the designated symbol for hermaphroditic or 'perfect' flowers , [ 10 ] but botanists now use ⚥ for these. [ 11 ]
Its Unicode codepoint is U+263F ☿ MERCURY .
The Venus symbol , ♀, consists of a circle with a small cross below it.
It has been interpreted as a depiction of the hand-mirror of the goddess, which may also explain Venus's association with the planetary metal copper, as mirrors in antiquity were made of polished copper, [ 12 ] [ d ] though this is not certain. [ 3 ] In the Greek Oxyrhynchus Papyri 235 , the symbols for Venus and Mercury did not have the cross on the bottom stem, [ 3 ] and Venus appears without the cross (⚲) in Johannes Kamateros (12th century). [ citation needed ]
In botany and biology , the symbol for Venus is used to represent the female sex , alongside the symbol for Mars representing the male sex, [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ] [ e ] Arising from the biological convention, the symbol also came to be used in sociological contexts to represent women or femininity . This gendered association of Venus and Mars has been used to pair them heteronormatively , describing women and men stereotypically as being so different that they can be understood as coming from different planets, an understanding popularized in 1992 by the book titled Men Are from Mars, Women Are from Venus . [ 14 ] [ 15 ]
Unicode encodes the symbol as U+2640 ♀ FEMALE SIGN , in the Miscellaneous Symbols block. [ f ]
The modern astronomical symbol for the Sun, the circumpunct ( U+2609 ☉ SUN ), was first used in the Renaissance . It possibly represents Apollo's golden shield with a boss ; it is unknown if it traces descent from the nearly identical Egyptian hieroglyph for the Sun.
Bianchini's planisphere , produced in the 2nd century, shows a circlet with rays radiating from it. [ 5 ] [ 2 ] In late Classical times, the Sun is attested as a circle with a single ray. A diagram in Johannes Kamateros' 12th century Compendium of Astrology shows the same symbol. [ 18 ] This older symbol is encoded by Unicode as U+1F71A 🜚 ALCHEMICAL SYMBOL FOR GOLD in the Alchemical Symbols block. Both symbols have been used alchemically for gold, as have more elaborate symbols showing a disk with multiple rays or even a face.
The Mars symbol , ♂, is a depiction of a circle with an arrow emerging from it, pointing at an angle to the upper right in Europe and to the upper left in India. [ 19 ] [ 20 ] It is also the old and obsolete symbol for iron in alchemy. In zoology and botany, it is used to represent the male sex (alongside the astrological symbol for Venus representing the female sex), [ 13 ] following a convention introduced by Linnaeus in the 1750s. [ 10 ]
The symbol dates from at latest the 11th century, at which time it was an arrow across or through a circle, thought to represent the shield and spear of the god Mars; in the medieval form, for example in the 12th-century Compendium of Astrology by Johannes Kamateros, the spear is drawn across the shield. [ 18 ] The Greek Oxyrhynchus Papyri show a different symbol, [ 3 ] perhaps simply a spear. [ 2 ]
Its Unicode codepoint is U+2642 ♂ MALE SIGN .
The symbol for Jupiter , ♃, was originally a Greek zeta, Ζ , with a stroke indicating that it is an abbreviation (for Zeus , the Greek equivalent of Roman Jupiter).
Its Unicode codepoint is U+2643 ♃ JUPITER .
Salmasius and earlier attestations show that the symbol for Saturn, ♄, derives from the initial letters ( Kappa , rho ) of its ancient Greek name Κρόνος ( Kronos ), with a stroke to indicate an abbreviation . [ 10 ] By the time of Kamateros (12th century), the symbol had been reduced to a shape similar to a lower-case letter eta η, with the abbreviation stroke surviving (if at all) in the curl on the bottom-right end.
Its Unicode codepoint is U+2644 ♄ SATURN .
The symbols for Uranus were created shortly after its discovery in 1781. One symbol, ⛢, invented by J. G. Köhler and refined by Bode , was intended to represent the newly discovered metal platinum ; since platinum, commonly called white gold, was found by chemists mixed with iron, the symbol for platinum combines the alchemical symbols for iron , ♂, and gold , ☉. [ 21 ] [ 22 ] Gold and iron are the planetary metals for the Sun and Mars, and so share their symbols. Several orientations were suggested, but an upright arrow is now universal.
Another symbol, ♅, was suggested by Lalande in 1784. In a letter to Herschel , Lalande described it as "a globe surmounted by the first letter of your name". [ 23 ] The platinum symbol tends to be used by astronomers, and the monogram by astrologers. [ 24 ]
For use in computer systems, the symbols are encoded U+26E2 ⛢ ASTRONOMICAL SYMBOL FOR URANUS and U+2645 ♅ URANUS .
Several symbols were proposed for Neptune to accompany the suggested names for the planet. Claiming the right to name his discovery, Urbain Le Verrier originally proposed the name of the Roman god Neptune [ 25 ] and the symbol of a trident , [ 26 ] while falsely stating that this had been officially approved by the French Bureau des Longitudes . [ 25 ] In October, he sought to name the planet Leverrier , after himself, and he had loyal support in this from the observatory director, François Arago , [ 27 ] who in turn proposed a new symbol for the planet. [ 28 ] However, this suggestion met with resistance outside France, [ 27 ] and French almanacs quickly reintroduced the name Herschel for Uranus , after that planet's discoverer Sir William Herschel , and Leverrier for the new planet, [ 29 ] though the name Neptune was used by anglophone institutions. [ 30 ] Professor James Pillans of the University of Edinburgh defended the name Janus for the new planet, and proposed a key for its symbol. [ 26 ] Meanwhile, Struve presented the name Neptune on December 29, 1846, to the Saint Petersburg Academy of Sciences . [ 31 ] In August 1847, the Bureau des Longitudes announced its decision to follow prevailing astronomical practice and adopt the choice of Neptune , with Arago refraining from participating in this decision. [ 32 ] The planetary symbol became Neptune's trident , with the handle stylized either as a cross, ♆, following Mercury, Venus, Jupiter, Saturn, and the asteroids, or as an orb, ⯉, following the symbols for Uranus, Earth, and Mars. [ 7 ] The crossed variant is the more common today.
For use in computer systems, the symbols are encoded as U+2646 ♆ NEPTUNE and U+2BC9 ⯉ NEPTUNE FORM TWO .
Pluto was almost universally considered a planet from its discovery in 1930 until its re-classification as a dwarf planet (planetoid) by the IAU in 2006. Planetary geologists [ 33 ] and astrologers continue to treat it as a planet. The original planetary symbol for Pluto was ♇, a monogram of the letters P and L. Astrologers generally use a bident with an orb. NASA has used the bident symbol since Pluto's reclassification. These symbols are encoded as U+2647 ♇ PLUTO and U+2BD3 ⯓ PLUTO FORM TWO .
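Since these symbols are identified above by their Unicode codepoints, emitting them programmatically is straightforward. The following minimal Python sketch uses only codepoints quoted in this article:

```python
# Planetary symbols by Unicode codepoint, as listed in the text above.
PLANET_CODEPOINTS = {
    "Sun": 0x2609,      # circumpunct
    "Mercury": 0x263F,
    "Venus": 0x2640,
    "Earth": 0x2641,    # globus cruciger form; 0x1F728 is the circled-cross form
    "Mars": 0x2642,
    "Jupiter": 0x2643,
    "Saturn": 0x2644,
    "Uranus": 0x26E2,   # astronomical form; astrologers favour 0x2645
    "Neptune": 0x2646,  # crossed trident; 0x2BC9 is the orb-handled variant
    "Pluto": 0x2647,    # PL monogram; 0x2BD3 is the bident form
}

for name, cp in PLANET_CODEPOINTS.items():
    # chr() converts a codepoint to the corresponding character.
    print(f"{name:8} U+{cp:04X} {chr(cp)}")
```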
In the 19th century, planetary symbols for the major asteroids were also in use, including 1 Ceres (a reaper's sickle , encoded U+26B3 ⚳ CERES ), 2 Pallas (a lance, U+26B4 ⚴ PALLAS ) and 3 Juno (a sceptre, encoded U+26B5 ⚵ JUNO ).
Encke (1850) used symbols for 5 Astraea , 6 Hebe , 7 Iris , 8 Flora and 9 Metis in the Berliner Astronomisches Jahrbuch . [ 34 ]
In the late 20th century, astrologers abbreviated the symbol for 4 Vesta (the sacred fire of Vesta , encoded U+26B6 ⚶ VESTA ), [ 35 ] and introduced new symbols for 5 Astraea (a stylised % sign, shift-5 on QWERTY keyboards for asteroid 5), 10 Hygiea (encoded U+2BDA ⯚ HYGIEA ) [ 36 ] and 2060 Chiron , discovered in 1977 (a key, U+26B7 ⚷ CHIRON ). [ 35 ] Chiron's symbol was adapted as additional centaurs were discovered; symbols for 5145 Pholus and 7066 Nessus have been encoded in Unicode. [ 36 ] The abbreviated Vesta symbol is now universal, and the astrological symbol for Pluto has been used astronomically for Pluto as a dwarf planet. [ 37 ]
In the early 21st century, symbols for the trans-Neptunian dwarf planets have been given Unicode codepoints , particularly Eris (the hand of Eris , ⯰, but also ⯱), Sedna , Haumea , Makemake , Gonggong , Quaoar and Orcus . All (except Eris, for which the hand of Eris is a traditional Discordian symbol) were devised by Denis Moskowitz, a software engineer in Massachusetts. [ 37 ] [ 38 ]
Other symbols have also been invented by Moskowitz, for some smaller TNOs as well as many planetary moons. (Charon in particular coincidentally matches a symbol already existing in Unicode as an astrological Pluto.) However, these have not been broadly adopted. [ 37 ] [ 39 ]
From 1845 to 1855, many symbols were created for newly discovered asteroids, but by 1851 the spate of discoveries had already led to a general abandonment of these symbols in favour of numbering all asteroids instead. [ 41 ]
|
https://en.wikipedia.org/wiki/Uranus_symbol
|
Uranyl nitrate is a water-soluble yellow uranium salt with the formula UO 2 (NO 3 ) 2 · n H 2 O . The hexa-, tri-, and dihydrates are known. [ 3 ] The compound is mainly of interest because it is an intermediate in the preparation of nuclear fuels. In the nuclear industry, it is commonly referred to as yellow salt.
Uranyl nitrate can be prepared by reaction of uranium salts with nitric acid . It is soluble in water , ethanol , and acetone . As determined by neutron diffraction , the uranyl center is characteristically linear with short U=O distances. In the equatorial plane of the complex are six U-O bonds to bidentate nitrate and two water ligands. At 245 pm , these U-O bonds are much longer than the U=O bonds of the uranyl center. [ 1 ]
Uranyl nitrate is important for nuclear reprocessing . It is the compound of uranium that results from dissolving decladded spent nuclear fuel rods or yellowcake in nitric acid, prior to further separation and conversion to uranium hexafluoride for isotope separation in the production of enriched uranium . A special feature of uranyl nitrate is its solubility in tributyl phosphate ( PO(OC 4 H 9 ) 3 ), which allows uranium to be extracted from the nitric acid solution. Its high solubility in this solvent is attributed to the formation of the lipophilic adduct UO 2 (NO 3 ) 2 (OP(OBu) 3 ) 2 . [ citation needed ]
During the first half of the 19th century, many photosensitive metal salts had been identified as candidates for photographic processes , among them uranyl nitrate. The prints thus produced were called uranium prints or uranotypes.
The first uranium printing processes were invented by the Scotsman J. Charles Burnett between 1855 and 1857, and used this compound as the sensitive salt. Burnett authored an 1858 article comparing "Printing by the Salts of the Uranic and Ferric Oxides".
The process employs the ability of the uranyl ion to pick up two electrons and be reduced to the lower oxidation state of uranium(IV) under ultraviolet light.
Uranotypes can vary from print to print from a more neutral, brown russet to strong Bartolozzi red, with a very long tone grade. Surviving prints are slightly radioactive , a property which serves as a means of non-destructively identifying them.
Several other more elaborate photographic processes employing the compound appeared and vanished during the second half of the 19th century, with names like Wothlytype, Mercuro-Uranotype and the Auro-Uranium process. Uranium papers were manufactured commercially at least until the end of the 19th century, vanishing due to the superior sensitivity and practical advantages of silver halides . From the 1930s through the 1950s, Kodak books described a uranium toner (Kodak T-9) using uranyl nitrate hexahydrate. [ citation needed ]
Along with uranyl acetate it is used as a negative stain for viruses in electron microscopy ; in tissue samples it stabilizes nucleic acids and cell membranes . [ citation needed ]
Uranyl nitrates are common starting materials for the synthesis of other uranyl compounds because the nitrate ligand is easily replaced by other anions. It reacts with oxalate to give uranyl oxalate . Treatment with hydrochloric acid gives uranyl chloride . [ 4 ]
Uranyl nitrate is an oxidizing and highly toxic compound. When ingested, it causes severe chronic kidney disease and acute tubular necrosis and is a lymphocyte mitogen . Target organs include the kidneys , liver , lungs and brain . It also represents a severe fire and explosion risk when heated or subjected to shock in contact with oxidizable substances. [ citation needed ]
|
https://en.wikipedia.org/wiki/Uranyl_nitrate
|
Uranyl peroxide or uranium peroxide hydrate (UO 4 ·nH 2 O) is a pale-yellow, soluble peroxide of uranium . It occurs at one stage of the enriched uranium fuel cycle and in yellowcake prepared via the in situ leaching and resin ion exchange system. This compound, also expressed as UO 3 ·(H 2 O 2 )·(H 2 O), is very similar to uranium trioxide hydrate UO 3 · n H 2 O. The dissolution behaviour of both compounds is very sensitive to the hydration state (n can vary between 0 and 4). One main characteristic of uranium peroxide is that it consists of small needles with an activity median aerodynamic diameter (AMAD) of about 1.1 μm.
The uranyl minerals studtite , UO 4 ·4H 2 O, and metastudtite, UO 4 ·2H 2 O, are the only minerals discovered to date that contain peroxide. The synthesized product is a light yellow powder.
In general, uranyl peroxide can be obtained from a solution of uranium(VI) by adding a peroxide, usually hydrogen peroxide solution. The dihydrate is obtained from a boiling solution of uranyl nitrate with the addition of hydrogen peroxide and drying of the precipitate, while the trihydrate is precipitated from a solution of ammonium uranyl oxalate. [ 1 ]
The unit cell consists of uranyl cations coordinated to two water molecules and two peroxide anions. The latter are μ 2 -coordinated to the cation—that is, end-on. Additional water molecules are bound in the crystal by hydrogen bonding . [ 2 ] Only the tetrahydrate has been characterized by X-ray crystallography , but density functional theory offers a good approximation to the dihydrate. [ 3 ]
When uranyl nitrate is dissolved in an aqueous solution of hydrogen peroxide and an alkali metal hydroxide , it forms cage clusters akin to polyoxometalates or fullerenes . [ 4 ] Syntheses also typically add organic materials, such as amines , to serve as templates, akin to zeolites . [ 5 ]
Radiolysis of uranium salts dissolved in water produces peroxides; uranyl peroxide has been studied as a possible end component of spent radioactive waste . [ 6 ]
|
https://en.wikipedia.org/wiki/Uranyl_peroxide
|
The Urbach Energy , or Urbach Edge , is a parameter, typically denoted E 0 , with dimensions of energy , used to quantify energetic disorder in the band edges of a semiconductor . It is evaluated by fitting the absorption coefficient as a function of energy to an exponential function. It is often used to describe electron transport in structurally disordered semiconductors such as hydrogenated amorphous silicon . [ 1 ]
In the simplest description of a semiconductor, a single parameter is used to quantify the onset of optical absorption: the band gap , E G . In this description, semiconductors are able to absorb photons above E G , but are transparent to photons below E G . [ 2 ] However, the density of states in 3-dimensional semiconductors increases further from the band gap (this is not generally true in lower-dimensional semiconductors). For this reason, the absorption coefficient, α , increases with energy. The Urbach Energy quantifies the steepness of the onset of absorption near the band edge, and hence the broadness of the density of states . A sharper onset of absorption corresponds to a lower Urbach Energy.
The Urbach Energy is defined by an exponential increase in absorbance with energy. While an exponential dependence of absorbance had been observed previously in photographic materials, [ 3 ] it was Franz Urbach who first evaluated this property systematically in crystals. He used silver bromide for his study while working at the Kodak Company in 1953. [ 4 ]
Absorption in semiconductors is known to increase exponentially near the onset of absorption, spanning several orders of magnitude. [ 5 ] [ 6 ] Absorption as a function of energy can be described by the following equation: [ 1 ] [ 7 ]
\alpha(E) = \alpha_0 \exp\left( \frac{E - E_1}{E_0} \right)
where α 0 and E 1 are fitting parameters with dimensions of inverse length and energy, respectively, and E 0 is the Urbach Energy. This equation is only valid when α ∝ exp(E). The Urbach Energy is temperature-dependent. [ 7 ] [ 8 ]
Room-temperature values of E 0 for hydrogenated amorphous silicon are typically between 50 meV and 150 meV. [ 9 ]
The Urbach Energy is often evaluated to make statements on the energetic disorder of band edges in structurally disordered semiconductors. [ 1 ] The Urbach Energy has been shown to increase with dangling bond density in hydrogenated amorphous silicon [ 9 ] and to be strongly correlated with the slope of band tails evaluated using transistor measurements. [ 10 ] For this reason, it can be used as a proxy for the activation energy , E A , in semiconductors governed by multiple trapping and release . It is important to state that E 0 is not the same as E A , since E A describes the disorder associated with one band, not both.
To evaluate the Urbach Energy, the absorption coefficient needs to be measured over several orders of magnitude. For this reason, high precision techniques such as the constant photocurrent method (CPM) [ 11 ] or photothermal deflection spectroscopy are used.
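Since ln α is linear in E over the exponential region, E 0 can be recovered from measured absorption data with a straight-line fit. The following is a minimal Python sketch of this procedure using synthetic data (the generated values are illustrative, not measurements):

```python
# Estimate the Urbach energy E0 by fitting ln(alpha) = ln(alpha0)
# + (E - E1)/E0; the slope of the fitted line is 1/E0.
import numpy as np

E = np.linspace(1.5, 1.8, 30)                # photon energies (eV)
E0_true = 0.050                              # 50 meV, plausible for a-Si:H
alpha = 100.0 * np.exp((E - 1.8) / E0_true)  # absorption coefficient (cm^-1)

slope, intercept = np.polyfit(E, np.log(alpha), 1)
print(f"Fitted Urbach energy: {1000.0 / slope:.1f} meV")  # ~50.0 meV
```

In practice the fit would be restricted to the energy range where the absorption is genuinely exponential, using data from a high-precision technique such as CPM.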
|
https://en.wikipedia.org/wiki/Urbach_energy
|
In the solid-state physics of semiconductors , the Urbach tail is an exponential part in the energy spectrum of the absorption coefficient . This tail appears near the optical band edge in amorphous , disordered and crystalline materials.
Researchers began questioning the nature of "tail states" in disordered semiconductors in the 1950s. It was found that such tails arise from strains sufficient to push local states past the band edges. [ citation needed ]
In 1953, the Austrian-American physicist Franz Urbach (1902–1969) [ 1 ] found that such tails decay exponentially into the gap . [ 2 ] Later, photoemission experiments delivered absorption models revealing temperature dependence of the tail. [ 3 ]
A variety of amorphous and crystalline solids exhibit exponential band edges in optical absorption. The universality of this feature suggests a common cause. Several attempts have been made to explain the phenomenon, but they could not connect specific topological units to the electronic structure. [ 4 ] [ 5 ]
|
https://en.wikipedia.org/wiki/Urbach_tail
|
Urban Ecosystems is a bimonthly, peer-reviewed, transformative international scientific journal published by Springer . [ 1 ]
The journal is interdisciplinary, with its articles covering relationships "between socioeconomic and ecological structures and processes in urban environments." [ 1 ] Associated with the Society for Urban Ecology, [ 1 ] the journal was established in 1997. [ 2 ] It is published under a hybrid open-access model.
|
https://en.wikipedia.org/wiki/Urban_Ecosystems
|
The Urban Traffic Management Control or UTMC programme is the main initiative in the United Kingdom for the development of a more open approach to Intelligent Transport Systems or ITS in urban areas . Originating as a Government research programme, the initiative is now managed by a community forum, the UTMC Development Group, which represents both local transport authorities and the systems industry.
UTMC systems are designed to allow the different applications used within modern traffic management systems to communicate and share information with each other. This allows previously disparate data from multiple sources such as Automatic Number Plate Recognition ( ANPR ) cameras, Variable Message Signs (VMS), car parks, traffic signals, air quality monitoring stations and meteorological data, to be amalgamated into a central console or database. The idea behind UTMC is to maximise road network potential to create a more robust and intelligent system that can be used to meet current and future management requirements.
The UTMC was launched in 1997 by the UK Government's Department for Environment, Transport and the Regions (now the Department for Transport ( DfT )). During the first three years, a number of research projects were undertaken to establish and validate an approach based on modular systems and open standards. These have contributed to the UTMC Technical Specifications, which define UTMC standards .
UTMC has helped local authorities achieve their goals by adopting an appropriate, but not over-constraining, set of standards that allows users, suppliers and integrators of UTMC systems to plan and supply systems cost-effectively in an open market. These standards are essential in breaking down boundaries between systems and across local authority borders to allow network interoperability.
The UTMC Specifications and Standards Group (S&SG) is responsible for ensuring that the UTMC technical framework continues to meet local authorities' needs, currently and in the future. The S&SG oversees the maintenance and upkeep of the UTMC Technical Specifications. Its members are drawn from both local authorities and the supplier community, but it is always led by local authorities.
The S&SG works closely with the full range of UTMC suppliers to ensure its requirements are technically achievable. It operates a transparent consultation regime on all technical changes. From time to time it may commission and fund technical research and standards development activities, though it operates principally through coordinating the input freely provided by suppliers and users.
The Specification provides standards for shared data (i.e. data communicated between applications of a UTMC system, or between a UTMC system and an external system) through:
As well as undertaking technical work to develop national specifications, there are a number of activities that help "market" the initiative to the traffic management community. There is a conference, usually held annually, papers and articles are published in key industry journals and regular workshops are held focusing on key (technical or operational) themes. In 2006, the UTMC community ran a number of special sessions at the ITS World Congress held in London, as well as running a village of suppliers demonstrating UTMC-compatible products.
The UTMC initiative formerly published a Products Catalogue, representing products submitted as compliant by suppliers. This was discontinued in December 2014.
The following documents are maintained and published for open use on the UTMC website.
The current issue of the Technical Specification is available for free download on the UTMC resources website [1] .
Local authorities with UTMC have more control over their road network. Some examples of what they can do are:
Advise : By monitoring how long it takes a vehicle to pass between two ANPR cameras and then dividing the distance between the cameras by that travel time, an average speed can be measured and used to inform motorists via VMS how long it will take them to reach a destination, or to set diversions.
Example by Envitia: VMS in Aberdeen [2] . Example by IDT: Journey time monitoring in Birmingham [3]
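A minimal sketch of the journey-time calculation described above; the camera spacing and timestamps are invented for illustration, and a real UTMC system would match plates across many vehicles and average the results:

```python
# Average speed between two ANPR cameras: match a number plate at
# both sites, take the travel time, and divide the known camera
# spacing by that time. All values below are illustrative.
from datetime import datetime

spacing_m = 2_500                           # distance between cameras
t_first = datetime(2024, 5, 1, 8, 0, 0)     # plate read at camera A
t_second = datetime(2024, 5, 1, 8, 4, 10)   # same plate at camera B

travel_s = (t_second - t_first).total_seconds()   # 250 s
speed_kmh = spacing_m / travel_s * 3.6            # m/s converted to km/h
print(f"Journey time {travel_s:.0f} s, average speed {speed_kmh:.0f} km/h")
```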
Warn : Wind detectors attached to a bridge give drivers of high-sided vehicles warnings before they cross. The warning messages are displayed on VMS activated when wind speed thresholds are exceeded.
Example by Siemens: Bridge VMSs offer wind warnings [4] .
Guide : By linking parking guidance systems to a common database, traffic control room operators can inform motorists via strategic VMS about the current state of car parks; this is especially useful for special events like carnivals, when normal demand is exceeded.
Example by Mott MacDonald: Car Park Guidance in Edinburgh [5]
Previously these systems would have been impracticable due to the sheer volumes of data processing and the operator time needed to apply constant manual updates.
The JCG was created in 2004 to bring together the UDG with other key ITS community organisations; it was later expanded to include representation from the Department for Transport and the Highways Agency . The JCG's aim was to ensure that the strategic direction of the various groups and bodies involved in UK ITS was kept aligned.
The JCG was suspended in September 2012, as the prevailing financial conditions had reduced the resource available to its participants.
UTMC builds on a base of mainstream internet protocols , and focusses on defining data structures suitable for exchange between ITS systems and devices. At the time of its origination there were few available international standards to build on, and the early research was therefore used to generate many of the programme's own standards. However, for exchange between central systems (for example, B2B data exchange between neighbouring roads authorities), UTMC refers to the specifications of the European project DATEX.
DATEX (as Datex II ) is now being standardized through the European standards agency CEN and UTMC has been involved in a number of European standards-related projects, notably POSSE (Promotion of Open Specifications and Standards in Europe). There is a current workstream within UTMC aiming to align the UTMC Technical Specification more closely with Datex II.
|
https://en.wikipedia.org/wiki/Urban_Traffic_Management_and_Control
|
The Geospatial Professional Network ( GPN ) (formerly Urban and Regional Information Systems Association ( URISA )) is a non-profit association of professionals using geographic information systems (GIS) and other information technologies to solve challenges at all levels of government . [ 1 ] URISA promotes the effective and ethical use of spatial information and technology for the understanding and management of urban and regional systems. [ 2 ]
URISA was formed in 1966, evolving from a loosely associated group of professionals with a common interest in urban planning information systems. The organization emanated from annual conferences held from 1963 through 1966, known then as the Annual Conference on Urban Planning Information Systems and Programs. URISA has since evolved into an international organization supporting professionals with interests in a variety of topics related to development and effective management of geographic information systems (GIS). [ 3 ] [ 4 ] URISA is currently headquartered in Des Plaines, Illinois , where professional staff handle the administrative functions of the association. URISA is currently run by Wendy Nelson. [ 5 ]
In 2024, URISA announced that it would rebrand, and rename, itself the Geospatial Professional Network (GPN). [ 6 ]
URISA is the founding member of the GIS Certification Institute , which administers professional certification for the field. [ 7 ] [ 8 ] [ 9 ] [ 10 ] URISA is also a founding member of the Coalition of Geospatial Organizations (COGO), a coalition of organizations concerned with U.S. national geospatial issues. [ 11 ] GISCorps is a URISA program that provides volunteer GIS services for underdeveloped countries worldwide. [ 12 ] [ 13 ] URISA's proposed GIS Capability Maturity Model (GIS CMM) provides the means for local governments to gauge their progress in achieving GIS operational maturity against a variety of standards and measures. [ 14 ] [ 15 ]
URISA promotes data sharing by government organizations, and has approved policy to reflect those values. [ 16 ] [ 17 ]
URISA hosts a number of conferences [ 18 ] each year including GIS-Pro: URISA's Annual Conference for GIS Professionals; the GIS/CAMA Technologies Conference, co-sponsored by the International Association of Assessing Officers; [ 19 ] the URISA/NENA Addressing Conference, co-sponsored by the National Emergency Number Association; [ 20 ] a biennial GIS in Public Health Conference; Caribbean GIS Conference; and the URISA Leadership Academy, a five-day GIS leadership program.
URISA supports more than two-dozen chapters, primarily across the United States and Canada , with recent expansion into the Caribbean and the United Arab Emirates . [ 21 ]
The URISA Journal [ 22 ] is a quarterly, peer-reviewed, scholarly publication of the organization. [ 23 ] It has a history of open access. URISA additionally maintains a growing publications library. [ 24 ] [ 25 ] [ 26 ] Through an annual competition URISA encourages students to submit a paper for a special section of the URISA Journal . [ 27 ]
URISA also recognizes achievements in the industry through a variety of awards, including Exemplary Systems in Government Awards (ESIG) [ 28 ] the URISA GIS Hall of Fame, [ 29 ] and the Horwood Distinguished Service Award. [ 30 ] [ 31 ]
URISA members are professionals in the spatial data industry working in local, regional, state/provincial, tribal and federal government, academia, the private sector, and non-profit organizations.
|
https://en.wikipedia.org/wiki/Urban_and_Regional_Information_Systems_Association
|
In ecology, urban ecosystems are considered an ecosystem functional group within the intensive land-use biome . They are structurally complex ecosystems with a highly heterogeneous and dynamic spatial structure that is created and maintained by humans . They include cities , smaller settlements and industrial areas , which are made up of diverse patch types (e.g. buildings, paved surfaces, transport infrastructure, parks and gardens, refuse areas). Urban ecosystems rely on large subsidies of imported water, nutrients, food and other resources. Compared to other natural and artificial ecosystems, human population density is high, and human interaction with the different patch types produces emergent properties and complex feedbacks among ecosystem components. [ 1 ]
In socioecology , urban areas are considered part of a broader social-ecological system in which urban landscapes and urban human communities interact with other landscape elements. [ 2 ] Urbanization has large impacts on human and environmental health , and the study of urban ecosystems has led to proposals for sustainable urban designs and approaches to development of city fringe areas that can help reduce negative impact on surrounding environments and promote human well-being. [ 3 ]
Urban ecology is a relatively new field, and its body of research is not yet extensive. While there is still plenty of room for growth, some key issues and biases within the current research need to be addressed.
The article “A Review of Urban Ecosystem Services: Six Key Challenges for Future Research” addresses the issue of geographical bias. According to this article, there is a significant geographical bias “towards the northern hemisphere”. [ 4 ] The article states that case study research is done primarily in the United States and China, and explains how future research would benefit from a more geographically diverse array of case studies.
“A Quantitative Review of Urban Ecosystem Service Assessments: Concepts, Models, and Implementation” is an article that gives a comprehensive examination of 217 papers written on Urban Ecosystems to answer the questions of where studies are being done, which types of studies are being done, and to what extent do stakeholders influence these studies. [ 5 ] According to this article, "The results indicate that most UES studies have been undertaken in Europe, North America, and China, at city scale. Assessment methods involve bio-physical models, Geographical Information Systems, and valuation, but few study findings have been implemented as land use policy."
“Urban vacancy and land use legacies: A frontier for urban ecological research, design, and planning” is another scholarly article that offers insight into the future of urban ecological research. It details an important opportunity that only a few researchers have so far pursued: the use of vacant land for the creation of urban ecosystems. [ 6 ]
Urban ecosystems are complex and dynamic systems that encompass a wide range of living and nonliving components. These components include humans, plants, animals, buildings, transportation systems, and water and energy infrastructure. As the world becomes increasingly urbanized, understanding urban ecosystems and how they function is becoming increasingly important. [ 7 ]
Cities are home to more than half of the world's population, and the number of people living in urban areas is expected to continue to grow in the coming decades. This rapid urbanization can have both positive and negative impacts. On the one hand, cities can provide economic opportunities, access to healthcare and education, and a high quality of life for residents. On the other, increased urbanization exacerbates the struggles of pollution, loss of green spaces, loss of biodiversity, and more. [ 8 ]
In many cities, air pollution levels are well above safe limits, and this can have serious implications for human health. Pollution from vehicles, factories, and power plants can cause respiratory problems, heart disease, and even cancer. In addition to its impact on human health, air pollution can also damage buildings, corrode infrastructure, and harm plant and animal life. [ 8 ]
As cities grow, natural areas such as forests, wetlands, and grasslands are often replaced by buildings, roads, and other forms of development. A lack of urban green space contributes to reductions in air and water quality, residents' mental and physical health, energy efficiency, and biodiversity . [ 9 ]
Related to the dissolution of green space, habitat fragmentation refers to the way in which green spaces are divided by urban development, making it impossible for some species to migrate between them. [ 10 ] Such migration maintains gene flow between populations, which is essential to preserving the genetic diversity needed for species survival. [ 11 ]
Species diversity is also impacted by the introduction of non-native and invasive species from travel and shipping processes. Research has found that heavily urbanized areas have a higher richness of invasive species when compared to rural communities. While not all non-native or invasive species are inherently detrimental to a city, invasives can out-compete essential native species , cause biotic homogenization , and introduce new vectors for new diseases. [ 12 ]
Urban Heat Island (UHI) refers to the variation in average temperature that occurs within an urban area due to current methods of development. Patterns in UHIs cause disproportionate impacts of climate change , often creating extra burdens for the already vulnerable. Extreme heat events, which occur more frequently in UHIs, can and do result in deaths, cardiopulmonary diseases, reduced capacity for outdoor labor, mental health concerns, and kidney disease. The demographics most vulnerable to the negative impacts of UHIs are senior citizens, and those without resources to cool off, such as air conditioners. [ 13 ]
Current methods of urban development increase the risk of disease proliferation within cities as compared to rural environments. Urban traits that contribute to higher risk are poor housing conditions, contaminated water supplies, frequent travel in and out, the survival success of rats, and intense population density that causes rapid spread and rapid evolution of disease. [ 14 ]
Green and blue infrastructure refers to methods of development that work to integrate natural systems and human-made structures. Green infrastructure includes land conservation, such as nature preserves, and increased vegetation cover, such as vertical gardens. Blue infrastructure includes stormwater management efforts such as bioswales . [ 15 ] The process of LEED certification can be used to establish green infrastructure practices in individual buildings. Buildings with LEED certification report 30% lower energy use, as well as economic and mental-health benefits from natural lighting. [ 16 ]
Beginning in earnest during the 1960s, city planning for transit centered around individual car use. [ 17 ] Today, cars are still the dominant form of transportation in urban areas. One effective solution is improved public transportation: expanding bus or train routes and switching to clean energy address the issues of air quality, noise pollution, and socioeconomic equity. [ 18 ]
Another opportunity to reduce carbon emissions and increase population health would be the implementation of the walkable city model in urban planning. A walkable city is strategically planned to reduce distance traveled in order to access resources needed such as food and jobs. [ 19 ]
|
https://en.wikipedia.org/wiki/Urban_ecosystem
|
Urban evolution refers to the heritable genetic changes of populations in response to urban development and anthropogenic activities in urban areas . Urban evolution can be caused by non-random mating, mutation , genetic drift , gene flow , or evolution by natural selection . [ 1 ] In the context of Earth's living history, rapid urbanization is a relatively recent phenomenon, yet biologists have already observed evolutionary change in numerous species compared to their rural counterparts on a relatively short timescale. [ 1 ] [ 2 ]
Strong selection pressures due to urbanization play a big role in this process. Urbanization introduces distinct challenges such as altered microclimates, pollution, habitat fragmentation, and differential resource availability. These changed environmental conditions exert unique selection pressures on their inhabitants, leading to physiological and behavioral adaptations in city-dwelling plant and animal species. [ 3 ] [ 2 ] However, there is also discussion on whether some of these emerging traits are truly a consequence of genetic adaptation, or examples of phenotypic plasticity . There is also a significant change in species composition between rural and urban ecosystems . [ 4 ]
Understanding how anthropogenic activity can influence the traits of other living beings can help humans better understand their effect on the environment, particularly as cities continue to grow. Shared aspects of cities worldwide give ample opportunity for scientists to study the specific evolutionary responses in these rapidly changed landscapes independently. How certain organisms adapt to urban environments while others cannot gives a live perspective on rapid evolution. [ 3 ] [ 2 ]
With urban growth, the urban-rural gradient has seen a large shift in the distribution of humans, moving from low density to very high density within recent millennia. This has brought large changes to environments as well as societies. [ 5 ]
Urbanization transforms natural habitats into completely altered living spaces that sustain large human populations. Increasing congregation of humans accompanies the expansion of infrastructure, industry and housing. Natural vegetation and soil are mostly replaced or covered by dense grey materials. Urbanized areas continue to expand both in size and number globally; in 2018, the United Nations estimated that 68% of people globally will live in ever-expanding urban areas by 2050. [ 6 ]
Urbanization intensifies diverse stressors spatiotemporally such that they can act in concert to cause rapid evolutionary consequences such as extinction, maladaptation, or adaptation. [ 7 ] Three factors have come to the forefront as the main evolutionary influencers in urban areas: the urban microclimate , pollution , and urban habitat fragmentation . [ 8 ] These influence the processes that drive evolution, such as natural and sexual selection, mutation , gene flow and genetic drift .
A microclimate is defined as any area where the climate differs from the surrounding area. Modifications of the landscape and other abiotic factors contribute to a changed climate in urban areas. The use of impervious dark surfaces which retain and reflect heat, and human generated heat energy lead to an urban heat island in the center of cities, where the temperature is increased significantly. A large urban microclimate does not only affect temperature, but also rainfall, snowfall, air pressure and wind, the concentration of polluted air, and how long that air remains in the city. [ 9 ] [ 10 ] [ 11 ]
These climatological transformations increase selection pressure on species living in urban areas, driving evolutionary changes. [ 12 ] Certain species have shown to be adapting to the urban microclimate. [ 3 ] [ 2 ]
For example, a research study focused on urban thermal heterogeneity, which can lead to the formation of urban heat islands, shows how variations in temperature due to urbanization significantly affect feral pigeons ( Columba livia ), causing changes in their metabolic processes and oxidative stress levels. Specifically, pigeons in hotter areas showed elevated oxidative stress, suggesting that urban heat could compromise their health. [ 13 ]
Many species have evolved over macroevolutionary timescales by adapting in response to the presence of toxins in the environment of the planet. Human activities, including urbanization, have greatly increased selection pressures due to pollution of the environment, climate change , ocean acidification , and other stressors. Species in urban settings must deal with higher concentrations of contaminants than naturally would occur. [ 14 ] [ 15 ]
There are two main forms of pollution which lead to selective pressures: energy or chemical substances. Energy pollution can come in the form of artificial lighting, sounds, thermal changes, radioactive contamination and electromagnetic waves. Chemical pollution leads to the contamination of the atmosphere, the soil, water and food. All these polluting factors pose direct and indirect challenges to species inhabiting urban areas, altering species’ behavior and/or physiology , which in turn can lead to evolutionary changes. [ 16 ]
Air pollution and soil pollution have significant physiological impacts on both wildlife and plants. For urban animals, exposure to pollutants often results in respiratory issues, neurological damage, and skin irritations. Over time, animals may adapt to these stressors through changes in their physiological systems, such as increased lung capacity or more efficient detoxification mechanisms to cope with pollutants. [ 17 ] However, the severity of these adaptations varies across species, with some developing resilience while others face diminished health. The peppered moth ( Biston betularia ) is a classic example of industrial melanism, in which moth populations adapted to increased soot and pollutants by evolving darker coloration, allowing them to better blend into soot-darkened trees during the Industrial Revolution. [ 18 ] [ 19 ]
For plants, long-term exposure to pollutants like ozone can impair vital structures on their leaves, disrupting gas exchange and reducing growth. Some plants adapt by closing their stomata or producing antioxidants to mitigate the damage, while others are less equipped to cope and show signs of decline. Pollution also alters soil chemistry, affecting nutrient availability and further stressing plant growth. These physiological changes to both flora and fauna influence urban ecosystems, determining which species can survive and reproduce in polluted environments. [ 17 ]
A study on Great tits ( Parus major ) also found that air pollutants, in combination with local tree composition and temperature, affect their nestling physiology. Specifically, antioxidant capacity and fatty acid composition in these birds were influenced by the surrounding environmental conditions, including pollution levels. [ 20 ]
Water pollution is another major concern, to which species living in aquatic habitats, such as fish, can evolve resistance to pollutants. The Atlantic killifish ( Fundulus heteroclitus ) has evolved to resist toxic pollutants like polychlorinated biphenyls (PCBs), commonly found in polluted urban waters. This resistance is thought to be the result of mutations that allow the fish to tolerate high levels of chemicals that would otherwise be lethal. [ 15 ]
Noise pollution , often resulting from traffic, construction, and industrial activities, is another form of energy pollution that significantly affects urban species. Prolonged exposure to high noise levels can interfere with animals' communication, navigation, feeding behaviors, and stress response mechanisms. In particular, birds are sensitive to noise pollution, as it disrupts their ability to communicate using signals, such as calls from potential mates or warnings of predators. This disruption can lead to changes in behavior, reproduction, and survival. [ 21 ]
The fragmentation of previously intact natural habitats into smaller pockets which can still sustain organisms leads to selection and adaptation of species. These new urban patches, often called urban green spaces, come in all shapes and sizes, ranging from parks, gardens and plants on balconies to breaks in pavement and ledges on buildings. The diversity in habitats leads to adaptation of local organisms to their own niche. [ 22 ] Contrary to popular belief, urban areas harbour higher biodiversity than previously thought, owing to their numerous microhabitats. These remnants of wild vegetation, and artificially created habitats with often exotic plants and animals, all support different kinds of species, which leads to pockets of diversity inside cities. [ 23 ]
With habitat fragmentation also comes genetic fragmentation; genetic drift and inbreeding within small isolated populations result in low genetic variation in the gene pool. Low genetic variation generally reduces a population's chances of survival, which is probably why some species are unable to sustain themselves in the fragmented environments of urban areas. [ 24 ]
Urban environments create new selection pressures for species, leading to rapid adaptations. Species may experience changes in behavior, morphology, or physiology due to altered resources, human-induced pollution, and fragmented habitats. For instance, city-dwelling birds may evolve shorter wings to better navigate between buildings, and insects might develop resistance to pesticides commonly used in urban settings. Urban heat islands are another factor contributing to urban evolution: cities tend to be warmer than surrounding rural areas, and some insects have been observed to become more heat-tolerant over time. Pollution and light exposure also play a significant role. Many species must adapt to high levels of pollution in cities or to artificial light that disrupts their natural behaviors; for example, birds in cities often start singing earlier in the morning due to the prevalence of artificial lighting, which can affect their mating patterns. Fragmentation of habitats has led to the creation of micro-habitats within cities, which act as isolated evolutionary zones. Species in these fragmented areas often experience unique evolutionary pressures, leading to genetic drift and divergence from rural populations.
In one study, researchers examined how early life experiences, particularly adverse conditions, influence behavior in European starlings (Sturnus vulgaris) . The study specifically explored how early life adversity—such as nutritional stress or challenging environmental conditions—may trigger adaptive behaviors in the starlings, including increased foraging and actively seeking out information later in life. The birds were found to be more efficient at locating food and gathering relevant information from their surroundings, suggesting that early adversity may encourage greater exploration and resource acquisition strategies as an adaptive response to uncertainty. [ 25 ]
Their findings imply that animals experiencing early adversity in fragmented environments may develop enhanced abilities to locate and exploit scattered resources. This may help explain why some species, such as starlings, are able to persist and even thrive in urban settings despite habitat degradation. Fragmented urban habitats tend to be more unpredictable, with food sources often patchy and habitats divided. [ 26 ] In such environments, animals that have faced early adversity may become more adept at navigating these challenges. Just as the starlings in the study displayed increased cognitive flexibility in their foraging and information-gathering behaviors, animals in urban ecosystems may also adopt similar strategies to cope with the effects of habitat fragmentation. Cognitive flexibility enables animals to adapt to fluctuating conditions, such as changes in food availability or alterations to shelter and nesting sites, which are common in urbanized landscapes. [ 27 ]
Urbanization often leads to changes in the availability and distribution of food, water, and shelter, prompting behavioral, physiological, and morphological adaptations in species that can exploit new resource environments. Resource availability also acts as a selective force in urban evolution, influencing the survival and reproductive success of species living in cities. Urban areas offer a distinctive array of resources, including food sources like garbage, human waste, and crops, often differing in quantity and quality from those found in natural habitats. These variations can create evolutionary pressures on local populations. [ 28 ] This can be seen in the New York City white-footed mouse ( Peromyscus leucopus ), whose tooth rows have adapted a structure suited to chewing the foods and resources available in the city.
Urban raccoons ( Procyon lotor ) have also adapted to urban environments by exploiting food sources like garbage, pet food, and bird feeders. [ 29 ] These animals have developed more adaptable foraging behaviors and are known to thrive in cities due to the abundance of easily accessible food. A recent study reveals urban raccoons' ability to solve foraging challenges, demonstrating innovative problem-solving skills. The researchers presented raccoons with puzzle boxes of different difficulty levels that had to be opened to obtain food, and some raccoons learned to solve increasingly complex tasks. The study found that younger raccoons, who were more willing to take risks, were more successful at solving the puzzles. This shows how raccoons adapt to urban environments through learning and behavioral flexibility, and suggests that the demands of locating resources drive these cognitive adaptations. [ 30 ]
The urban environment imposes different selection pressures than the typical natural setting. [ 7 ] These stressors elicit phenotypic changes in populations of organisms which may be due to phenotypic plasticity—the ability of individual organisms to express different phenotypes from the same genotype as a result of exposure to different environmental conditions—or actual genetic changes.
Mutations are genotypic changes that may result in changes in phenotype , altering the observable traits of an organism and thus potentially its interactions or relationship with its environment. Mutations produce genetic variation which can be acted upon by evolutionary processes such as natural selection. For evolution to occur through natural selection, there must be genetic variation within a population, differential survival as a consequence of the genetic variation, and selective pressure from the environment towards particular desirable or undesirable traits.
Thus, in considering examples of urban evolution, observed phenotypic divergences or differences in response to urbanization have to be genetically based and increase fitness in that particular environment to be tagged as evolution and adaptation, respectively. Hence, it is appropriate to distinguish neutral, or non-adaptive, urban evolution from adaptive urban evolution, with the latter needing to be sufficiently proven. [ 7 ]
Although there is widespread agreement that adaptation is occurring in urban populations , there are few completely proven examples of evolution; almost all are cases of selection with reasoned speculation about adaptive benefit, but insufficient evidence of a genetically based, truly adaptive phenotype . [ 7 ] At this time the following examples are sufficiently demonstrated:
Other claimed examples of adaptation indicative of potential urban evolution include:
It is important to note that while these examples show genetic change and/or adaptation, they are not completely proven to be examples of evolution, whether due to insufficient evidence of heritability or because the observed change may instead result from something else, such as plasticity.
Some interesting cases of possible adaptation which remain insufficiently proven are:
In one case selection is widely expected to occur and yet is not found:
Evolution is not strictly the result of natural selection and beneficial adaptation. Evolution may also result from genetic drift due to population bottlenecks . In a population bottleneck, the population size is reduced randomly and significantly; there is no selection and therefore random alleles may be kept whereas others decreased in the population. The bottlenecked population may thus show different allele frequencies and phenotypic frequencies than the original population.
A population bottleneck may arise from anthropogenic factors common in urban areas, such as habitat fragmentation from abundant infrastructure. Habitat fragmentation may also lead to reduction in gene flow , further isolating populations of the same species from one another. Cities have been found to both increase genetic drift and decrease gene flow. [ 1 ] In an overview of 167 different studies, over 90% indicated a correlation between genetic drift, gene flow, and urbanization. [ 50 ] This genetic isolation of urban populations can result in divergence from the original and rural populations of the same species, leading to nonadaptive evolution.
An example of nonadaptive change related to genetic drift and gene flow is the burrowing owl ( Athene cunicularia ) in urban Argentina. Each of the three studied cities was independently colonized by a unique population of owls, and there was minimal gene flow between urban owls and those of nearby rural populations. Moreover, there was no gene flow between the owl populations of the three different cities. Gene sequencing revealed less variation in single nucleotide polymorphisms (SNPs) in urban populations relative to rural populations, and the different cities had different rare SNPs. [ 51 ] The different urban populations were genetically isolated from each other and exhibited genetic divergence when compared to both other urban populations and rural populations. This was also seen in New York City white-footed mice. Urbanization limited their habitat to predominantly city parks, and the independent city park populations were genetically discrete. [ 52 ]
When species show apparent adaptation to an urban or other environment, that adaptation is not necessarily a consequence of evolution, or even genetic change. One genotype may be able to produce various phenotypes adaptive to different environmental conditions. In other words, divergent observable traits may arise from one set of genes and therefore, genetic change did not occur to produce these traits, and evolution did not occur. However, genetic evolution, phenotypic plasticity, and even other factors such as learning may all contribute in varying degrees to form the apparent phenotypic difference.
For example, when 3,768 bird species were assessed in multiple urban environments, it was determined that urban species are generally smaller in size, occupy less specific niches, live longer, have more eggs, and are less aggressive in defending territory. [ 53 ] While there are statistically significant differences between the urban and rural birds of various species, this cannot be assumed to be purely genetic, especially since this study did not explore the potential genetic background of the phenotypic variations.
Another study examines how urbanization influences plant responses to herbivory, using the common dandelion ( Taraxacum officinale ) along an urbanization gradient. Plants from different urban, suburban, and rural areas were raised under similar conditions and exposed to herbivory (locust grazing). While all plants increased their resistance to herbivores with repeated exposure, urban plants showed reduced early seed production compared to rural and suburban plants. [ 54 ] This study suggests that urbanization affects plant defenses and fitness, with urban populations showing different reaction norms in response to herbivory.
A more specific example of phenotypic plasticity is behavioral plasticity, which is often observed in urban areas. In the dark-eyed junco ( Junco hyemalis ), it was determined that phenotypic plasticity was in part responsible for the differential nesting behaviors of urban dwellers. [ 55 ] In order to adapt to the noise pollution abundantly present in urbanized areas, city-dwelling dark-eyed juncos used higher-frequency songs to communicate with one another relative to rural birds. Even in experimental conditions, birds from urbanized areas continued to sing at higher frequencies without noise present. While this could have been indicative of a genetic basis and thus evolution, it was also observed that prior to capture, birds would share song with one another. The higher-frequency song in the captured experimental population could therefore have been a result of learning from other birds. However, the birds also show significant genetic variation in multiple traits related to reproductive and endocrine systems. [ 56 ] This example demonstrates the complex interrelation between genetic change, phenotypic and behavioral plasticity, adaptation, and learning in the formation of a novel or changed phenotype.
As a region urbanizes, the species composition generally undergoes change. The new conditions associated with urban infrastructure, air and noise pollution, habitat fragmentation, differential food availability, humans and cars, and so on may be difficult for certain species to adapt to. In birds, for instance, rare species generally disappear in urban areas, while more adaptable species tend to dominate. This results in homogenization . [ 57 ] In plants, urbanization reduces species richness and introduces homogeneity. It also decreases the number of pollinators, which may increase reproductive difficulty. [ 58 ]
|
https://en.wikipedia.org/wiki/Urban_evolution
|
Urban prairie (or urban grassland ) is vacant urban land that has reverted to green space . [ 1 ] The definition can vary across countries and disciplines, but at its broadest encompasses meadows, lawns, and gardens, as well as public and private parks, vacant land, remnants of rural landscapes, and areas along transportation corridors. [ 2 ] If previously developed, structures occupying the urban lots have been demolished, leaving patchy areas of green space that are usually untended and unmanaged, forming an involuntary park . Spaces can also be intentionally created to facilitate amenities, such as green belts , community gardens and wildlife reserve habitats. [ 3 ]
Urban brownfields are contaminated grasslands that also fall under the urban grassland umbrella. Urban greenspaces are a larger category that includes urban grasslands in addition to other spaces.
Urban prairies can result from several factors. They can either be land that was previously developed and has since been cleared, or remnants of the natural landscape. In the first case, the value of aging buildings may fall too low to give their owners a financial incentive to maintain them, or properties may be seized by local government in response to unpaid property taxes . In many cases, cities demolish vacant structures because they pose health and safety threats (such as fire hazards ) or can be used as locations for criminal activity .
Areas may be cleared of buildings as part of a revitalization plan with the intention of redeveloping the land. In flood-prone areas, government agencies may purchase developed lots and then demolish the structures to improve drainage during floods. Neighborhoods near major industrial or environmental clean-up sites can be acquired and leveled to create a buffer zone and minimize the risks associated with pollution or industrial accidents. Additionally, residents may fill the unplanned empty space with urban parks or community gardens. [ 4 ] Governments and non-profit groups can also create community gardens and conservation areas to restore or reintroduce wildlife habitat, help the environment, and educate people about the prairie. [ 5 ] Detroit , Michigan is one city with many urban prairies.
Many studies have linked urbanization to a loss of biodiversity . Additionally, remaining urban landscapes are typically unable to support the complex food webs they previously hosted and become novel habitats home to highly adapted alien species , such as rats, cockroaches, and pigeons. [ 6 ] As natural landscapes are replaced with urban ones, the ecosystem services of the area can be diminished, making green spaces and grasslands all the more vital in urban areas. [ 7 ] These areas not only improve human life by providing space for leisure activities and social interaction, but also deliver direct health benefits such as reduced air pollution. [ 8 ] They also provide homes for important pollinators such as wild bees . [ 9 ] Despite the issues surrounding their cleanup and maintenance, even small urban grasslands can have a big effect ecologically. [ 10 ] In Melbourne, Australia, at the Tunnerminnerwait and Maulboyheenner memorial site, just three years after being replanted with a variety of native species and receiving upkeep, the green space had increased the number and diversity of insects in the vicinity.
|
https://en.wikipedia.org/wiki/Urban_prairie
|
Urban runoff is surface runoff of rainwater, landscape irrigation, and car washing [ 1 ] created by urbanization . Impervious surfaces ( roads , parking lots and sidewalks ) are constructed during land development . During rain , storms, and other precipitation events, these surfaces (built from materials such as asphalt and concrete ), along with rooftops , carry polluted stormwater to storm drains , instead of allowing the water to percolate through soil . [ 2 ]
This causes a lowering of the water table (because groundwater recharge is lessened) and flooding, since the amount of water that remains on the surface is greater. [ 3 ] [ 4 ] Most municipal storm sewer systems discharge untreated stormwater to streams , rivers , and bays . This excess water can also make its way into people's properties through basement backups and seepage through building walls and floors.
Urban runoff can be a major source of urban flooding and water pollution in urban communities worldwide.
Water running off impervious surfaces in urban areas tends to pick up gasoline , motor oil , heavy metals , trash , and other pollutants from roadways and parking lots, as well as fertilizers and pesticides from lawns. Roads and parking lots are major sources of polycyclic aromatic hydrocarbons (PAHs), which are created as the byproducts of the combustion of gasoline and other fossil fuels , as well as of the heavy metals nickel , copper , zinc , cadmium , and lead . Roof runoff contributes high levels of synthetic organic compounds and zinc (from galvanized gutters). Fertilizer use on residential lawns, parks and golf courses is a measurable source of nitrates and phosphorus in urban runoff when fertilizer is improperly applied or when turf is over-fertilized. [ 3 ] [ 5 ]
Eroding soils or poorly maintained construction sites can often lead to increased sedimentation in runoff. Sediment often settles to the bottom of water bodies and can directly affect water quality. Excessive levels of sediment in water bodies can increase the risk of infection and disease through high levels of nutrients present in the soil. These high levels of nutrients can reduce oxygen and boost algae growth while limiting native vegetation growth, which can disrupt aquatic ecosystems . Excessive levels of sediment and suspended solids also have the potential to damage existing infrastructure: sedimentation can increase surface runoff by plugging underground injection systems, and it can reduce storage capacity behind reservoirs . This reduction of reservoir capacity can lead to increased expenses for public land agencies while also degrading the quality of water recreational areas. [ 6 ]
Runoff can also induce bioaccumulation and biomagnification of toxins in ocean life. Small amounts of heavy metals are carried by runoff into the oceans, where they can accumulate within aquatic animals and cause metal poisoning . This poisoning can also affect humans, since eating a contaminated animal increases the risk of heavy metal poisoning. [ 7 ] [ 8 ]
As stormwater is channeled into storm drains and surface waters, the natural sediment load discharged to receiving waters decreases, but the water flow and velocity increases. In fact, the impervious cover in a typical city creates five times the runoff of a typical woodland of the same size. [ 9 ] [ clarification needed ]
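Where such a multiplier can come from is illustrated by the rational method, a standard drainage-engineering estimate of peak runoff, Q = C i A (runoff coefficient C, rainfall intensity i, drainage area A). The sketch below is illustrative only: the coefficients are typical textbook values, not figures from the cited comparison.

```python
# Illustrative comparison of peak runoff from urban vs. wooded land using
# the rational method Q = C * i * A. The runoff coefficients below are
# typical textbook ranges, chosen only to illustrate a roughly fivefold gap.

def peak_runoff_m3_per_s(c: float, intensity_mm_per_h: float, area_ha: float) -> float:
    """Rational method Q = C * i * A, converted to SI units."""
    intensity_m_per_s = intensity_mm_per_h / 1000 / 3600  # mm/h -> m/s
    area_m2 = area_ha * 10_000                            # ha -> m^2
    return c * intensity_m_per_s * area_m2

storm = 25.0   # rainfall intensity, mm/h
area = 100.0   # catchment area, ha

q_city = peak_runoff_m3_per_s(c=0.85, intensity_mm_per_h=storm, area_ha=area)   # dense urban cover
q_woods = peak_runoff_m3_per_s(c=0.17, intensity_mm_per_h=storm, area_ha=area)  # woodland

print(f"urban: {q_city:.2f} m^3/s, woodland: {q_woods:.2f} m^3/s, "
      f"ratio: {q_city / q_woods:.1f}x")  # -> ratio: 5.0x
```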
Overwatering through irrigation by sprinkler may produce runoff reaching receiving waters during low flow conditions. [ 10 ] Runoff carries accumulated pollutants to streams with unusually low dilution ratios causing higher pollutant concentrations than would be found during regional precipitation events. [ 11 ]
Urban runoff is a major cause of urban flooding , the inundation of land or property in a built-up environment caused by rainfall overwhelming the capacity of drainage systems , such as storm sewers . [ 12 ] Triggered by events such as flash flooding , storm surges , overbank flooding, or snow melts , urban flooding is characterized by its repetitive, costly, and systemic impacts on communities, even when not within floodplains or near any body of water. [ 13 ]
There are several ways in which stormwater enters properties : backup through sewer pipes, toilets and sinks into buildings; seepage through building walls and floors; the accumulation of water on the property and in public rights-of-way; and the overflow of water from water bodies such as rivers and lakes. Where properties are built with basements, urban flooding is the primary cause of basement flooding. [ citation needed ]
Urban runoff contributes to water quality problems. In 2009 the US National Research Council published a comprehensive report on the effects of urban stormwater and stated that it continues to be a major contamination source in many watersheds throughout the United States. [ 14 ] : vii The report explained that "...further declines in water quality remain likely if the land-use changes that typify more diffuse sources of pollution are not addressed... These include land-disturbing agricultural, silvicultural, urban, industrial, and construction activities from which hard-to-monitor pollutants emerge during wet-weather events. Pollution from these landscapes has been almost universally acknowledged as the most pressing challenge to the restoration of waterbodies and aquatic ecosystems nationwide." [ 14 ] : 24
The runoff also increases temperatures in streams, harming fish and other organisms. (A sudden burst of runoff from a rainstorm can cause a fish-killing shock of hot water.) Also, road salt used to melt snow on sidewalks and roadways can contaminate streams and groundwater aquifers . [ 15 ]
One of the most pronounced effects of urban runoff is on watercourses that historically contained little or no water during dry weather periods (often called ephemeral streams ). When an area around such a stream is urbanized , the resultant runoff creates an unnatural year-round streamflow that hurts the vegetation, wildlife and stream bed of the waterway. Containing little or no sediment relative to the historic ratio of sediment to water, urban runoff rushes down the stream channel, ruining natural features such as meanders and sandbars , and creates severe erosion—increasing sediment loads at the mouth while severely carving the stream bed upstream. As an example, on many Southern California beaches at the mouth of a waterway, urban runoff carries trash, pollutants, excessive silt, and other wastes, and can pose moderate to severe health hazards.
Because of the fertilizer and organic waste that urban runoff often carries, eutrophication is common in waterways affected by this type of runoff. After heavy rains, nutrient and organic matter levels in the waterway are high relative to natural levels, spurring the growth of algae blooms . When the blooms die, their decomposition consumes most of the dissolved oxygen in the water, and this oxygen depletion drives further eutrophication. These algae blooms mostly occur in areas of still water, such as stream pools and the pools behind dams , weirs , and some drop structures . Eutrophication usually has deadly consequences for fish and other aquatic organisms.
Excessive stream bank erosion may cause flooding and property damage. For many years governments have often responded to urban stream erosion problems by modifying the streams through construction of hardened embankments and similar control structures using concrete and masonry materials. Use of these hard materials destroys habitat for fish and other animals. [ 16 ] Such a project may stabilize the immediate area where flood damage occurred, but often it simply shifts the problem to an upstream or downstream segment of the stream. [ 17 ] See River engineering .
Polluted urban runoff can harm humans in many different ways, such as by contaminating drinking water, disrupting food sources, and even forcing parts of beaches to be closed off due to a risk of illness. After heavy rainfall events that cause stormwater overflows, contaminated water can reach waterways in which people recreate or fish, closing beaches and suspending water-based activities, because the runoff has likely caused a spike in harmful bacterial growth or inorganic chemical pollution in the water. [ citation needed ] Gasoline and oil spillage are often regarded as the most damaging contaminants, while the impact of fertilizers and insecticides is frequently overlooked. When lawns are watered and fields irrigated, the chemicals with which they have been treated can be washed into the water table. The environments into which these chemicals are introduced suffer as a result, since the chemicals kill native vegetation, invertebrates, and vertebrates. [ citation needed ]
Effective control of urban runoff involves reducing the velocity and flow of stormwater, as well as reducing pollutant discharges. Local governments use a variety of stormwater management techniques to reduce the effects of urban runoff. These techniques, called best management practices for water pollution (BMPs) in some countries, may focus on water quantity control, while others focus on improving water quality, and some perform both functions. [ 18 ]
Pollution prevention practices include low impact development (LID) or green infrastructure techniques - known as Sustainable Drainage Systems (SuDS) in the UK, and Water-Sensitive Urban Design (WSUD) in Australia and the Middle East - such as the installation of green roofs and improved chemical handling (e.g. management of motor fuels & oil, fertilizers, pesticides and roadway deicers ). [ 9 ] [ 19 ] Runoff mitigation systems include infiltration basins , bioretention systems, constructed wetlands , retention basins , and similar devices. [ 20 ] [ 21 ]
Providing effective urban runoff solutions often requires proper city programs that take into account the needs and differences of the community. Factors such as a city's mean temperature, precipitation levels, geographical location, and airborne pollutant levels can all affect rates of pollution in urban runoff and present unique challenges for management. Human factors such as urbanization rates, land use trends, and chosen building materials for impervious surfaces often exacerbate these issues.
The implementation of citywide maintenance strategies such as street sweeping programs can also be an effective way to improve the quality of urban runoff. Street sweeping vacuums collect the dust particles and suspended solids that accumulate in public parking lots and roads and would otherwise end up in runoff. [ 22 ]
Educational programs can also be an effective tool for managing urban runoff. Local businesses and individuals can have an integral role in reducing pollution in urban runoff simply through their practices, but often are unaware of regulations. Creating a productive discussion on urban runoff and the importance of effective disposal of household items can help to encourage environmentally friendly practices at a reduced cost to the city and local economy. [ 23 ]
Thermal pollution from runoff can be controlled by stormwater management facilities that absorb the runoff or direct it into groundwater , such as bioretention systems and infiltration basins. Retention basins , by contrast, tend to be less effective at reducing temperature, as the water may be heated by the sun before being discharged to a receiving stream. [ 18 ] : p. 5–58
Stormwater harvesting deals with the collection of runoff from creeks, gullies, ephemeral streams, and other ground conveyances. Stormwater harvesting projects often have multiple objectives, such as reducing contaminated runoff to sensitive waters, promoting groundwater recharge, and non-potable applications such as toilet flushing and irrigation . [ 24 ]
|
https://en.wikipedia.org/wiki/Urban_runoff
|
An urban stream is a formerly natural waterway that flows through a heavily populated area . Urban streams are often the low-lying points of the landscape that characterize catchment urbanization. [ 1 ] They are often polluted by urban runoff and combined sewer overflows. [ 2 ] Water scarcity makes flow management in the rehabilitation of urban streams problematic. [ 3 ]
Governments may alter the flow or course of an urban stream to prevent localized flooding by river engineering : lining stream beds with concrete or other hardscape materials, diverting the stream into culverts and storm sewers , or other means. Some urban streams, such as the subterranean rivers of London , run completely underground. These modifications have often reduced habitat for fish and other species, caused downstream flooding due to alterations of flood plains , and worsened water quality . [ 4 ]
Toxicants , ionic concentrations, available nutrients , temperature (and light), and dissolved oxygen are key stressors to urban streams. [ 5 ]
Some communities have begun stream restoration projects in an attempt to correct the problems caused by alteration, using techniques such as daylighting and fixing stream bank erosion caused by heavy stormwater runoff. [ 6 ] [ 7 ] Streamflow augmentation to restore habitat and aesthetics is also an option, and recycled water can be used for this purpose. [ 8 ] [ 9 ]
Urban stream syndrome (USS) is the consistently observed ecological degradation of streams caused by urbanization, and it is commonly found in and near urban areas. USS also encompasses hydrogeomorphological changes, characterized by a deeper, wider channel, reduced living space for biota, and altered sediment transport rates. The status of water quality is difficult to assess in urban areas because of the complexity of the pollution sources. [ 10 ] These can include mining and deforestation, but the main cause can be attributed to urban and suburban development, because such land use has a domino effect that can be felt tens of kilometers away. Consistent declines in the ecological health of streams can have many causes, but most can be directly or indirectly attributed to human infrastructure and action. Urban streams tend to be "flashier", meaning they have more frequent and larger high-flow events. [ 2 ] [ 11 ]
Urban streams also suffer from chemical alteration due to pollutants and waste being dumped, untreated, back into rivers and lakes. An example of this is Onondaga Lake . Historically one of the most polluted freshwater lakes in the world, its salinity and toxic constituents like mercury rose to unsafe levels as large corporations began to set up shop around the lake. High levels of salinity are disastrous for native freshwater life, and pollutants like mercury are dangerous to most organisms. [ 12 ]
Higher levels of urbanization typically mean a greater presence of urban stream syndrome. [ 13 ]
Hydrology plays a key role in urban stream syndrome. As urbanization of a stream's surroundings continues, the perviousness of the catchment to precipitation decreases, which reduces infiltration and increases surface runoff . This can cause problems during flood discharges. For example, during similar storms, flood discharges were at least 250% higher in urban catchments than in forested catchments in New York and Texas. [ 14 ]
Many water managers treat USS by directly addressing the symptoms, most commonly through channel reconfiguration that includes reshaping rock to address altered hydrology and sediment regimes. In spite of having ecological objectives, this approach has been criticized for addressing physical failures in the system without improving ecological conditions. [ 15 ]
|
https://en.wikipedia.org/wiki/Urban_stream
|
An urban wild is a remnant of a natural ecosystem found in the midst of an otherwise highly developed urban area . [ 1 ] [ 2 ]
One of the most expansive efforts to protect and foster urban wilds is the aptly titled "Urban Wilds program" conducted in Boston, which began in 1977 following a 1976 report by the Boston Planning & Development Agency (BPDA), formerly the Boston Redevelopment Authority (BRA). [ 3 ] [ 4 ]
Urban wilds, particularly those of several acres or more, are often intact ecological systems that can provide essential ecosystem functions such as filtering urban run-off , storing and slowing the flow of stormwater , ameliorating the warming effect of urban development , and generally benefiting local air quality . [ 1 ] [ 5 ]
Typically, urban wilds are home to native vegetation and animal life as well as some introduced species . [ 6 ] [ 7 ] [ 8 ] Urban wilds are vital to species of migratory birds that have nested in a given area since prior to its urbanization . [ 5 ] [ 7 ] [ 9 ]
Without formal protection, urban wilds are vulnerable to development. However, achieving formal protection of a large urban wild can be difficult. Land tenure of a single ecological area can be complex, with multiple public and private entities owning adjacent properties. [ 10 ] [ 11 ]
Key strategies used in the preservation of urban wilds have included conservation restrictions that keep complex land tenure systems in place while protecting the entire landscape . Public/private partnerships have also been successful in protecting urban wilds. [ 10 ]
The urban wilds prioritized by municipalities tend to be partial wetlands that perform a range of ecological services while contributing to the biological diversity of the region. [ 12 ]
There is some discussion about whether natural areas that are not at an appropriate scale to perform significant ecosystem services should instead be categorized as passive parks as opposed to urban wilds. Smaller urban wilds are used for passive recreation and have less value to the city in terms of enhancing ecosystem function. [ 13 ]
|
https://en.wikipedia.org/wiki/Urban_wild
|
Xenacoelomorpha
Spiralia
Ecdysozoa
Deuterostomes
The urbilaterian (from German ur- 'original') is the hypothetical last common ancestor of the bilaterian clade , i.e., all animals having a bilateral symmetry .
Its appearance is a matter of debate, for no representative has been identified in the fossil record (and perhaps none ever will be). Two reconstructed urbilaterian morphologies can be considered: first, the less complex ancestral form that was the common ancestor of Xenacoelomorpha and Nephrozoa ; and second, the more complex ( coelomate ) urbilaterian ancestral to both protostomes and deuterostomes , sometimes referred to as the "urnephrozoan". Since most protostomes and deuterostomes share features — e.g. nephridia (and the derived kidneys ), through-guts , blood vessels and nerve ganglia — that are useful only in relatively large ( macroscopic ) organisms, their common ancestor ought also to have been macroscopic. However, such large animals should have left traces in the sediment in which they moved, and evidence of such traces first appears relatively late in the fossil record — long after the urbilaterian would have lived. This leads to suggestions of a small urbilaterian (around 1 mm), which is the supposed condition of the ancestor of protostomes, deuterostomes and acoelomorphs .
The first evidence of bilateria in the fossil record comes from trace fossils in sediments towards the end of the Ediacaran period (about 570 million years ago ), and the first fully accepted fossil of a bilaterian organism is Kimberella , dating to 555 million years ago . [ 1 ] There are earlier, controversial fossils: Vernanimalcula has been interpreted as a bilaterian, but may simply represent a fortuitously infilled bubble. [ 2 ] Fossil embryos are known from around the time of Vernanimalcula ( 580 million years ago ), but none of these have bilaterian affinities. [ 3 ] This may reflect a genuine absence of bilateria, but it may instead be because bilateria did not lay their eggs in sediment, where they would have been likely to fossilise. [ 4 ]
Molecular techniques can generate expected dates of the divergence between the bilaterian clades, and thus an assessment of when the urbilaterian lived. These dates have huge margins of error, though they are becoming more accurate with time. More recent estimates are compatible with an Ediacaran bilaterian, although it is possible, especially if early bilaterians were small, that the bilateria had a long cryptic history before they left any evidence in the fossil record. [ 5 ]
Light detection (photosensitivity) is present in organisms as simple as seaweeds ; the definition of a true eye varies, but in general eyes must have directional sensitivity, and thus have screening pigments so only light from the target direction is detected. Thus defined, they need not consist of more than one photoreceptor cell. [ 6 ]
The presence of genetic machinery (the Pax6 and Six genes) common to eye formation in all bilaterians suggests that this machinery - and hence eyes - was present in the urbilaterian. [ 6 ] The most likely candidate eye type is the simple pigment-cup eye , which is the most widespread among the bilateria. [ 6 ]
Since two types of opsin , the c-type and r-type, are found in all bilaterians, the urbilaterian must have possessed both types - although they may not have been found in a centralised eye, but used to synchronise the body clock to daily or lunar variations in lighting. [ 7 ]
Proponents of a complex urbilaterian point to the shared features and genetic machinery common to all bilateria. They argue that (1) since these are similar in so many respects, they could have evolved only once; and (2) since they are common to all bilateria, they must have been present in the ancestral bilaterian animal.
However, as biologists' understanding of the major bilaterian lineages increases, it is beginning to appear that some of these features may have evolved independently in each lineage. Further, the bilaterian clade has recently been expanded to include the acoelomorphs — a group of relatively simple flatworms. This lineage lacks key bilaterian features, and if it truly does reside within the bilaterian "family", many of the features listed above are no longer common to all bilateria. [ 8 ] Instead, some features — such as segmentation and possession of a heart — are restricted to a sub-set of the bilateria, the deuterostomes and protostomes. Their last common ancestor would still have to be large and complex, but the bilaterian ancestor could be much simpler. [ 8 ] However, some scientists stop short of including the acoelomorph clade in the bilateria. This shifts the position of the cladistic node which is being discussed; consequently the urbilaterian in this context is farther out the evolutionary tree and is more derived than the common ancestor of deuterostomes, protostomes and acoelomorphs. [ 9 ]
Genetic reconstructions are unfortunately not much help. They work by considering the genes common to all bilateria, but problems arise because very similar genes can be co-opted for different functions. For instance, the gene Pax6 has a function in eye development, but is absent in some animals with eyes; some cnidaria have genes which in bilateria control the development of a layer of cells that the cnidaria do not have. This means that even if a gene can be identified as present in the urbilaterian, we cannot necessarily tell what the gene's function was. [ 8 ] Before this was realised, genetic reconstructions implied an implausibly complex urbilaterian. [ 5 ]
The evolutionary developmental biologist Lewis Held notes that both centipedes and snakes use the oscillating mechanism based on the Notch signaling pathway to produce segments from the growing tip at the rear of the embryo. Further, both groups make use of "the obtuse process of 'resegmentation', whereby the phase of their metameres shifts by half a unit of wavelength, i.e. somites splitting to make vertebrae or parasegments splitting to form segments." [ 10 ] Held comments that all this makes it difficult to imagine that their urbilaterian common ancestor was not segmented. [ 10 ]
The absence of a fossil record gives a starting point for the reconstruction — the urbilaterian must have been small enough not to leave any traces as it moved over or lived in the sediment surface. This means it must have been well below a centimetre in length. As all Cambrian animals are marine, one can reasonably assume that the urbilaterian was too. [ 8 ]
Furthermore, a reconstruction of the urbilaterian must rest on identifying morphological similarities between all bilateria. While some bilateria live attached to a substrate , this appears to be a secondary adaptation, and the urbilaterian was probably mobile. [ 8 ] Its nervous system was probably dispersed, but with a small central "brain". Since acoelomorphs lack a heart, coelom or organs, the urbilaterian probably did too — it would presumably have been small enough for diffusion to do the job of transporting compounds through the body. [ 8 ] A small, narrow gut was probably present, which would have had only one opening — a combined mouth and anus. [ 8 ] Functional considerations suggest that the surface of the bilaterian was probably covered with cilia , which it could have used for locomotion or feeding. [ 8 ]
As of 2018 [update] there is still no consensus on whether the characteristics of the deuterostomes and protostomes evolved once or many times. Features such as a heart and a blood-circulation system may therefore not have been present even in the deuterostome-protostome ancestor, which would mean that this too could have been small (hence explaining the lack of fossil record). [ 5 ]
It is possible that the common ancestor of all bilaterians looked similar to:
The proposal, advanced by Dewel, that bilaterians arose from the fusion of pennatulacean-like cnidarian zooids implies that the bilaterian body plan originated from a colonial ancestor. [ 12 ]
This proposal has little or no support in the existing data, and it has commonly been used as an argument against sedentary and semi-sedentary models of the urbilaterian as a whole. [ 13 ] [ 14 ]
The recent model by Alexander V. Martynov and Tatiana A. Korshunova revives the idea of a sessile sedentary biphasic ancestor. [ 14 ]
The model holds that the urbilaterian was an organism with a sessile, sedentary adult phase and a free-swimming, pelagic juvenile or larval phase. This hypothesis is a derivative of Nielsen's larval hypothesis, but it now also considers the homology of the adult forms of choanozoans (except Ctenophora [ 15 ] ). Drawing on various phylogenetic, paleontological and molecular data, it relates the urbilaterian to the adult, ancestral form of anthozoans (from which jellyfish , [ 16 ] placozoans , nephrozoans , [ 17 ] and perhaps proarticulates [ 18 ] are derived), itself derived from an ancestral organization shared between choanoflagellates , sponges and parahoxozoans .
On this view, the current strong bias towards a mobile urbilaterian creates problems in reconciling palaeontological and morphological data for groups both within and outside Bilateria.
Under this model, members of Proarticulata are an evolutionary dead end [ 14 ] rather than the ancestors of nephrozoans. It is possible that the cloudinids ( Cloudina , [ 19 ] [ 20 ] Conotubus [ 21 ] and Multiconotubus [ 22 ] ) are basal (and therefore bilaterian) nephrozoans, because, taking the ontogeny of the cloudinids into account, they show considerable similarity to the tubariums of sedentary pterobranchs , as well as to the shells of semi-mobile hyoliths and mobile mollusks . [ 14 ] [ 20 ]
This implies that Cloudinomorpha is not a polyphyletic group, as has been proposed, [ 23 ] but rather a paraphyletic grade from which several taxa derive, taxa that may or may not conserve the ancestral clonality of basal metazoans. Rather than cloudinids having an annelid-type gut, the model posits a U-shaped digestive tube; indeed, a relationship between Cloudina and annelids is denied.
The hypothesis of an annelid-like ancestor is rejected, owing to the independent evolution of segmentation and complete metamerism in several groups of bilaterians ( annelids , panarthropods , chordates and proarticulates ). Instead, the urbilaterian would be an animal with a U-shaped gut and deuterostomic characteristics that hemichordates and lophophorates , among other groups, conserve; a stolon holding the organism inside a tube secreted from the embryonic form as a dome or protoconch ; and a semi-metamerism derived from the formation of mesoderm from the gastrovascular cavity of an anthozoan-like animal. [ 17 ]
This form of urbilaterian: [ 14 ]
The common ancestor of modern bilaterians would then be most similar to modern pterobranchs, although not completely identical to them.
The phylogenetic placement of Ctenophora (the Myriazoa hypothesis ) [ 15 ] should not change the hypothesis, since it is set aside and only the molecular and morphological development of Choanoflagellatea , Porifera and Cnidaria is taken into account.
|
https://en.wikipedia.org/wiki/Urbilaterian
|
Urbit is a decentralized personal server platform [ 4 ] based on functional programming [ 5 ] in a peer-to-peer network . [ 6 ] The Urbit platform was created by neoreactionary political blogger Curtis Yarvin . [ 5 ] The first code release was in 2010. [ 7 ] The Urbit network was launched in 2013. [ 2 ] The first user version (called OS1) was launched in April 2020.
In 2022, the main software in an Urbit installation was a "bare-bones" text-based message board. [ 8 ]
The Point described Urbit OS1 as a "bare-bones messaging server" and compared it to 1990s era Usenet . [ 8 ]
Tlon, the company founded by Yarvin to build Urbit and named after the short story " Tlön, Uqbar, Orbis Tertius " by Jorge Luis Borges , [ 8 ] has received seed funding from various investors since its inception, most notably Peter Thiel , whose Founders Fund , together with venture capital firm Andreessen Horowitz , invested $1.1 million. [ 9 ] The Urbit community talks up its association with and funding from Thiel, who has also backed Urbit public events. [ 10 ] [ 8 ]
The Point estimated Urbit's active user base as of September 2022 at "a few thousand". [ 8 ]
The Urbit software stack consists of a set of programming languages ("Hoon", a high-level functional programming language, and "Nock", its low-level compiled language); a single-function operating system built on those languages ("Arvo"); a runtime implementation of that operating system ("Vere"); a public key infrastructure built on the Ethereum blockchain ("Azimuth") that lets each Urbit instance participate in a decentralized network; and the decentralized network itself, an encrypted, peer-to-peer protocol . [ 11 ] [ non-primary source needed ]
The 128-bit Urbit identity space consists of 256 "galaxies", 65,280 "stars" (255 for each galaxy), 4,294,901,760 "planets" (65,535 for each star), and comets under those. [ 10 ]
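These counts follow from how the identity space partitions names by address width (8-bit galaxies, 16-bit stars, 32-bit planets). The following back-of-the-envelope sketch, which is not Urbit source code, reproduces the arithmetic:

```python
# Urbit address-space arithmetic (illustrative only, not Urbit source code).
# Galaxies occupy 8-bit names, stars 16-bit names, planets 32-bit names;
# comets live in the remaining 128-bit identity space.

galaxies = 2**8              # 256
stars = 2**16 - 2**8         # 65,280 (16-bit names minus the galaxies)
planets = 2**32 - 2**16      # 4,294,901,760 (32-bit names minus shorter names)

stars_per_galaxy = stars // galaxies   # 255
planets_per_star = 2**16 - 1           # 65,535: each 16-bit parent sponsors
                                       # 2**16 32-bit children, minus itself

assert (galaxies, stars, planets) == (256, 65_280, 4_294_901_760)
assert (stars_per_galaxy, planets_per_star) == (255, 65_535)
```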
Yarvin called Urbit "functional programming from scratch" in 2010. [ 5 ] The Register described Urbit as having "reinvented some very Lisp -like technology". [ 12 ] Reason described Urbit as "complicated for even the most seasoned of functional programmers". [ 13 ]
In 2015, Yarvin's invitation to discuss Urbit at the Strange Loop programming conference was rescinded; the conference organizer said Yarvin's "mere inclusion and/or presence would overshadow the content of his talk". [ 14 ]
In 2016, after Yarvin was invited to the functional programming conference LambdaConf to discuss Urbit, five speakers and three sponsors withdrew their participation. Their stated reasons were Yarvin's claim that white people are genetically endowed with higher IQs than black people and his support of slavery. [ 15 ]
The source code and design sketches for the project alluded to some of Yarvin's views, including initially classifying users as "lords", "dukes", and "earls". Yarvin described this structure of Urbit in 2010 as "digital feudalism ". [ 8 ] [ 16 ]
In a 2019 blog post, Yarvin said Urbit "is not designed as a political structure". [ 17 ] Josh Lehman, Executive Director of the Urbit Foundation, denied in 2022 that Urbit was "digital feudalism". [ 10 ]
Andrea O'Sullivan of libertarian magazine Reason described Urbit in 2016 as having a "libertarian vision". [ 13 ]
Yarvin departed Tlon in 2019. Lehman said that the "hardest part" of his work at Tlon had been to distance Urbit from Yarvin. [ 10 ] Yarvin returned to Urbit in 2024. [ 18 ]
In April 2024, the Urbit Foundation board fired Lehman, and Yarvin returned to a leadership role at Urbit with no formal title. [ citation needed ]
|
https://en.wikipedia.org/wiki/Urbit
|
In astroparticle physics , an Urca process is a reaction which emits a neutrino and which is assumed to take part in cooling processes in neutron stars and white dwarfs . The process was first discussed by George Gamow and Mário Schenberg while they were visiting a casino named Cassino da Urca in Urca , Rio de Janeiro . As Gamow recounts in his autobiography, the name was chosen in part to commemorate the gambling establishment where the two physicists had first met, and "partially because the Urca Process results in a rapid disappearance of thermal energy from the interior of a star, similar to the rapid disappearance of money from the pockets of the gamblers on the Casino de Urca." [ 1 ] In Gamow's South Russian dialect, urca ( Russian : урка ) can also mean a robber or gangster. [ 2 ] [ 3 ]
The direct Urca processes are the simplest neutrino-emitting processes and are thought to be central in the cooling of neutron stars. They have the general form
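$$B_1 \rightarrow B_2 + \ell + \bar{\nu}_\ell \qquad\qquad B_2 + \ell \rightarrow B_1 + \nu_\ell$$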
where B 1 and B 2 are baryons , ℓ is a lepton , and ν l (and ν l ) are (anti-) neutrinos . The baryons can be nucleons (free or bound), hyperons like Λ , Σ and Ξ , or members of the Δ isobar . The lepton is either an electron or a muon .
The Urca process is especially important in the cooling of white dwarfs, where a lepton (usually an electron) is absorbed by the nucleus of an ion and then convectively carried away from the core of a star. Then, a beta decay occurs. Convection then carries the element back into the interior of the star, and the cycle repeats many times. Because the neutrinos emitted during this process are unlikely to be reabsorbed, this is effectively a cooling mechanism for white dwarfs. [ 4 ]
The process can also be essential in the cooling of neutron stars. If a neutron star contains a central core in which the direct Urca-process is operative, the cooling timescale shortens by many orders of magnitude. [ 5 ]
|
https://en.wikipedia.org/wiki/Urca_process
|
Urea , also called carbamide (because it is a diamide of carbonic acid ), is an organic compound with chemical formula CO(NH 2 ) 2 . This amide has two amino groups (– NH 2 ) joined by a carbonyl functional group (–C(=O)–). It is thus the simplest amide of carbamic acid . [ 6 ]
Urea serves an important role in the cellular metabolism of nitrogen -containing compounds by animals and is the main nitrogen-containing substance in the urine of mammals . Urea is Neo-Latin , from French urée , from Ancient Greek οὖρον ( oûron ) ' urine ' , itself from Proto-Indo-European *h₂worsom .
It is a colorless, odorless solid, highly soluble in water, and practically non-toxic ( LD 50 is 15 g/kg for rats). [ 7 ] Dissolved in water, it is neither acidic nor alkaline . The body uses it in many processes, most notably nitrogen excretion . The liver forms it by combining two ammonia molecules ( NH 3 ) with a carbon dioxide ( CO 2 ) molecule in the urea cycle . Urea is widely used in fertilizers as a source of nitrogen (N) and is an important raw material for the chemical industry .
In 1828, Friedrich Wöhler discovered that urea can be produced from inorganic starting materials, which was an important conceptual milestone in chemistry. This showed for the first time that a substance previously known only as a byproduct of life could be synthesized in the laboratory without biological starting materials, thereby contradicting the widely held doctrine of vitalism , which stated that only living organisms could produce the chemicals of life.
The structure of the molecule of urea is O=C(−NH 2 ) 2 . The urea molecule is planar when in a solid crystal because of sp 2 hybridization of the N orbitals. [ 8 ] [ 9 ] It is non-planar with C 2 symmetry when in the gas phase [ 10 ] or in aqueous solution, [ 9 ] with C–N–H and H–N–H bond angles that are intermediate between the trigonal planar angle of 120° and the tetrahedral angle of 109.5°. In solid urea, the oxygen center is engaged in two N–H–O hydrogen bonds . The resulting hydrogen-bond network is probably established at the cost of efficient molecular packing: The structure is quite open, the ribbons forming tunnels with square cross-section. The carbon in urea is described as sp 2 hybridized, the C-N bonds have significant double bond character, and the carbonyl oxygen is relatively basic. Urea's high aqueous solubility reflects its ability to engage in extensive hydrogen bonding with water.
By virtue of its tendency to form porous frameworks, urea has the ability to trap many organic compounds. In these so-called clathrates , the organic "guest" molecules are held in channels formed by interpenetrating helices composed of hydrogen-bonded urea molecules. In this way, urea-clathrates have been well investigated for separations. [ 11 ]
Urea is a weak base, with a p K b of 13.9. [ 5 ] When combined with strong acids, it undergoes protonation at oxygen to form uronium salts. [ 13 ] [ 14 ] It is also a Lewis base , forming metal complexes of the type [M(urea) 6 ] n + . [ 15 ]
Urea reacts with malonic esters to make barbituric acids .
Molten urea decomposes into ammonium cyanate at about 152 °C, and into ammonia and isocyanic acid above 160 °C: [ 16 ]
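$$\mathrm{CO(NH_2)_2 \longrightarrow [NH_4]^+[OCN]^-} \qquad (\text{at about } 152\ ^\circ\mathrm{C})$$
$$\mathrm{CO(NH_2)_2 \longrightarrow NH_3 + HNCO} \qquad (\text{above } 160\ ^\circ\mathrm{C})$$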
Heating above 160 °C yields biuret NH 2 CONHCONH 2 and triuret NH 2 CONHCONHCONH 2 via reaction with isocyanic acid: [ 17 ] [ 16 ]
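$$\mathrm{CO(NH_2)_2 + HNCO \longrightarrow NH_2CONHCONH_2}$$
$$\mathrm{NH_2CONHCONH_2 + HNCO \longrightarrow NH_2CONHCONHCONH_2}$$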
At higher temperatures it converts to a range of condensation products , including cyanuric acid (CNOH) 3 , guanidine HNC(NH 2 ) 2 , and melamine . [ 17 ] [ 16 ]
In aqueous solution, urea slowly equilibrates with ammonium cyanate. This elimination reaction [ 18 ] cogenerates isocyanic acid , which can carbamylate proteins, in particular the N-terminal amino group, the side chain amino of lysine , and to a lesser extent the side chains of arginine and cysteine . [ 19 ] [ 20 ] Each carbamylation event adds 43 daltons to the mass of the protein, which can be observed in protein mass spectrometry . [ 20 ] For this reason, pure urea solutions should be freshly prepared and used, as aged solutions may develop a significant concentration of cyanate (20 mM in 8 M urea). [ 20 ] Dissolving urea in ultrapure water followed by removing ions (i.e. cyanate) with a mixed-bed ion-exchange resin and storing that solution at 4 °C is a recommended preparation procedure. [ 21 ] However, cyanate will build back up to significant levels within a few days. [ 20 ] Alternatively, adding 25–50 mM ammonium chloride to a concentrated urea solution decreases formation of cyanate because of the common ion effect . [ 20 ] [ 22 ]
Urea is readily quantified by a number of different methods, such as the diacetyl monoxime colorimetric method, and the Berthelot reaction (after initial conversion of urea to ammonia via urease). These methods are amenable to high throughput instrumentation, such as automated flow injection analyzers [ 23 ] and 96-well micro-plate spectrophotometers. [ 24 ]
Ureas is the collective name for a class of chemical compounds that share the same functional group: a carbonyl group attached to two organic amine residues, R 1 R 2 N−C(=O)−NR 3 R 4 , where the R 1 , R 2 , R 3 and R 4 groups are hydrogen (–H), organyl or other groups. Examples include carbamide peroxide , allantoin , and hydantoin . Ureas are closely related to biurets and related in structure to amides , carbamates , carbodiimides , and thiocarbamides .
More than 90% of world industrial production of urea is destined for use as a nitrogen-release fertilizer . [ 17 ] Urea has the highest nitrogen content of all solid nitrogenous fertilizers in common use. Therefore, it has a low transportation cost per unit of nitrogen nutrient . The most common impurity of synthetic urea is biuret , which impairs plant growth. Urea breaks down in the soil to give ammonium ions ( NH + 4 ). The ammonium is taken up by the plant through its roots. In some soils, the ammonium is oxidized by bacteria to give nitrate ( NO − 3 ), which is also a nitrogen-rich plant nutrient. The loss of nitrogenous compounds to the atmosphere and runoff is wasteful and environmentally damaging so urea is sometimes modified to enhance the efficiency of its agricultural use. Techniques to make controlled-release fertilizers that slow the release of nitrogen include the encapsulation of urea in an inert sealant, and conversion of urea into derivatives such as urea-formaldehyde compounds, which degrade into ammonia at a pace matching plants' nutritional requirements.
Urea is a raw material for the manufacture of formaldehyde based resins , such as UF, MUF, and MUPF, used mainly in wood-based panels, for instance, particleboard , fiberboard , OSB, and plywood . [ 25 ]
Urea can be used in a reaction with nitric acid to make urea nitrate , a high explosive that is used industrially and as part of some improvised explosive devices .
Urea is used in Selective Non-Catalytic Reduction (SNCR) and Selective Catalytic Reduction (SCR) reactions to reduce the NO x pollutants in exhaust gases from combustion from diesel , dual fuel, and lean-burn natural gas engines. The BlueTec system, for example, injects a water-based urea solution into the exhaust system. Ammonia ( NH 3 ) produced by the hydrolysis of urea reacts with nitrogen oxides ( NO x ) and is converted into nitrogen gas ( N 2 ) and water within the catalytic converter. The conversion of noxious NO x to innocuous N 2 is described by the following simplified global equation: [ 26 ]
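$$\mathrm{4\,NO + 4\,NH_3 + O_2 \longrightarrow 4\,N_2 + 6\,H_2O}$$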
When urea is used, a pre-reaction (hydrolysis) occurs to first convert it to ammonia:
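$$\mathrm{CO(NH_2)_2 + H_2O \longrightarrow 2\,NH_3 + CO_2}$$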
Being a solid highly soluble in water (545 g/L at 25 °C), [ 2 ] urea is much easier and safer to handle and store than the more irritant , caustic and hazardous ammonia ( NH 3 ), so it is the reactant of choice. Trucks and cars using these catalytic converters need to carry a supply of diesel exhaust fluid , also sold as AdBlue , a solution of urea in water.
Urea in concentrations up to 10 M is a powerful protein denaturant as it disrupts the noncovalent bonds in the proteins. This property can be exploited to increase the solubility of some proteins. A mixture of urea and choline chloride is used as a deep eutectic solvent (DES), a substance similar to ionic liquid . When used in a deep eutectic solvent, urea gradually denatures the proteins that are solubilized. [ 27 ]
Urea in concentrations up to 8 M can be used to make fixed brain tissue transparent to visible light while still preserving fluorescent signals from labeled cells. This allows for much deeper imaging of neuronal processes than previously obtainable using conventional one photon or two photon confocal microscopes. [ 28 ]
Urea-containing creams are used as topical dermatological products to promote rehydration of the skin . Urea 40% is indicated for psoriasis , xerosis , onychomycosis , ichthyosis , eczema , keratosis , keratoderma , corns, and calluses . If covered by an occlusive dressing , 40% urea preparations may also be used for nonsurgical debridement of nails . Urea 40% "dissolves the intercellular matrix" [ 29 ] [ 30 ] of the nail plate. Only diseased or dystrophic nails are removed, as there is no effect on healthy portions of the nail. [ 31 ] This drug (as carbamide peroxide ) is also used as an earwax removal aid. [ 32 ]
Urea has also been studied as a diuretic . It was first used by Dr. W. Friedrich in 1892. [ 33 ] In a 2010 study of ICU patients, urea was used to treat euvolemic hyponatremia and was found safe, inexpensive, and simple. [ 34 ]
Like saline , urea has been injected into the uterus to induce abortion , although this method is no longer in widespread use. [ 35 ]
The blood urea nitrogen (BUN) test is a measure of the amount of nitrogen in the blood that comes from urea. It is used as a marker of renal function , though it is inferior to other markers such as creatinine because blood urea levels are influenced by other factors such as diet, dehydration, [ 36 ] and liver function.
Urea has also been studied as an excipient in drug-coated balloon (DCB) coating formulations to enhance local drug delivery to stenotic blood vessels. [ 37 ] [ 38 ] Urea, when used as an excipient in small doses (~3 μg/mm 2 ) to coat DCB surface was found to form crystals that increase drug transfer without adverse toxic effects on vascular endothelial cells . [ 39 ]
Urea labeled with carbon-14 or carbon-13 is used in the urea breath test , which is used to detect the presence of the bacterium Helicobacter pylori ( H. pylori ) in the stomach and duodenum of humans, associated with peptic ulcers . The test detects the characteristic enzyme urease , produced by H. pylori , by a reaction that produces ammonia from urea. This increases the pH (reduces the acidity) of the stomach environment around the bacteria. Similar bacteria species to H. pylori can be identified by the same test in animals such as apes , dogs , and cats (including big cats ).
Amino acids from ingested food (or produced from catabolism of muscle protein) that are used for the synthesis of proteins and other biological substances can be oxidized by the body as an alternative source of energy, yielding urea and carbon dioxide . [ 47 ] The oxidation pathway starts with the removal of the amino group by a transaminase ; the amino group is then fed into the urea cycle . The first step in the conversion of amino acids into metabolic waste in the liver is removal of the alpha-amino nitrogen, which produces ammonia . Because ammonia is toxic, it is excreted immediately by fish, converted into uric acid by birds, and converted into urea by mammals. [ 48 ]
Ammonia ( NH 3 ) is a common byproduct of the metabolism of nitrogenous compounds. Ammonia is smaller, more volatile, and more mobile than urea. If allowed to accumulate, ammonia would raise the pH in cells to toxic levels. Therefore, many organisms convert ammonia to urea, even though this synthesis has a net energy cost. Being practically neutral and highly soluble in water, urea is a safe vehicle for the body to transport and excrete excess nitrogen.
Urea is synthesized in the body of many organisms as part of the urea cycle , either from the oxidation of amino acids or from ammonia . In this cycle, amino groups donated by ammonia and L - aspartate are converted to urea, while L - ornithine , citrulline , L - argininosuccinate , and L - arginine act as intermediates. Urea production occurs in the liver and is regulated by N -acetylglutamate . Urea is then dissolved into the blood (in the reference range of 2.5 to 6.7 mmol/L) and further transported and excreted by the kidney as a component of urine . In addition, a small amount of urea is excreted (along with sodium chloride and water) in sweat .
In water, the amine groups undergo slow displacement by water molecules, producing ammonia, ammonium ions , and bicarbonate ions . For this reason, old, stale urine has a stronger odor than fresh urine.
The cycling and excretion of urea by the kidneys is a vital part of mammalian metabolism. Besides its role as carrier of waste nitrogen, urea also plays a role in the countercurrent exchange system of the nephrons , which allows for reabsorption of water and critical ions from the excreted urine . Urea is reabsorbed in the inner medullary collecting ducts of the nephrons, [ 49 ] thus raising the osmolarity in the medullary interstitium surrounding the thin descending limb of the loop of Henle , which drives the reabsorption of water.
By action of the urea transporter 2 , some of this reabsorbed urea eventually flows back into the thin descending limb of the tubule, [ 50 ] through the collecting ducts, and into the excreted urine. The body uses this mechanism, which is controlled by the antidiuretic hormone , to create hyperosmotic urine — i.e., urine with a higher concentration of dissolved substances than the blood plasma . This mechanism is important to prevent the loss of water, maintain blood pressure , and maintain a suitable concentration of sodium ions in the blood plasma.
The equivalent nitrogen content (in grams ) of urea (in mmol ) can be estimated by the conversion factor 0.028 g/mmol. [ 51 ] Furthermore, 1 gram of nitrogen is roughly equivalent to 6.25 grams of protein , and 1 gram of protein is roughly equivalent to 5 grams of muscle tissue. In situations such as muscle wasting , 1 mmol of excessive urea in the urine (as measured by urine volume in litres multiplied by urea concentration in mmol/L) roughly corresponds to a muscle loss of 0.67 gram.
In aquatic organisms the most common form of nitrogen waste is ammonia, whereas land-dwelling organisms convert the toxic ammonia to either urea or uric acid . Urea is found in the urine of mammals and amphibians , as well as some fish. Birds and saurian reptiles have a different form of nitrogen metabolism that requires less water, and leads to nitrogen excretion in the form of uric acid. Tadpoles excrete ammonia, but shift to urea production during metamorphosis . Despite the generalization above, the urea pathway has been documented not only in mammals and amphibians, but in many other organisms as well, including birds, invertebrates , insects, plants, yeast , fungi , and even microorganisms . [ 52 ]
Urea can be irritating to skin, eyes, and the respiratory tract. Repeated or prolonged contact with urea in fertilizer form on the skin may cause dermatitis . [ 53 ]
High concentrations in the blood can be damaging. Ingestion of low concentrations of urea, such as those found in typical human urine , is not dangerous if additional water is ingested within a reasonable time-frame. Many animals (e.g. camels , rodents or dogs) have much more concentrated urine, which may contain a higher amount of urea than normal human urine.
Urea can cause algal blooms to produce toxins, and its presence in the runoff from fertilized land may play a role in the increase of toxic blooms. [ 54 ]
The substance decomposes on heating above melting point, producing toxic gases, and reacts violently with strong oxidants, nitrites, inorganic chlorides, chlorites and perchlorates, causing fire and explosion. [ 55 ]
Urea was first discovered in urine in 1727 by the Dutch scientist Herman Boerhaave , [ 56 ] although this discovery is often attributed to the French chemist Hilaire Rouelle as well as William Cruickshank . [ 57 ]
Boerhaave used the following steps to isolate urea: [ 58 ] [ 59 ]
In 1828, the German chemist Friedrich Wöhler obtained urea artificially by treating silver cyanate with ammonium chloride . [ 60 ] [ 61 ] [ 62 ]
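$$\mathrm{AgNCO + NH_4Cl \longrightarrow CO(NH_2)_2 + AgCl}$$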
This was the first time an organic compound was artificially synthesized from inorganic starting materials, without the involvement of living organisms. The results of this experiment implicitly discredited vitalism , the theory that the chemicals of living organisms are fundamentally different from those of inanimate matter. This insight was important for the development of organic chemistry . His discovery prompted Wöhler to write triumphantly to Jöns Jakob Berzelius :
I must tell you that I can make urea without the use of kidneys, either man or dog. Ammonium cyanate is urea.
In fact, his second sentence was incorrect. Ammonium cyanate [NH 4 ] + [OCN] − and urea CO(NH 2 ) 2 are two different chemicals with the same empirical formula CON 2 H 4 , which are in chemical equilibrium heavily favoring urea under standard conditions . [ 63 ] Regardless, with his discovery, Wöhler secured a place among the pioneers of organic chemistry.
Uremic frost was first described in 1865 by Harald Hirschsprung , who in 1870 became the first Danish pediatrician and who in 1886 described the disease that carries his name. Uremic frost has become rare since the advent of dialysis . It is the classical pre-dialysis-era description of crystallized urea deposits on the skin of patients with prolonged kidney failure and severe uremia. [ 64 ]
Urea was first noticed by Herman Boerhaave in the early 18th century in evaporated urine. In 1773, Hilaire Rouelle obtained crystals containing urea from human urine by evaporating it and treating it with alcohol in successive filtrations. [ 65 ] This method was aided by Carl Wilhelm Scheele 's discovery that urine treated with concentrated nitric acid precipitated crystals. Antoine François, comte de Fourcroy and Louis Nicolas Vauquelin discovered in 1799 that the nitrated crystals were identical to Rouelle's substance and invented the term "urea." [ 66 ] [ 67 ] Berzelius made further improvements to its purification [ 68 ] and finally William Prout , in 1817, succeeded in obtaining and determining the chemical composition of the pure substance. [ 69 ] In the procedure as it evolved, urea was precipitated as urea nitrate by adding strong nitric acid to urine. To purify the resulting crystals, they were dissolved in boiling water with charcoal and filtered. After cooling, pure crystals of urea nitrate form. To reconstitute the urea from the nitrate, the crystals are dissolved in warm water, and barium carbonate is added. The water is then evaporated and anhydrous alcohol added to extract the urea. This solution is drained off and evaporated, leaving pure urea.
Ureas in the more general sense can be accessed in the laboratory by reaction of phosgene with primary or secondary amines :
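$$\mathrm{COCl_2 + 4\,RNH_2 \longrightarrow (RNH)_2CO + 2\,[RNH_3]Cl}$$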
These reactions proceed through an isocyanate intermediate. Non-symmetric ureas can be accessed by the reaction of primary or secondary amines with an isocyanate.
Urea can also be produced by heating ammonium cyanate to 60 °C.
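$$\mathrm{[NH_4]^+[OCN]^- \longrightarrow CO(NH_2)_2}$$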
In 2020, worldwide production capacity was approximately 180 million tonnes. [ 70 ]
For use in industry, urea is produced from synthetic ammonia and carbon dioxide . As large quantities of carbon dioxide are produced during the ammonia manufacturing process as a byproduct of burning hydrocarbons to generate heat (predominantly natural gas, and less often petroleum derivatives or coal), urea production plants are almost always located adjacent to the site where the ammonia is manufactured.
The basic process, patented in 1922, is called the Bosch–Meiser urea process after its discoverers Carl Bosch and Wilhelm Meiser. [ 71 ] The process consists of two main equilibrium reactions , with incomplete conversion of the reactants. The first is carbamate formation : the fast exothermic reaction of liquid ammonia with gaseous carbon dioxide ( CO 2 ) at high temperature and pressure to form ammonium carbamate ( [NH 4 ] + [NH 2 COO] − ): [ 17 ]
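$$\mathrm{2\,NH_3 + CO_2 \rightleftharpoons [NH_4]^+[NH_2COO]^-}$$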
The second is urea conversion : the slower endothermic decomposition of ammonium carbamate into urea and water:
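$$\mathrm{[NH_4]^+[NH_2COO]^- \rightleftharpoons CO(NH_2)_2 + H_2O}$$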
The overall conversion of NH 3 and CO 2 to urea is exothermic, with the reaction heat from the first reaction driving the second. The conditions that favor urea formation (high temperature) have an unfavorable effect on the carbamate formation equilibrium. The process conditions are a compromise: the ill-effect on the first reaction of the high temperature (around 190 °C) needed for the second is compensated for by conducting the process under high pressure (140–175 bar), which favors the first reaction. Although it is necessary to compress gaseous carbon dioxide to this pressure, the ammonia is available from the ammonia production plant in liquid form, which can be pumped into the system much more economically. To allow the slow urea formation reaction time to reach equilibrium, a large reaction space is needed, so the synthesis reactor in a large urea plant tends to be a massive pressure vessel.
Because the urea conversion is incomplete, the urea must be separated from the unconverted reactants, including the ammonium carbamate. Various commercial urea processes are characterized by the conditions under which urea forms and the way that unconverted reactants are further processed.
In early "straight-through" urea plants, reactant recovery (the first step in "recycling") was done by letting down the system pressure to atmospheric to let the carbamate decompose back to ammonia and carbon dioxide. Originally, because it was not economic to recompress the ammonia and carbon dioxide for recycle, the ammonia at least would be used for the manufacture of other products such as ammonium nitrate or ammonium sulfate , and the carbon dioxide was usually wasted. Later process schemes made recycling unused ammonia and carbon dioxide practical. This was accomplished by the "total recycle process", developed in the 1940s to 1960s and now called the "conventional recycle process". It proceeds by depressurizing the reaction solution in stages (first to 18–25 bar and then to 2–5 bar) and passing it at each stage through a steam-heated carbamate decomposer , then recombining the resulting carbon dioxide and ammonia in a falling-film carbamate condenser and pumping the carbamate solution back into the urea reaction vessel. [ 17 ]
The "conventional recycle process" for recovering and reusing the reactants has largely been supplanted by a stripping process, developed in the early 1960s by Stamicarbon in The Netherlands, that operates at or near the full pressure of the reaction vessel. It reduces the complexity of the multi-stage recycle scheme, and it reduces the amount of water recycled in the carbamate solution, which has an adverse effect on the equilibrium in the urea conversion reaction and thus on overall plant efficiency. Effectively all new urea plants use the stripper, and many total recycle urea plants have converted to a stripping process. [ 17 ] [ 73 ]
In the conventional recycle processes, carbamate decomposition is promoted by reducing the overall pressure, which reduces the partial pressure of both ammonia and carbon dioxide, allowing these gasses to be separated from the urea product solution. The stripping process achieves a similar effect without lowering the overall pressure, by suppressing the partial pressure of just one of the reactants in order to promote carbamate decomposition. Instead of feeding carbon dioxide gas directly to the urea synthesis reactor with the ammonia, as in the conventional process, the stripping process first routes the carbon dioxide through the stripper. The stripper is a carbamate decomposer that provides a large amount of gas-liquid contact. This flushes out free ammonia, reducing its partial pressure over the liquid surface and carrying it directly to a carbamate condenser (also under full system pressure). From there, reconstituted ammonium carbamate liquor is passed to the urea production reactor. That eliminates the medium-pressure stage of the conventional recycle process. [ 17 ] [ 73 ]
The three main side reactions that produce impurities have in common that they decompose urea.
Urea hydrolyzes back to ammonium carbamate in the hottest stages of the synthesis plant, especially in the stripper, so residence times in these stages are designed to be short. [ 17 ]
Biuret is formed when two molecules of urea combine with the loss of a molecule of ammonia:

2 (NH2)2CO → H2N–CO–NH–CO–NH2 (biuret) + NH3
Normally this reaction is suppressed in the synthesis reactor by maintaining an excess of ammonia, but after the stripper, it occurs until the temperature is reduced. [ 17 ] Biuret is undesirable in urea fertilizer because it is toxic to crop plants to varying degrees, [ 74 ] but it is sometimes desirable as a nitrogen source when used in animal feed. [ 75 ]
Isocyanic acid (HNCO) and ammonia (NH3) result from the thermal decomposition of ammonium cyanate, [NH4]+[OCN]−, which is in chemical equilibrium with urea:

(NH2)2CO ⇌ [NH4]+[OCN]− ⇌ NH3 + HNCO
This decomposition is at its worst when the urea solution is heated at low pressure, which happens when the solution is concentrated for prilling or granulation (see below). The reaction products mostly volatilize into the overhead vapours, and recombine when these condense to form urea again, which contaminates the process condensate. [ 17 ]
Ammonium carbamate solutions are highly corrosive to metallic construction materials – even to resistant forms of stainless steel – especially in the hottest parts of the synthesis plant such as the stripper. Historically corrosion has been minimized (although not eliminated) by continuous injection of a small amount of oxygen (as air) into the plant to establish and maintain a passive oxide layer on exposed stainless steel surfaces. Highly corrosion resistant materials have been introduced to reduce the need for passivation oxygen, such as specialized duplex stainless steels in the 1990s, and zirconium or zirconium-clad titanium tubing in the 2000s. [ 17 ]
Urea can be produced in solid forms ( prills , granules , pellets or crystals) or as solutions.
For its main use as a fertilizer urea is mostly marketed in solid form, either as prills or granules. Prills are solidified droplets, whose production predates satisfactory urea granulation processes. Prills can be produced more cheaply than granules, but the limited size of prills (up to about 2.1 mm in diameter), their low crushing strength, and the caking or crushing of prills during bulk storage and handling make them inferior to granules. Granules are produced by accretion onto urea seed particles by spraying liquid urea in a succession of layers. Formaldehyde is added during the production of both prills and granules in order to increase crushing strength and suppress caking. Other shaping techniques such as pastillization (depositing uniform-sized liquid droplets onto a cooling conveyor belt) are also used. [17]
Solutions of urea and ammonium nitrate in water (UAN) are commonly used as a liquid fertilizer. In admixture, the combined solubility of ammonium nitrate and urea is so much higher than that of either component alone that it gives a stable solution with a total nitrogen content (32%) approaching that of solid ammonium nitrate (33.5%), though not, of course, that of urea itself (46%). UAN allows use of ammonium nitrate without the explosion hazard. [ 17 ] UAN accounts for 80% of the liquid fertilizers in the US. [ 76 ]
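A rough arithmetic check of these nitrogen figures, sketched in Python. The blend composition below is assumed for illustration only, not a commercial specification; the nitrogen contents are the values quoted above.

```python
# Assumed UAN-type blend: mass fractions of ammonium nitrate, urea, water.
N_AN, N_UREA = 0.335, 0.46  # N mass fractions quoted in the text

def total_nitrogen(frac_an: float, frac_urea: float) -> float:
    """Total N mass fraction of an ammonium nitrate / urea / water solution."""
    return frac_an * N_AN + frac_urea * N_UREA

# e.g. ~44% AN + ~35% urea + ~21% water gives ~31% N,
# close to the ~32% N of commercial UAN solutions.
print(total_nitrogen(0.44, 0.35))  # ~0.31
```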
|
https://en.wikipedia.org/wiki/Urea
|
In medicine , the urea-to-creatinine ratio ( UCR [1] ), known in the United States as the BUN-to-creatinine ratio , is the ratio of the blood levels of urea ( BUN ) (mmol/L) and creatinine (Cr) (μmol/L). Because BUN reflects only the nitrogen content of urea (molecular weight 28) while a urea measurement reflects the whole molecule (molecular weight 60), the urea value is just over twice the BUN value (60/28 = 2.14). In the United States, both quantities are given in mg/dL. The ratio may be used to determine the cause of acute kidney injury or dehydration .
The principle behind this ratio is the fact that both urea (BUN) and creatinine are freely filtered by the glomerulus ; however, urea reabsorbed by the renal tubules can be regulated (increased or decreased) whereas creatinine reabsorption remains the same (minimal reabsorption).
Urea and creatinine are nitrogenous end products of metabolism. [ 2 ] Urea is the primary metabolite derived from dietary protein and tissue protein turnover. Creatinine is the product of muscle creatine catabolism. Both are relatively small molecules (60 and 113 daltons, respectively) that distribute throughout total body water. In Europe, the whole urea molecule is assayed, whereas in the United States only the nitrogen component of urea (the blood or serum urea nitrogen, i.e., BUN or SUN) is measured. The BUN, then, is roughly one-half (7/15 or 0.466) of the blood urea. [ citation needed ]
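The conversions above lend themselves to a short worked example. The following Python sketch converts a US-style measurement to SI units; the function names are ours and the sample values are illustrative, since reference intervals vary between laboratories.

```python
UREA_N_MASS = 28.0      # g of nitrogen per mol of urea (2 x 14)
CREATININE_MW = 113.12  # g/mol

def bun_to_urea_mmol_l(bun_mg_dl: float) -> float:
    """Convert US-style BUN (mg/dL of urea nitrogen) to urea in mmol/L."""
    return bun_mg_dl * 10.0 / UREA_N_MASS   # mg/dL -> mg/L, then / (g/mol)

def creatinine_to_umol_l(cr_mg_dl: float) -> float:
    """Convert creatinine from mg/dL to micromol/L."""
    return cr_mg_dl * 10.0 / CREATININE_MW * 1000.0

bun, cr = 15.0, 1.0                 # mg/dL; a plausible normal pair
print(bun / cr)                     # US-style BUN:Cr ratio -> 15.0
print(bun_to_urea_mmol_l(bun))      # ~5.4 mmol/L urea
print(creatinine_to_umol_l(cr))     # ~88 umol/L creatinine
```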
The normal range of urea nitrogen in blood or serum is 5 to 20 mg/dl, or 1.8 to 7.1 mmol urea per liter. The range is wide because of normal variations due to protein intake, endogenous protein catabolism, state of hydration, hepatic urea synthesis, and renal urea excretion. A BUN of 15 mg/dl would represent significantly impaired function for a woman in the thirtieth week of gestation. Her higher glomerular filtration rate (GFR), expanded extracellular fluid volume, and anabolism in the developing fetus contribute to her relatively low BUN of 5 to 7 mg/dl. In contrast, the rugged rancher who eats in excess of 125 g protein each day may have a normal BUN of 20 mg/dl.
The normal serum creatinine (sCr) varies with the subject's body muscle mass and with the technique used to measure it. For the adult male, the normal range is 0.6 to 1.2 mg/dl, or 53 to 106 μmol/L by the kinetic or enzymatic method, and 0.8 to 1.5 mg/dl, or 70 to 133 μmol/L by the older manual Jaffé reaction. For the adult female, with her generally lower muscle mass, the normal range is 0.5 to 1.1 mg/dl, or 44 to 97 μmol/L by the enzymatic method.
Multiple methods for analysis of BUN and creatinine have evolved over the years. Most of those in current use are automated and give clinically reliable and reproducible results.
There are two general methods for the measurement of urea nitrogen. The diacetyl, or Fearon, reaction develops a yellow chromogen with urea, and this is quantified by photometry. It has been modified for use in autoanalyzers and generally gives relatively accurate results. It still has limited specificity, however, as illustrated by spurious elevations with sulfonylurea compounds, and by colorimetric interference from hemoglobin when whole blood is used.
In the more specific enzymatic methods, the enzyme urease converts urea to ammonia and carbonic acid. These products, which are proportional to the concentration of urea in the sample, are assayed in a variety of systems, some of which are automated. One system checks the decrease in absorbance at 340 nm when the ammonia reacts with alpha-ketoglutaric acid. The Astra system measures the rate of increase in conductivity of the solution in which urea is hydrolyzed.
Even though the test is now performed mostly on serum, the term BUN is still retained by convention. The specimen should not be collected in tubes containing sodium fluoride because the fluoride inhibits urease. Also chloral hydrate and guanethidine have been observed to increase BUN values.
The 1886 Jaffé reaction, in which creatinine is treated with an alkaline picrate solution to yield a red complex, is still the basis of most commonly used methods for measuring creatinine. This reaction is nonspecific and subject to interference from many noncreatinine chromogens, including acetone, acetoacetate, pyruvate, ascorbic acid, glucose, cephalosporins, barbiturates, and protein. It is also sensitive to pH and temperature changes. One or another of the many modifications designed to nullify these sources of error is used in most clinical laboratories today. For example, the recent kinetic-rate modification, which isolates the brief time interval during which only true creatinine contributes to total color formation, is the basis of the Astra modular system.
More specific, non-Jaffé assays have also been developed. One of these, an automated dry-slide enzymatic method, measures ammonia generated when creatinine is hydrolyzed by creatinine iminohydrolase. Its simplicity, precision, and speed highly recommend it for routine use in the clinical laboratory. Only 5-fluorocytosine interferes significantly with the test.
Creatinine must be determined in plasma or serum and not whole blood because erythrocytes contain considerable amounts of noncreatinine chromogens. To minimize the conversion of creatine to creatinine, specimens must be as fresh as possible and maintained at pH 7 during storage.
The amount of urea produced varies with substrate delivery to the liver and the adequacy of liver function. It is increased by a high-protein diet, by gastrointestinal bleeding (based on plasma protein level of 7.5 g/dl and a hemoglobin of 15 g/dl, 500 ml of whole blood is equivalent to 100 g protein), by catabolic processes such as fever or infection, and by antianabolic drugs such as tetracyclines (except doxycycline) or glucocorticoids. It is decreased by low-protein diet, malnutrition or starvation, and by impaired metabolic activity in the liver due to parenchymal liver disease or, rarely, to congenital deficiency of urea cycle enzymes. The normal subject on a 70 g protein diet produces about 12 g of urea each day.
This newly synthesized urea distributes throughout total body water. Some of it is recycled through the enterohepatic circulation. Usually, a small amount (less than 0.5 g/day) is lost through the gastrointestinal tract, lungs, and skin; during exercise, a substantial fraction may be excreted in sweat. The bulk of the urea, about 10 g each day, is excreted by the kidney in a process that begins with glomerular filtration. At high urine flow rates (greater than 2 ml/min), 40% of the filtered load is reabsorbed, and at flow rates lower than 2 ml/min, reabsorption may increase to 60%. Low flow, as in urinary tract obstruction, allows more time for reabsorption and is often associated with increases in antidiuretic hormone (ADH), which increases the permeability of the terminal collecting tubule to urea. During ADH-induced antidiuresis, urea secretion contributes to the intratubular concentration of urea. The subsequent buildup of urea in the inner medulla is critical to the process of urinary concentration. Reabsorption is also increased by volume contraction, reduced renal plasma flow as in congestive heart failure, and decreased glomerular filtration.
Creatinine formation begins with the transamidination from arginine to glycine to form glycocyamine or guanidoacetic acid (GAA). This reaction occurs primarily in the kidneys, but also in the mucosa of the small intestine and the pancreas. The GAA is transported to the liver where it is methylated by S-adenosyl methionine (SAM) to form creatine. Creatine enters the circulation, and 90% of it is taken up and stored by muscle tissue. [ 2 ]
Serum ratios
The reference interval for a normal BUN:creatinine serum ratio is 12:1 to 20:1. [4]
An elevated BUN:Cr due to a low or low-normal creatinine and a BUN within the reference range is unlikely to be of clinical significance.
The ratio is predictive of prerenal injury when BUN:Cr exceeds 20 [ 5 ] or when urea:Cr exceeds 100. [ 6 ] In prerenal injury, urea increases disproportionately to creatinine due to enhanced proximal tubular reabsorption that follows the enhanced transport of sodium and water.
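Expressed as code, the two cited conventions look as follows. This is a sketch only: the function names are ours and the cut-offs are the indicative values cited above, which differ between sources.

```python
def suggests_prerenal_us(bun_mg_dl: float, cr_mg_dl: float) -> bool:
    """US convention: BUN:Cr with both values in mg/dL; >20 suggests prerenal."""
    return bun_mg_dl / cr_mg_dl > 20

def suggests_prerenal_si(urea_mmol_l: float, cr_umol_l: float) -> bool:
    """SI convention: urea (mmol/L) over creatinine (mmol/L); >100 suggests prerenal."""
    return urea_mmol_l / (cr_umol_l / 1000.0) > 100

print(suggests_prerenal_us(30.0, 1.0))   # True: 30 > 20
print(suggests_prerenal_si(10.7, 88.4))  # True: ~121 > 100
```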
The ratio is useful for the diagnosis of bleeding from the gastrointestinal (GI) tract in patients who do not present with overt vomiting of blood. [ 7 ] In children, a BUN:Cr ratio of 30 or greater has a sensitivity of 68.8% and a specificity of 98% for upper gastrointestinal bleeding. [ 8 ]
A common assumption is that the ratio is elevated because of amino acid digestion, since blood (excluding water) consists largely of the protein hemoglobin and is broken down by digestive enzymes of the upper GI tract into amino acids, which are then reabsorbed in the GI tract and broken down into urea. However, elevated BUN:Cr ratios are not observed when other high protein loads (e.g., steak) are consumed. [ citation needed ] Renal hypoperfusion secondary to the blood lost from the GI bleed has been postulated to explain the elevated BUN:Cr ratio. However, other research has found that renal hypoperfusion cannot fully explain the elevation. [ 9 ]
Because of decreased muscle mass, elderly patients may have an elevated BUN:Cr at baseline. [ 10 ]
Hypercatabolic states, high-dose glucocorticoids, and resorption of large hematomas have all been cited as causes of a disproportionate rise in BUN relative to the creatinine. [ 11 ]
|
https://en.wikipedia.org/wiki/Urea-to-creatinine_ratio
|
The urea cycle (also known as the ornithine cycle ) is a cycle of biochemical reactions that produces urea (NH 2 ) 2 CO from ammonia (NH 3 ). Animals that use this cycle, mainly amphibians and mammals, are called ureotelic .
The urea cycle converts highly toxic ammonia to urea for excretion. [1] This cycle was the first metabolic cycle to be discovered, by Hans Krebs and Kurt Henseleit in 1932, [2] [3] [4] five years before the discovery of the TCA cycle . The urea cycle was later described in more detail by Ratner and Cohen. The urea cycle takes place primarily in the liver and, to a lesser extent, in the kidneys .
Amino acid catabolism results in waste ammonia. All animals need a way to excrete this product. Most aquatic organisms , or ammonotelic organisms, excrete ammonia without converting it. [ 1 ] Organisms that cannot easily and safely remove nitrogen as ammonia convert it to a less toxic substance, such as urea , via the urea cycle, which occurs mainly in the liver. Urea produced by the liver is then released into the bloodstream , where it travels to the kidneys and is ultimately excreted in urine . The urea cycle is essential to these organisms, because if the nitrogen or ammonia is not eliminated from the organism it can be very detrimental. [ 5 ] In species including birds and most insects , the ammonia is converted into uric acid or its urate salt, which is excreted in solid form . Further, the urea cycle consumes acidic waste carbon dioxide by combining it with the basic ammonia, helping to maintain a neutral pH.
The entire process converts two amino groups, one from NH4+ and one from aspartate , and a carbon atom from HCO3−, to the relatively nontoxic excretion product urea . [6] This occurs at the cost of four "high-energy" phosphate bonds (3 ATP hydrolyzed to 2 ADP and one AMP ). The conversion from ammonia to urea happens in five main steps. The first is needed for ammonia to enter the cycle, and the following four are all part of the cycle itself. To enter the cycle, ammonia is converted to carbamoyl phosphate . The urea cycle consists of four enzymatic reactions: one mitochondrial and three cytosolic . [1] [7] Six enzymes are involved in total. [6] [7] [8]
Diagram legend: (1) L-ornithine; (2) carbamoyl phosphate; (3) L-citrulline; (4) argininosuccinate; (5) fumarate; (6) L-arginine; (7) urea; L-Asp = L-aspartate; CPS-1 = carbamoyl phosphate synthetase I; OTC = ornithine transcarbamoylase; ASS = argininosuccinate synthetase; ASL = argininosuccinate lyase; ARG1 = arginase 1.
Before the urea cycle begins, ammonia is converted to carbamoyl phosphate. The reaction is catalyzed by carbamoyl phosphate synthetase I and requires two ATP molecules [1] (NH3 + HCO3− + 2 ATP → carbamoyl phosphate + 2 ADP + Pi). The carbamoyl phosphate then enters the urea cycle.
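The remaining reactions of the cycle can be summarized as follows (a standard textbook summary, with cofactor detail simplified; the carbamoyl phosphate synthetase step above is counted as step 1):

2. ornithine + carbamoyl phosphate → citrulline + Pi (OTC, mitochondrial)
3. citrulline + aspartate + ATP → argininosuccinate + AMP + PPi (ASS, cytosolic)
4. argininosuccinate → arginine + fumarate (ASL, cytosolic)
5. arginine + H2O → ornithine + urea (ARG1, cytosolic)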
In the first reaction (the carbamoyl phosphate synthetase step), NH4+ + HCO3− is equivalent to NH3 + CO2 + H2O .
Thus, the overall equation of the urea cycle is:

NH3 + CO2 + aspartate + 3 ATP + 2 H2O → urea + fumarate + 2 ADP + 2 Pi + AMP + PPi
Since fumarate is obtained by removing NH3 from aspartate (by means of reactions 3 and 4), and PPi + H2O → 2 Pi, the equation can be simplified as follows:

2 NH3 + CO2 + 3 ATP + 3 H2O → urea + 2 ADP + 4 Pi + AMP
Note that reactions related to the urea cycle also cause the production of 2 NADH , so the overall reaction releases slightly more energy than it consumes. The NADH is produced in two ways: the fumarate released into the cytosol is hydrated to malate, whose oxidation to oxaloacetate by malate dehydrogenase yields one NADH; and the ammonium that feeds the cycle is supplied largely by the oxidative deamination of glutamate by glutamate dehydrogenase, which yields a second NADH.
We can summarize this by combining the reactions:
The two NADH produced can provide energy for the formation of 5 ATP (cytosolic NADH provides 2.5 ATP with the malate-aspartate shuttle in human liver cells), against the four high-energy phosphate bonds consumed, giving a net production of one high-energy phosphate bond for the urea cycle. However, if gluconeogenesis is underway in the cytosol, the latter reducing equivalent is used to drive the reversal of the GAPDH step instead of generating ATP.
The fate of oxaloacetate is either to produce aspartate via transamination or to be converted to phosphoenolpyruvate , which is a substrate for gluconeogenesis .
As stated above, many vertebrates use the urea cycle to create urea out of ammonium so that the ammonium does not damage the body. Though this is helpful, the urea cycle has other effects as well: for example, the consumption of two ATP, the production of urea, the generation of H+, the incorporation of HCO3− and NH4+ into forms from which they can be regenerated, and finally the consumption of NH4+. [9]
The synthesis of carbamoyl phosphate and the urea cycle are dependent on the presence of N -acetylglutamic acid (NAcGlu), which allosterically activates CPS1 . NAcGlu is an obligate activator of carbamoyl phosphate synthetase. [10] Synthesis of NAcGlu by N -acetylglutamate synthase (NAGS) is stimulated both by Arg, an allosteric stimulator of NAGS, and by Glu, a product of the transamination reactions and one of NAGS's substrates, both of which are elevated when free amino acids are elevated. Thus Glu is not only a substrate for NAGS but also an activator of the urea cycle.
The remaining enzymes of the cycle are controlled by the concentrations of their substrates. Thus, inherited deficiencies in cycle enzymes other than ARG1 do not result in significant decreases in urea production (if any cycle enzyme is entirely missing, death occurs shortly after birth). Rather, the deficient enzyme's substrate builds up, increasing the rate of the deficient reaction to normal.
The anomalous substrate buildup is not without cost, however. The substrate concentrations become elevated all the way back up the cycle to NH4+, resulting in hyperammonemia (elevated plasma [NH4+]).
Although the root cause of NH4+ toxicity is not completely understood, a high [NH4+] puts an enormous strain on the NH4+-clearing system, especially in the brain (symptoms of urea cycle enzyme deficiencies include intellectual disability and lethargy ). This clearing system involves GLUD1 and GLUL , which decrease the 2-oxoglutarate (2OG) and Glu pools. The brain is most sensitive to the depletion of these pools. Depletion of 2OG decreases the rate of the TCA cycle, whereas Glu is both a neurotransmitter and a precursor to GABA , another neurotransmitter. [11]
The urea cycle and the citric acid cycle are independent cycles but are linked. One of the nitrogen atoms in the urea cycle is obtained from the transamination of oxaloacetate to aspartate. [ 12 ] The fumarate that is produced in step three is also an intermediate in the citric acid cycle and is returned to that cycle. [ 12 ]
Urea cycle disorders are rare and affect about one in 35,000 people in the United States . [13] Genetic defects in the enzymes involved in the cycle can occur, and usually manifest within a few days after birth. [5] The newborn will typically experience varying bouts of vomiting and periods of lethargy , [5] and may ultimately go into a coma and develop brain damage . [5] Newborns with a urea cycle disorder (UCD) are at much higher risk of complications or death because of delayed screening tests and misdiagnosis; the most common misdiagnosis is neonatal sepsis . Signs of UCD can be present within the first 2 to 3 days of life, but the present method of confirmation by test results can take too long, [14] potentially leading to complications such as coma or death. [14]
Urea cycle disorders may also be diagnosed in adults, and symptoms may include delirium episodes, lethargy , and symptoms similar to those of a stroke . [15] On top of these symptoms, if the urea cycle begins to malfunction in the liver , the patient may develop cirrhosis . [16] This can also lead to sarcopenia (the loss of muscle mass). [16] Mutations lead to deficiencies of the various enzymes and transporters involved in the urea cycle, and cause urea cycle disorders. [1] If individuals with a defect in any of the six enzymes used in the cycle ingest amino acids beyond the minimum daily requirement, the ammonia that is produced cannot all be converted to urea. These individuals can experience hyperammonemia , or the build-up of a cycle intermediate.
All urea cycle defects, except OTC deficiency, are inherited in an autosomal recessive manner. OTC deficiency is inherited as an X-linked recessive disorder, although some females can show symptoms. Most urea cycle disorders are associated with hyperammonemia , however argininemia and some forms of argininosuccinic aciduria do not present with elevated ammonia.
|
https://en.wikipedia.org/wiki/Urea_cycle
|
Urea extraction crystallization is a process for separating linear paraffins (n-paraffins, n-alkanes) from hydrocarbon mixtures through the formation of urea -n-paraffin- clathrates . The process is primarily used to lower the pour point of petroleum products; by-products of the process are n-paraffins of high purity. The method may also be applied to the separation of fatty acids and fatty alcohols . Thiourea can be used in the process in addition to urea.
In 1939, the German chemist Friedrich Bengen was trying different extractants to separate serum proteins from milk at low temperature. When he tried urea, he noticed unusual behavior of the milk lipids. A treatment with octanol serendipitously revealed that octanol combines with urea into large crystals. Bengen investigated various lipids, alkanes, and alcohols and found that at least six carbon atoms are required, and that branched hydrocarbons do not participate in the phenomenon. [2]
Not being an expert in hydrocarbons and urea, he cooperated with Matthias Pier [ de ] of BASF / IG Farben and then with Wilhelm Schlenk , filing patents [3] [4] [5] with the latter in 1940, which were awarded in 1953. They did not publish their findings until 1949 [6] because the German authorities classified the discovery during World War II , [7] but the patent applications were confiscated by the Allies' Technical Oil Mission after the war, [8] so Sonneborn was able to put a pilot oil-dewaxing plant in Petrolia, Pennsylvania into operation as early as 1950. [7] [9] DEA AG followed suit in 1954 and Standard Oil in 1956, [7] and worldwide research on the topic took off in the 1950s. [10]
In addition to n-alkanes, unbranched fatty acids with more than four carbon atoms, their esters, and unbranched fatty alcohols can also migrate into the channels of the crystallized urea and form a clathrate .
A deviation from the linear molecular geometry, for example by C=C double bonds in the molecule, leads to a less stable inclusion compound. Thus stearic acid (C18:0) forms more stable urea adducts than oleic acid (C18:1 cis-9) or linoleic acid (C18:2 cis-9,cis-12). Branching in the fatty acid molecule or autoxidation results in a large deviation from the straight-chain molecular structure, so that these compounds do not form urea adducts. This is exploited in fatty acid analysis and for the separation or enrichment of specific fatty acids. [11]
For the separation of n-paraffins from other hydrocarbon compounds, urea is added in an approximately 20-fold molar excess. The urea crystallizes in a hexagonal crystal structure with channels about 5.5 to 5.8 Å wide, in which the n-paraffins are included. If the concentration of n-paraffins in the mixture is too high, the mixture is diluted with a solvent.
In general, the reaction proceeds according to the scheme:

CnH2n+2 + m (NH2)2CO ⇌ CnH2n+2 · m (NH2)2CO (clathrate)

where m is of the order of the number of chain carbon atoms (see below).
The equilibrium of the reaction depends on the concentrations of the reactants, the solvent, and the temperature. [12] The quantity of urea necessary for the formation of inclusion compounds varies from about 0.8 to 1 mole of urea per methyl and methylene group in the carbon chain. [12] The urea is added as a supersaturated aqueous solution to compensate for losses due to adduct formation during the process. To avoid too high a concentration of adducts in the dewaxed oil, a solvent such as methyl isobutyl ketone or methylene chloride is used for dilution. The ratio of oil to water phase is about 1 to 0.5. The mixing of the oil and water phases occurs at slightly elevated temperatures of about 35 °C, and in the course of the reaction the mixture is cooled to room temperature. Lower temperatures are advantageous for the formation of inclusion complexes. [12]
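As a back-of-the-envelope illustration of these quantities, the Python sketch below uses the figures quoted above (0.8–1.0 mol of urea per CH3/CH2 group, and a roughly 20-fold molar excess of urea). The numbers are illustrative, not plant design values.

```python
UREA_MW = 60.06  # g/mol

def urea_bound_g(n_carbons: int, paraffin_mol: float,
                 mol_urea_per_group: float = 1.0) -> float:
    """Urea mass bound in the clathrate for an n-alkane CnH2n+2.

    An n-alkane chain has n CH3/CH2 groups, each binding roughly
    0.8-1.0 mol of urea according to the figures quoted above.
    """
    return n_carbons * mol_urea_per_group * paraffin_mol * UREA_MW

bound = urea_bound_g(16, 1.0)  # n-hexadecane: ~961 g urea bound per mol
print(bound, 20 * bound)       # with a ~20-fold excess: ~19 kg circulating
```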
The urea-paraffin adduct can be filtered off and thereby separated from the iso-paraffins and other non-paraffinic components. Washing with a solvent yields a solid adduct residue, and washing the clathrates with hot water at about 75 °C breaks them up and releases the paraffins. The n-paraffins obtained have a purity of about 99%. Losses of urea are small, and the hot urea solution can be returned directly to the process.
|
https://en.wikipedia.org/wiki/Urea_extraction_crystallization
|
Urea nitrate is a fertilizer-based high explosive that has been used in improvised explosive devices in Afghanistan , Pakistan , Iraq , and various terrorist acts elsewhere in the world, such as the 1993 World Trade Center bombing . [2] It has a destructive power similar to better-known ammonium nitrate explosives, with a velocity of detonation between 3,400 m/s (11,155 ft/s) and 4,700 m/s (15,420 ft/s). [3] Its chemical formula is CH5N3O4, i.e. (NH2)2CO·HNO3.
Urea nitrate is produced in one step by the reaction of urea with nitric acid :

(NH2)2CO + HNO3 → (NH2)2CO·HNO3

This is an exothermic reaction , so steps must be taken to control the temperature.
It was discovered in 1797 by William Cruickshank , [ 4 ] inventor of the Chloralkali process .
Urea nitrate explosions may be initiated using a blasting cap . [ 3 ]
Urea contains a carbonyl group . The more electronegative oxygen atom pulls electrons away from the carbon atom, forming a polar bond with greater electron density around the oxygen atom, giving it a partial negative charge. In a simplistic sense, nitric acid dissociates in aqueous solution into protons (hydrogen cations) and nitrate anions. The electrophilic proton contributed by the acid is attracted to the negatively charged oxygen atom on the urea molecule and the two form a covalent bond. The formed O-H bond is stabilized into a hydroxyl group when the oxygen abstracts an electron pair away from the central carbon atom, which leads to bond resonance between it and the two amino groups. As such, the urea cation can be thought of as an amidinium species. Paired with the spectator nitrate counteranion, it forms urea nitrate.
The compound is favored by many amateur explosive enthusiasts as a principal explosive for use in larger charges. In this role it acts as a substitute for ammonium nitrate based explosives. This is due to the ease of acquiring the materials necessary to synthesize it, and its greater sensitivity to initiation compared to ammonium nitrate based explosives.
|
https://en.wikipedia.org/wiki/Urea_nitrate
|
Urea perchlorate forms sheet-shaped crystals with good chemical stability and strong hygroscopicity. It is used as an oxidizer in liquid explosives, [1] including in underwater blasting . [2]
The compound is synthesized by gradual addition of urea to a perchloric acid solution:

(NH2)2CO + HClO4 → (NH2)2CO·HClO4
An alternative route is the addition of urea to a hydrochloric acid solution, followed by addition of sodium perchlorate and filtration to remove the by-product salt.
|
https://en.wikipedia.org/wiki/Urea_perchlorate
|
The Urech hydantoin synthesis is the chemical reaction of amino acids with potassium cyanate and hydrochloric acid to give hydantoins . [ 1 ] [ 2 ]
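In outline, the commonly described sequence is carbamoylation followed by acid-promoted cyclization (a schematic sketch, with R denoting the amino acid side chain):

R–CH(NH2)–COOH + KOCN → R–CH(NH–CO–NH2)–COOH (an N-carbamoyl amino acid, after acid work-up)

On heating with HCl, the N-carbamoyl amino acid cyclizes to the 5-substituted hydantoin with loss of H2O.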
|
https://en.wikipedia.org/wiki/Urech_hydantoin_synthesis
|
Urediniospores (or uredospores ) are thin-walled spores produced by the uredium , a stage in the life-cycle of rusts .
Urediniospores develop in the uredium , generally on a leaf's under surface.
|
https://en.wikipedia.org/wiki/Urediniospore
|
A urethrovaginal fistula is an abnormal passageway that may occur between the urethra and the vagina. [1] It is a subset of vaginal fistulas. [2] [3] It results in urinary incontinence as urine continually leaves the vagina. It can occur as an obstetrical complication , a catheter insertion injury, or a surgical injury . [4] [5]
It is also called a urethral fistula and may be referred to as UVF. [ 3 ] [ 6 ] They are quite rare. In the developed world, they are typically due to injuries due to medical activity. [ 7 ]
|
https://en.wikipedia.org/wiki/Urethrovaginal_fistula
|
In stable isotope geochemistry , the Urey–Bigeleisen–Mayer equation , also known as the Bigeleisen–Mayer equation or the Urey model , [ 1 ] is a model describing the approximate equilibrium isotope fractionation in an isotope exchange reaction. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] While the equation itself can be written in numerous forms, it is generally presented as a ratio of partition functions of the isotopic molecules involved in a given reaction. [ 7 ] [ 8 ] The Urey–Bigeleisen–Mayer equation is widely applied in the fields of quantum chemistry and geochemistry and is often modified or paired with other quantum chemical modelling methods (such as density functional theory ) to improve accuracy and precision and reduce the computational cost of calculations. [ 1 ] [ 6 ] [ 9 ]
The equation was first introduced by Harold Urey and, independently, by Jacob Bigeleisen and Maria Goeppert Mayer in 1947. [ 2 ] [ 7 ] [ 8 ]
Since its original descriptions, the Urey–Bigeleisen–Mayer equation has taken many forms. Given an isotopic exchange reaction A + B ∗ = A ∗ + B {\displaystyle A+B^{*}=A^{*}+B} , such that ∗ {\displaystyle ^{*}} designates a molecule containing an isotope of interest, the equation can be expressed by relating the equilibrium constant , K e q {\displaystyle K_{eq}} , to the product of partition function ratios, namely the translational , rotational , vibrational , and sometimes electronic partition functions. [ 10 ] [ 11 ] [ 12 ] Thus the equation can be written as: K e q = [ A ∗ ] [ B ] [ A ] [ B ∗ ] {\displaystyle K_{eq}={\frac {[A^{*}][B]}{[A][B^{*}]}}} where [ A ] = ∏ n Q n , A {\displaystyle [A]=\prod ^{n}Q_{n,A}} and Q n {\displaystyle Q_{n}} is each respective partition function of molecule or atom A {\displaystyle A} . [ 12 ] [ 13 ] It is typical to approximate the rotational partition function ratio as quantized rotational energies in a rigid rotor system. [ 11 ] [ 14 ] The Urey model also treats molecular vibrations as simplified harmonic oscillators and follows the Born–Oppenheimer approximation . [ 11 ] [ 14 ] [ 15 ]
Isotope partitioning behavior is often reported as a reduced partition function ratio , a simplified form of the Bigeleisen–Mayer equation notated mathematically as s s ′ f {\displaystyle {\frac {s}{s'}}f} or ( Q ∗ Q ) r {\displaystyle ({\frac {Q^{*}}{Q}})_{r}} . [ 16 ] [ 17 ] The reduced partition function ratio can be derived from power series expansion of the function and allows the partition functions to be expressed in terms of frequency. [ 16 ] [ 18 ] [ 19 ] It can be used to relate molecular vibrations and intermolecular forces to equilibrium isotope effects. [ 20 ]
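Numerically, the reduced partition function ratio under the harmonic approximation can be evaluated directly from vibrational frequencies. The Python sketch below assumes hypothetical frequencies for the light and heavy isotopologues; real applications take them from spectroscopy or electronic-structure calculations, and the function names are ours.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K
C = 2.99792458e10    # speed of light, cm/s

def u(freq_cm: float, temp_k: float) -> float:
    """Dimensionless vibrational energy h*c*omega / (kB*T)."""
    return H * C * freq_cm / (KB * temp_k)

def rpfr(light_freqs, heavy_freqs, temp_k=298.15) -> float:
    """Harmonic reduced partition function ratio, heavy over light.

    Each mode contributes (u*/u) * e^(-u*/2)/e^(-u/2) * (1-e^-u)/(1-e^-u*),
    where * marks the heavy isotopologue.
    """
    f = 1.0
    for wl, wh in zip(light_freqs, heavy_freqs):
        ul, uh = u(wl, temp_k), u(wh, temp_k)
        f *= (uh / ul) * math.exp((ul - uh) / 2) \
             * (1 - math.exp(-ul)) / (1 - math.exp(-uh))
    return f

# Hypothetical single-mode example (frequencies in cm^-1):
print(rpfr([2100.0], [2050.0]))  # ~1.1 at 298 K
```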
As the model is an approximation, many applications append corrections for improved accuracy. [ 15 ] Some common, significant modifications to the equation include accounting for pressure effects, [ 21 ] nuclear geometry, [ 22 ] and corrections for anharmonicity and quantum mechanical effects. [ 1 ] [ 2 ] [ 23 ] [ 24 ] For example, hydrogen isotope exchange reactions have been shown to disagree with the requisite assumptions for the model but correction techniques using path integral methods have been suggested. [ 1 ] [ 8 ] [ 25 ]
One aim of the Manhattan Project was increasing the availability of concentrated radioactive and stable isotopes, in particular 14 C , 35 S , 32 P , and deuterium for heavy water . [ 26 ] Harold Urey , Nobel laureate physical chemist known for his discovery of deuterium, [ 27 ] became its head of isotope separation research while a professor at Columbia University . [ 28 ] [ 29 ] : 45 In 1945, he joined The Institute for Nuclear Studies at the University of Chicago, where he continued to work with chemist Jacob Bigeleisen and physicist Maria Mayer , both also veterans of isotopic research in the Manhattan Project. [ 11 ] [ 28 ] [ 30 ] [ 31 ] In 1946, Urey delivered the Liversidge lecture at the then- Royal Institute of Chemistry , where he outlined his proposed model of stable isotope fractionation. [ 2 ] [ 7 ] [ 11 ] Bigeleisen and Mayer had been working on similar work since at least 1944 and, in 1947, published their model independently from Urey. [ 2 ] [ 8 ] [ 11 ] Their calculations were mathematically equivalent to a 1943 derivation of the reduced partition function by German physicist Ludwig Waldmann . [ 8 ] [ 11 ] [ a ]
Initially used to approximate chemical reaction rates , [ 7 ] [ 8 ] models of isotope fractionation are used throughout the physical sciences . In chemistry , the Urey–Bigeleisen–Mayer equation has been used to predict equilibrium isotope effects and interpret the distributions of isotopes and isotopologues within systems, especially as deviations from their natural abundance . [ 35 ] [ 36 ] The model is also used to explain isotopic shifts in spectroscopy , such as those from nuclear field effects or mass independent effects . [ 1 ] [ 22 ] [ 35 ] In biochemistry, it is used to model enzymatic kinetic isotope effects . [ 37 ] [ 38 ] Simulation testing in computational systems biology often uses the Bigeleisen–Mayer model as a baseline in the development of more complex models of biological systems . [ 39 ] [ 40 ] Isotope fractionation modeling is a critical component of isotope geochemistry and can be used to reconstruct past Earth environments as well as examine surface processes . [ 41 ] [ 42 ] [ 43 ] [ 44 ]
After this paper had been completed, Professor W.F. Libby kindly called a paper by L. Waldmann [32] to our attention. In this paper, Waldmann discusses briefly the fact that the chemical separation of isotopes is a quantum effect. He gives formulae which are equivalent to our (11') and (11a) and discusses qualitatively their application to two acid-base exchange equilibria. These are the exchanges between NH 3 and NH 4 + and between HCN and CN – , studied by Urey [33] [34] and co-workers.
|
https://en.wikipedia.org/wiki/Urey–Bigeleisen–Mayer_equation
|