that using it, shifts are linear transformations:
$$f(t+\Delta t) = \mathrm{diag}(f(\Delta t))\, f(t)$$
where $\Delta t \in \mathbb{R}$ is the distance one wishes to shift. This allows the transformer to take any encoded position and find the encoding of the position n steps ahead or n steps behind, by a matrix multiplication. By taking a linear sum, any convolution can also be implemented as a linear transformation:
$$\sum_j c_j f(t+\Delta t_j) = \left(\sum_j c_j\, \mathrm{diag}(f(\Delta t_j))\right) f(t)$$
for any constants $c_j$. This allows the transformer to take any encoded position and find a linear sum of the encoded locations of its neighbors. This sum of encoded positions, when fed into the attention mechanism, would create attention weights on its neighbors, much like what happens in a convolutional neural network language model. In the authors' words, "we hypothesized it would allow the model to easily learn to attend by relative position."

In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
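A minimal NumPy sketch of this shift property, using the complex-number view described above (each pair of sinusoidal coordinates is one complex rotation). The function name, dimension, and base are illustrative, not part of the original:

import numpy as np

def sinusoidal_encoding(t, d_model=8, base=10000.0):
    # Complex view of the sinusoidal encoding: exp(i * t * theta_k) per frequency k
    k = np.arange(d_model // 2)
    theta = base ** (-2 * k / d_model)      # per-pair angular frequencies
    return np.exp(1j * t * theta)           # shape (d_model // 2,)

t, dt = 5.0, 3.0
f_t      = sinusoidal_encoding(t)
f_dt     = sinusoidal_encoding(dt)
f_t_plus = sinusoidal_encoding(t + dt)

# The shift is a linear (here diagonal) transformation: f(t + dt) = diag(f(dt)) f(t)
assert np.allclose(f_t_plus, f_dt * f_t)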
=== Encoder-decoder (overview) ===

Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far. The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via the self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of the encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time). Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.

=== Feedforward network ===

The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons:
$$\mathrm{FFN}(x) = \phi(xW^{(1)} + b^{(1)})W^{(2)} + b^{(2)}$$
where $W^{(1)}$ and $W^{(2)}$ are weight matrices, $b^{(1)}$ and $b^{(2)}$ are bias vectors, and $\phi$ is the activation function. The original Transformer used ReLU activation. The number of neurons in the middle layer is called intermediate size (GPT), filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size; for example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: $d_{\text{ffn}} = 4d_{\text{emb}}$.
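A minimal NumPy sketch of this FFN module, using ReLU as in the original Transformer. The sizes and random initialization below are illustrative only:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def ffn(x, W1, b1, W2, b2):
    # Two-layer MLP applied independently to each row (token) of x
    return relu(x @ W1 + b1) @ W2 + b2

# Illustrative GPT-2-small-like sizes: d_ffn = 4 * d_emb
d_emb, d_ffn, n_tokens = 768, 3072, 4
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_emb, d_ffn)) * 0.02, np.zeros(d_ffn)
W2, b2 = rng.normal(size=(d_ffn, d_emb)) * 0.02, np.zeros(d_emb)
x = rng.normal(size=(n_tokens, d_emb))
print(ffn(x, W1, b1, W2, b2).shape)   # (4, 768)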
=== Scaled dot-product attention ===

==== Attention head ====

The attention mechanism used in the Transformer architecture is the scaled dot-product attention unit. For each unit, the transformer model learns three weight matrices: the query weights $W^Q$, the key weights $W^K$, and the value weights $W^V$. The module takes three sequences: a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length $\ell_{\text{seq, query}}$, and each entry is a vector of dimension $d_{\text{emb, query}}$; similarly for the key and value sequences.

Each vector $x_{i,\text{query}}$ in the query sequence is multiplied by the matrix $W^Q$ to produce a query vector $q_i = x_{i,\text{query}} W^Q$. The matrix of all query vectors is the query matrix:
$$Q = X_{\text{query}} W^Q$$
Similarly, we construct the key matrix $K = X_{\text{key}} W^K$ and the value matrix $V = X_{\text{value}} W^V$. It is usually the case that all of $W^Q, W^K, W^V$ are square matrices, meaning $d_{\text{emb, query}} = d_{\text{query}}$, etc.

Attention weights are calculated using the query and key vectors: the attention weight $a_{ij}$ from token $i$ to token $j$ is the dot product between $q_i$ and $k_j$. The attention weights are divided by the square root of the dimension of the key vectors, $\sqrt{d_k}$, which stabilizes gradients during training, and passed through a softmax which normalizes the weights.
The fact that $W^Q$ and $W^K$ are different matrices allows attention to be non-symmetric: if token $i$ attends to token $j$ (i.e. $q_i \cdot k_j$ is large), this does not necessarily mean that token $j$ will attend to token $i$ (i.e. $q_j \cdot k_i$ could be small). The output of the attention unit for token $i$ is the weighted sum of the value vectors of all tokens, weighted by $a_{ij}$, the attention from token $i$ to each token.

The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because highly optimized matrix-multiplication routines compute it quickly. The matrices $Q$, $K$ and $V$ are defined as the matrices whose $i$-th rows are the vectors $q_i$, $k_i$, and $v_i$ respectively. Then the attention can be written as
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V$$
where the softmax is applied over each of the rows of the matrix.

The number of dimensions in a query vector is the query size $d_{\text{query}}$, and similarly for the key size $d_{\text{key}}$ and value size $d_{\text{value}}$. The output dimension of an attention head is its head dimension $d_{\text{head}}$. The attention mechanism requires the following three equalities to hold:
$$\ell_{\text{seq, key}} = \ell_{\text{seq, value}}, \quad d_{\text{query}} = d_{\text{key}}, \quad d_{\text{value}} = d_{\text{head}}$$
but is otherwise unconstrained.
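A minimal NumPy sketch of a single attention head as defined above; the dimensions are illustrative, and the function names are mine:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention; rows of Q, K, V are per-token vectors
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # (len_query, len_key)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V                        # (len_query, d_value)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, embedding size 16
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)       # self-attention: all three from X
print(out.shape)                              # (5, 16)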
If the attention head is used in a self-attention fashion, then $X_{\text{query}} = X_{\text{key}} = X_{\text{value}}$. If the attention head is used in a cross-attention fashion, then usually $X_{\text{query}} \neq X_{\text{key}} = X_{\text{value}}$. It is theoretically possible for all three to be different, but that is rarely the case in practice.

==== Multiheaded attention ====

One set of $(W^Q, W^K, W^V)$ matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, $W^Q$ and $W^K$, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix $W^V$, in combination with its part of the output projection matrix $W^O$, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers.

Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects. The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs for the attention layer are concatenated to pass into the feed-forward neural network layers.
Concretely, let the multiple attention heads be indexed by $i$; then we have
$$\text{MultiheadedAttention}(Q, K, V) = \text{Concat}_{i \in [n_{\text{heads}}]}\left(\text{Attention}(QW_i^Q, KW_i^K, VW_i^V)\right) W^O$$
where the matrix $X$ is the concatenation of word embeddings, the matrices $W_i^Q, W_i^K, W_i^V$ are "projection matrices" owned by the individual attention head $i$, and $W^O$ is a final projection matrix owned by the whole multi-headed attention module.

It is theoretically possible for each attention head to have a different head dimension $d_{\text{head}}$, but that is rarely the case in practice. As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions:
$$d_{\text{emb}} = 768, \quad n_{\text{head}} = 12, \quad d_{\text{head}} = 64$$
Since $12 \times 64 = 768$, its output projection matrix $W^O \in \mathbb{R}^{(12 \times 64) \times 768}$ is a square matrix.
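Continuing the NumPy sketches above (this reuses the attention function from the previous block), a minimal multiheaded attention with illustrative GPT-2-small-like sizes:

import numpy as np

def multiheaded_attention(Xq, Xk, Xv, Wq, Wk, Wv, Wo):
    # Wq, Wk, Wv have shape (n_heads, d_emb, d_head); Wo has shape (n_heads*d_head, d_emb)
    heads = [attention(Xq @ Wq[i], Xk @ Wk[i], Xv @ Wv[i])
             for i in range(Wq.shape[0])]
    return np.concatenate(heads, axis=-1) @ Wo   # concatenate heads, then project

n_heads, d_emb, d_head = 12, 768, 64
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(n_heads, d_emb, d_head)) * 0.02 for _ in range(3))
Wo = rng.normal(size=(n_heads * d_head, d_emb)) * 0.02   # square, since 12 * 64 = 768
X = rng.normal(size=(5, d_emb))
print(multiheaded_attention(X, X, X, Wq, Wk, Wv, Wo).shape)   # (5, 768)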
==== Masked attention ====

The Transformer architecture is constructed to calculate output tokens iteratively. Assuming $t = 0$ refers to the calculation of the first output token $i = 0$, then for steps $t > 0$ the output token $i = 0$ shall remain constant. This ensures properties of the model similar to autoregressive models. Therefore, at every time step $t$, the calculation for all outputs $i$ should not have access to tokens at position $j$ for $j \geq i$ (as is naturally the case for time step $t = i$, when tokens $j > t$ are not yet calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix $M$ that is $-\infty$ at entries where the attention link must be cut, and $0$ at other places:
$$\text{MaskedAttention}(Q, K, V) = \text{softmax}\left(M + \frac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V$$
The following matrix is commonly used in decoder self-attention modules, called "causal masking":
$$M_{\text{causal}} = \begin{bmatrix} 0 & -\infty & -\infty & \dots & -\infty \\ 0 & 0 & -\infty & \dots & -\infty \\ 0 & 0 & 0 & \dots & -\infty \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 0 \end{bmatrix}$$
In words, it means that each token can pay attention to itself, and to every token before it, but not to any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of mask matrices, XLNet considers all masks of the form $P M_{\text{causal}} P^{-1}$, where $P$ is a random permutation matrix.
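A short sketch of the causal mask and the masked attention variant, continuing the NumPy blocks above (softmax is reused from the attention sketch):

import numpy as np

def causal_mask(n):
    # n x n mask: 0 on and below the diagonal, -inf strictly above it
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask

def masked_attention(Q, K, V, M):
    # Scaled dot-product attention with an additive mask applied pre-softmax
    d_k = K.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k) + M, axis=-1) @ V

print(causal_mask(3))
# [[  0. -inf -inf]
#  [  0.   0. -inf]
#  [  0.   0.   0.]]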
=== Encoder ===

An encoder consists of an embedding layer, followed by multiple encoder layers. Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have:
$$\begin{aligned} \text{given input vectors } & h_0, h_1, \dots \\ \text{combine them into a matrix } H &= \begin{bmatrix} h_0 \\ h_1 \\ \vdots \end{bmatrix} \\ \text{EncoderLayer}(H) &= \begin{bmatrix} \text{FFN}(\text{MultiheadedAttention}(H, H, H)_0) \\ \text{FFN}(\text{MultiheadedAttention}(H, H, H)_1) \\ \vdots \end{bmatrix} \end{aligned}$$
where $\text{FFN}$ stands for "feed-forward network". We can more succinctly write it as
$$\text{EncoderLayer}(H) = \text{FFN}(\text{MultiheadedAttention}(H, H, H))$$
with the implicit convention that $\text{FFN}$ is applied to each row of the matrix individually.

The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder. As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
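Assembling the pieces sketched above into one encoder layer. Residual connections and layer normalization are omitted here, since the article introduces them later; multiheaded_attention and ffn are reused from the earlier blocks:

def encoder_layer(H, attn_params, ffn_params):
    # EncoderLayer(H) = FFN(MultiheadedAttention(H, H, H)), FFN applied row-wise
    Wq, Wk, Wv, Wo = attn_params
    W1, b1, W2, b2 = ffn_params
    A = multiheaded_attention(H, H, H, Wq, Wk, Wv, Wo)   # self-attention: Q = K = V = H
    return ffn(A, W1, b1, W2, b2)                        # applied to each row individually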
=== Decoder ===

A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer. Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.

Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked. In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which are computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism. Schematically, we have:
$$\begin{aligned} H' &= \text{MaskedMultiheadedAttention}(H, H, H) \\ \text{DecoderLayer}(H) &= \text{FFN}(\text{MultiheadedAttention}(H', H^E, H^E)) \end{aligned}$$
where $H^E$ is the matrix with rows being the output vectors from the encoder.

The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to the probability, and the decoder can be run again to produce the next token, etc., autoregressively generating output text.
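A sketch of one decoder layer in the same running NumPy style, again omitting residuals and layer norms; masked_attention, causal_mask, multiheaded_attention, and ffn are reused from the earlier blocks, and the parameter tuples are my own convention:

import numpy as np

def masked_multiheaded_attention(Xq, Xk, Xv, Wq, Wk, Wv, Wo, M):
    # Multiheaded attention with an additive mask M applied inside each head
    heads = [masked_attention(Xq @ Wq[i], Xk @ Wk[i], Xv @ Wv[i], M)
             for i in range(Wq.shape[0])]
    return np.concatenate(heads, axis=-1) @ Wo

def decoder_layer(H, H_enc, self_params, cross_params, ffn_params):
    # H' = MaskedMHA(H, H, H); output = FFN(MHA(H', H_enc, H_enc)), as in the schematic
    M = causal_mask(H.shape[0])
    Hp = masked_multiheaded_attention(H, H, H, *self_params, M)
    A = multiheaded_attention(Hp, H_enc, H_enc, *cross_params)   # cross-attention, unmasked
    return ffn(A, *ffn_params)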
=== Adapted architectures ===

Many large language models, since they do not need to predict a whole new sequence from an input sequence, only use the encoder or decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence. BERT, another language model, only makes use of an encoder, and is trained to predict a randomly masked token in a sequence.

== Full transformer architecture ==

=== Sublayers ===

Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are necessary for numerical stability and convergence.

The residual connection, which is introduced to avoid vanishing gradient issues and stabilize the training process, can be expressed as $y = F(x) + x$: an output $y$ is the sum of the transformation $F(x)$ of input $x$ and the input itself. Adding the input $x$ preserves the input information and avoids issues when the gradient of $F(x)$ is close to zero. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.

There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is
$$\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$$
where $\mathrm{Sublayer}(x)$ is the function implemented by the sublayer itself. In the pre-LN convention, the output of each sublayer is
$$x + \mathrm{Sublayer}(\mathrm{LayerNorm}(x))$$
The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a learning-rate "warm-up", where the learning rate starts small and is gradually increased. The pre-LN convention, proposed several times in 2018, was found to be easier to train, requiring no warm-up and leading to faster convergence.
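A minimal sketch of the two conventions, continuing the NumPy blocks above; the learnable gain and bias of LayerNorm are omitted for brevity:

import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each vector (row) to zero mean and unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_sublayer(x, sublayer):
    return layer_norm(x + sublayer(x))        # original 2017 convention

def pre_ln_sublayer(x, sublayer):
    return x + sublayer(layer_norm(x))        # easier-to-train convention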
=== Pseudocode ===

The following is pseudocode for a standard pre-LN encoder-decoder Transformer:

input: Encoder input t_e
       Decoder input t_d
output: Array of probability distributions, with shape
        (decoder vocabulary size x length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]
    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]
    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)

output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
=== Terminology ===

The Transformer architecture, being modular, allows variations. Several common variations are described here.

An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. Encoder-only models are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer and then taking just the encoder.

A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention, and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.

An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. Such models might have minor architectural improvements, such as alternative activation functions or a changed location of normalization. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.

A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the form
$$M_{\text{prefixLM}} = \begin{bmatrix} \mathbf{0} & -\infty \\ \mathbf{0} & M_{\text{causal}} \end{bmatrix}$$
where the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. PrefixLM models resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.

There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model, on the argument that an RNN decoder runs much faster than a Transformer decoder when run autoregressively.
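The prefixLM mask above can be assembled from the causal mask sketched earlier; a minimal sketch, with illustrative block sizes:

import numpy as np

def prefix_lm_mask(n_prefix, n_gen):
    # Full attention within and into the prefix; causal attention over the suffix
    n = n_prefix + n_gen
    mask = np.zeros((n, n))
    mask[:n_prefix, n_prefix:] = -np.inf              # prefix cannot see the suffix
    mask[n_prefix:, n_prefix:] = causal_mask(n_gen)   # suffix is causally masked
    return mask

print(prefix_lm_mask(2, 2))
# [[  0.   0. -inf -inf]
#  [  0.   0. -inf -inf]
#  [  0.   0.   0. -inf]
#  [  0.   0.   0.   0.]]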
== Subsequent work ==

=== Alternative activation functions ===

The original transformer uses the ReLU activation function. Other activation functions have been developed; the Llama series and PaLM used SwiGLU, while both GPT-1 and BERT used GELU. Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.

=== Alternative normalizations ===

The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm, which is used in the Llama series. Other examples include CapsuleNorm, ScaleNorm, and FixNorm.

=== Alternative positional encodings ===

Transformers may use other positional encoding methods than sinusoidal. The original Transformer paper reported using a learned positional encoding, but finding it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.

==== RoPE ====

RoPE (rotary positional embedding) is best explained by considering a list of 2-dimensional vectors $[(x_1^{(1)}, x_1^{(2)}), (x_2^{(1)}, x_2^{(2)}), (x_3^{(1)}, x_3^{(2)}), \dots]$. Now pick some angle $\theta$. Then RoPE encoding is
$$\text{RoPE}\big(x_m^{(1)}, x_m^{(2)}, m\big) = \begin{pmatrix} \cos m\theta & -\sin m\theta \\ \sin m\theta & \cos m\theta \end{pmatrix} \begin{pmatrix} x_m^{(1)} \\ x_m^{(2)} \end{pmatrix} = \begin{pmatrix} x_m^{(1)} \cos m\theta - x_m^{(2)} \sin m\theta \\ x_m^{(2)} \cos m\theta + x_m^{(1)} \sin m\theta \end{pmatrix}$$
Equivalently, if we write the 2-dimensional vectors as complex numbers $z_m := x_m^{(1)} + i x_m^{(2)}$, then RoPE encoding is just multiplication by an angle:
$$\text{RoPE}\big(z_m, m\big) = e^{im\theta} z_m$$
For a list of $2n$-dimensional vectors, a RoPE encoder is defined by a sequence of angles $\theta^{(1)}, \dots, \theta^{(n)}$, and the RoPE encoding is applied to each pair of coordinates. The benefit of RoPE is that the dot product between two vectors depends on their relative location only:
$$\text{RoPE}\big(x, m\big)^T \text{RoPE}\big(y, n\big) = \text{RoPE}\big(x, m+k\big)^T \text{RoPE}\big(y, n+k\big)$$
for any integer $k$.
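A minimal NumPy sketch of RoPE using the complex-number view, verifying the relative-position property; the choice of angles below is a common convention, shown here for illustration:

import numpy as np

def rope(x, m, thetas):
    # Rotate each coordinate pair of x (length 2n) by m * theta_k
    z = x[0::2] + 1j * x[1::2]                # pack pairs into complex numbers
    z = z * np.exp(1j * m * thetas)           # multiply pair k by exp(i*m*theta_k)
    out = np.empty_like(x, dtype=float)
    out[0::2], out[1::2] = z.real, z.imag
    return out

d = 8
thetas = 10000.0 ** (-np.arange(d // 2) / (d // 2))   # illustrative angle schedule
rng = np.random.default_rng(0)
x, y = rng.normal(size=d), rng.normal(size=d)

# Dot products depend only on relative position: shift both positions by k = 7
a = rope(x, 3, thetas) @ rope(y, 5, thetas)
b = rope(x, 10, thetas) @ rope(y, 12, thetas)
assert np.allclose(a, b)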
==== ALiBi ====

ALiBi (Attention with Linear Biases) is not a replacement for the positional encoder of the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}} + sB\right)V$$
Here, $s$ is a real number ("scalar"), and $B$ is the linear bias matrix defined by
$$B = \begin{pmatrix} 0 & 1 & 2 & 3 & \cdots \\ -1 & 0 & 1 & 2 & \cdots \\ -2 & -1 & 0 & 1 & \cdots \\ -3 & -2 & -1 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
in other words, $B_{i,j} = j - i$. The idea is that the linear bias matrix is a softened mask: just as $0$ represents full attention paid and $-\infty$ represents no attention paid, the linear bias matrix increases attention paid in one direction and decreases attention paid in the other direction.

ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder of the original transformer, as well as RoPE and many others, are located).

==== Relative Position Encodings ====

Relative Position Encodings are similar to ALiBi, but more generic:
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}} + B\right)V$$
where $B$ is a Toeplitz matrix, that is, $B_{i,j} = B_{i',j'}$ whenever $i - j = i' - j'$. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".
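A sketch of the ALiBi bias matrix as defined above (which is itself a Toeplitz matrix, so it also serves as one instance of a relative position encoding). In practice each attention head gets its own slope $s$, and the bias is added to the pre-softmax scores alongside any causal mask:

import numpy as np

def alibi_bias(n, s):
    # ALiBi linear bias matrix: B[i, j] = j - i, scaled by the per-head slope s
    idx = np.arange(n)
    return s * (idx[None, :] - idx[:, None])

print(alibi_bias(4, 1.0))
# [[ 0.  1.  2.  3.]
#  [-1.  0.  1.  2.]
#  [-2. -1.  0.  1.]
#  [-3. -2. -1.  0.]]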
=== Efficient implementation ===

The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.

==== KV caching ====

When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.

If a transformer is used with a baked-in prompt, such as "You are a customer support agent...", then the key and value vectors can be computed for the prompt and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.

==== FlashAttention ====

FlashAttention is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.

An improved version, FlashAttention-2, was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and for multi-query attention (MQA) and grouped-query attention (GQA). Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
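Returning to the KV caching idea above, a minimal single-head sketch of one autoregressive decoding step: only the newest token's key and value are computed, and previous ones are reused from the cache (softmax is reused from the earlier block):

import numpy as np

def decode_step(x_new, Wq, Wk, Wv, cache):
    # x_new: embedding of the newest token, shape (d,)
    q = x_new @ Wq
    cache["K"] = np.vstack([cache["K"], x_new @ Wk])   # append new key only
    cache["V"] = np.vstack([cache["V"], x_new @ Wv])   # append new value only
    d_k = cache["K"].shape[-1]
    w = softmax(q @ cache["K"].T / np.sqrt(d_k))       # attend to all cached tokens
    return w @ cache["V"]

d = 16
rng = np.random.default_rng(0)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
cache = {"K": np.empty((0, d)), "V": np.empty((0, d))}
for token_embedding in rng.normal(size=(3, d)):        # three decoding steps
    out = decode_step(token_embedding, Wq, Wk, Wv, cache)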
==== Multi-Query Attention ====

Multi-Query Attention changes the multiheaded attention mechanism. Whereas normally
$$\text{MultiheadedAttention}(Q, K, V) = \text{Concat}_{i \in [n_{\text{heads}}]}\left(\text{Attention}(XW_i^Q, XW_i^K, XW_i^V)\right) W^O$$
with Multi-Query Attention, there is just one $W^K, W^V$ shared by all heads:
$$\text{MultiQueryAttention}(Q, K, V) = \text{Concat}_{i \in [n_{\text{heads}}]}\left(\text{Attention}(XW_i^Q, XW^K, XW^V)\right) W^O$$
This has a neutral effect on model quality and training speed, but increases inference speed.

More generally, grouped-query attention (GQA) partitions attention heads into groups, each of which shares the key-value pair. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.

Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for the query and one for the key-value (KV vector). This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.
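A sketch of MQA in the running NumPy style (attention is reused from the earlier block): the per-head query projections remain, but a single shared key and value projection is computed once for all heads, which is what shrinks the KV cache at inference time:

import numpy as np

def multi_query_attention(X, Wq, Wk, Wv, Wo):
    K, V = X @ Wk, X @ Wv                     # computed once, shared by all heads
    heads = [attention(X @ Wq[i], K, V) for i in range(Wq.shape[0])]
    return np.concatenate(heads, axis=-1) @ Wo

n_heads, d_emb, d_head = 12, 768, 64
rng = np.random.default_rng(0)
Wq = rng.normal(size=(n_heads, d_emb, d_head)) * 0.02
Wk, Wv = (rng.normal(size=(d_emb, d_head)) * 0.02 for _ in range(2))
Wo = rng.normal(size=(n_heads * d_head, d_emb)) * 0.02
print(multi_query_attention(rng.normal(size=(5, d_emb)), Wq, Wk, Wv, Wo).shape)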
==== Speculative decoding ====

Speculative decoding is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly. The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.

Suppose we have two transformer models like GPT-3 and GPT-3-small, both with a context window size of 512. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token $x_1, x_2, \dots, x_{512}$, taking time $512 T_{\text{GPT-3}}$. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each $x_t$ is indeed the token with the largest log-likelihood in the $t$-th output.

In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose we use GPT-3-small to generate four speculative tokens: $\tilde{x}_1, \tilde{x}_2, \tilde{x}_3, \tilde{x}_4$. This only takes $4 T_{\text{GPT-3-small}}$. These tokens are then run through the larger GPT-3 in one go. Suppose that $\tilde{x}_1$ and $\tilde{x}_2$ are verified by GPT-3 as what it would have picked; then those are kept, but $\tilde{x}_3$ is not, so $\tilde{x}_3, \tilde{x}_4$ are discarded, and GPT-3 is run on those. This would take $4 T_{\text{GPT-3-small}} + 3 T_{\text{GPT-3}}$, which might be shorter than $4 T_{\text{GPT-3}}$.

For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding were not used.
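A toy sketch of the greedy verification step described above: draft tokens are kept up to the first disagreement with the large model's argmax, and the large model's own token is taken at that position. The logits here are random stand-ins for one verify pass of the large model:

import numpy as np

def speculative_verify(draft_tokens, large_model_logits):
    accepted = []
    for t, proposed in enumerate(draft_tokens):
        best = int(np.argmax(large_model_logits[t]))
        if proposed == best:
            accepted.append(proposed)     # large model agrees: keep the draft token
        else:
            accepted.append(best)         # first mismatch: substitute and stop
            break
    return accepted

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 5))          # one verify pass over 4 drafts, vocab of 5
good = [int(np.argmax(l)) for l in logits]
drafts = [good[0], good[1], (good[2] + 1) % 5, good[3]]   # third draft is wrong
print(speculative_verify(drafts, logits))  # first two kept, third corrected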
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which is then un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.

=== Sub-quadratic transformers ===

Training transformer-based architectures can be expensive, especially for long inputs. Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows. In the audio domain, SepTr decouples the attention in the time and frequency domains. Long Range Arena (2020) is a standard benchmark for comparing the behavior of transformer architectures over long inputs.

==== Alternative attention graphs ====

The standard attention graph is either all-to-all or causal, both of which scale as $O(N^2)$, where $N$ is the number of tokens in a sequence. Reformer (2020) reduces the computational load from $O(N^2)$ to $O(N \ln N)$ by using locality-sensitive hashing and reversible layers. Sparse attention uses attention graphs that grow more slowly than $O(N^2)$. For example, BigBird (2020) uses random small-world networks which grow as $O(N)$.

Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.

==== Random Feature Attention ====

Random Feature Attention (2021) uses Fourier random features:
$$\varphi(x) = \frac{1}{\sqrt{D}}\left[\cos\langle w_1, x\rangle, \sin\langle w_1, x\rangle, \cdots, \cos\langle w_D, x\rangle, \sin\langle w_D, x\rangle\right]^T$$
where $w_1, \dots, w_D$ are independent samples from the normal distribution $N(0, \sigma^2 I)$. This choice of parameters satisfies
$$\mathbb{E}[\langle \varphi(x), \varphi(y)\rangle] = e^{-\frac{\|x-y\|^2}{2\sigma^2}}$$
or equivalently
$$e^{\langle x, y\rangle/\sigma^2} = \mathbb{E}\left[\left\langle e^{\|x\|^2/2\sigma^2}\varphi(x), e^{\|y\|^2/2\sigma^2}\varphi(y)\right\rangle\right] \approx \left\langle e^{\|x\|^2/2\sigma^2}\varphi(x), e^{\|y\|^2/2\sigma^2}\varphi(y)\right\rangle$$
Consequently, one-headed attention with one query can be written as
$$\text{Attention}(q, K, V) = \text{softmax}\left(\frac{qK^{\mathrm{T}}}{\sqrt{d_k}}\right)V \approx \frac{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2}\varphi(k_i)v_i^T}{\varphi(q)^T \sum_i e^{\|k_i\|^2/2\sigma^2}\varphi(k_i)}$$
where $\sigma = d_K^{1/4}$. Similarly for multiple queries, and for multiheaded attention.
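A minimal NumPy sketch of the random feature map $\varphi$, checking the kernel identity above in the case $\sigma = 1$ (so the sampling distribution is simply $N(0, I)$); the approximation tightens as $D$ grows:

import numpy as np

def random_features(x, W):
    # Fourier random features phi(x); rows of W are the samples w_1 .. w_D
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[0])

# Check <phi(x), phi(y)> ~= exp(-||x - y||^2 / 2) for sigma = 1
d, D = 4, 20000
rng = np.random.default_rng(0)
W = rng.normal(size=(D, d))                   # w_i ~ N(0, I)
x, y = rng.normal(size=d), rng.normal(size=d)
approx = random_features(x, W) @ random_features(y, W)
exact = np.exp(-np.sum((x - y) ** 2) / 2)
print(approx, exact)                          # close for large D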
This approximation can be computed in linear time, as we can compute the matrix $\varphi(k_i)v_i^T$ first, then multiply it with the query. In essence, we have managed to obtain a more precise version of
$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^{\mathrm{T}}}{\sqrt{d_k}}\right)V \approx Q(K^T V / \sqrt{d_k})$$
Performer (2022) uses the same Random Feature Attention, but $w_1, \dots, w_D$ are first independently sampled from the normal distribution $N(0, \sigma^2 I)$, then Gram-Schmidt processed.

=== Multimodality ===

Transformers can also be used or adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning. LLaVA is a vision-language model composed of a language model (Vicuna-13B) and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.

Vision transformers adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer and later Whisper follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors, and treated like tokens in a standard transformer.
Perceivers are a variant of Transformers designed for multimodality.

For image generation, notable architectures are DALL-E 1 (2021), Parti (2022), Phenaki (2023), and Muse (2023). Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image. Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted. Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.

== Applications ==

The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:

- machine translation
- time series prediction
- document summarization
- document generation
- named entity recognition (NER)
- writing computer code based on requirements expressed in natural language
- speech-to-text

Beyond traditional NLP, the transformer architecture has had success in other applications, such as:

- biological sequence analysis
- video understanding
- protein folding (such as AlphaFold)
- evaluating chess board positions; using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level
== See also ==

- seq2seq – Family of machine learning approaches
- Perceiver – Variant of Transformer designed for multimodal data
- Vision transformer – Machine learning model for vision processing
- Large language model – Type of machine learning model
- BERT (language model) – Series of language models developed by Google AI
- Generative pre-trained transformer – Type of large language model
- T5 (language model) – Series of large language models developed by Google AI
Eerik Kumari (born Erik Mathias Sits; 7 March 1912 – 8 January 1984) was a biologist and a pioneer of ornithology and nature conservation in Estonia. He was born in Kirbla, Lihula Parish. He was the director of the Institute of Zoology and Botany at the Estonian Academy of Sciences from 1952 to 1977, and the president of the Estonian Naturalists' Society from 1954 to 1964. The Eerik Kumari Award was established in 1989 in his name to honor those who have excelled in biology in Estonia.

== References ==

Eerik Kumari memorial collection, archived 2007-06-09 at the Wayback Machine
The one gene–one enzyme hypothesis is the idea that genes act through the production of enzymes, with each gene responsible for producing a single enzyme that in turn affects a single step in a metabolic pathway. The concept was proposed by George Beadle and Edward Tatum in an influential 1941 paper on genetic mutations in the mold Neurospora crassa, and subsequently was dubbed the "one gene–one enzyme hypothesis" by their collaborator Norman Horowitz. In 2004, Horowitz reminisced that "these experiments founded the science of what Beadle and Tatum called 'biochemical genetics.' In actuality they proved to be the opening gun in what became molecular genetics and all the developments that have followed from that." The development of the one gene–one enzyme hypothesis is often considered the first significant result in what came to be called molecular biology. Although it has been extremely influential, the hypothesis was recognized soon after its proposal to be an oversimplification. Even the subsequent reformulation of the "one gene–one polypeptide" hypothesis is now considered too simple to describe the relationship between genes and proteins. == Origin == Although some instances of errors in metabolism following Mendelian inheritance patterns were known earlier, beginning with the 1902 identification by Archibald Garrod of alkaptonuria as a Mendelian recessive trait, for the most part genetics could not be applied to metabolism through the late 1930s. Another of the exceptions was the work of Boris Ephrussi and George Beadle, two geneticists working on the eye color pigments of Drosophila melanogaster fruit flies in the Caltech laboratory of Thomas Hunt Morgan. In the mid-1930s they found that genes affecting eye color appeared to be serially dependent, and that the normal red eyes of Drosophila were the result of pigments that went through a series of transformations; different eye color gene mutations disrupted
the transformations at different points in the series. Thus, Beadle reasoned that each gene was responsible for an enzyme acting in the metabolic pathway of pigment synthesis. However, because it was a relatively superficial pathway rather than one shared widely by diverse organisms, little was known about the biochemical details of fruit fly eye pigment metabolism. Studying that pathway in more detail required isolating pigments from the eyes of flies, an extremely tedious process.

After moving to Stanford University in 1937, Beadle began working with biochemist Edward Tatum to isolate the fly eye pigments. After some success with this approach—they identified one of the intermediate pigments shortly after another researcher, Adolf Butenandt, beat them to the discovery—Beadle and Tatum switched their focus to an organism that made genetic studies of biochemical traits much easier: the bread mold Neurospora crassa, which had recently been subjected to genetic research by one of Thomas Hunt Morgan's researchers, Carl C. Lindegren. Neurospora had several advantages: it required a simple growth medium, it grew quickly, and because of the production of ascospores during reproduction it was easy to isolate genetic mutants for analysis. They produced mutations by exposing the fungus to X-rays, and then identified strains that had metabolic defects by varying the growth medium.

This work of Beadle and Tatum led almost at once to an important generalization: most mutants unable to grow on minimal medium but able to grow on "complete" medium each require the addition of only one particular supplement for growth on minimal medium. If the synthesis of a particular nutrient (such as an amino acid or vitamin) was disrupted by mutation, that mutant strain could be grown by adding the necessary nutrient to the medium. This finding suggested that most mutations affected only a single metabolic pathway.
Further evidence obtained soon after the initial findings tended to show that generally only a single step in the pathway is blocked. Following their first report of three such auxotroph mutants in 1941, Beadle and Tatum used this method to create a series of related mutants and determined the order in which amino acids and some other metabolites were synthesized in several metabolic pathways. The obvious inference from these experiments was that each gene mutation affects the activity of a single enzyme. This led directly to the one gene–one enzyme hypothesis, which, with certain qualifications and refinements, has remained essentially valid to the present day.

As recalled by Horowitz et al., the work of Beadle and Tatum also demonstrated that genes have an essential role in biosyntheses. At the time of the experiments (1941), non-geneticists still generally believed that genes governed only trivial biological traits, such as eye color and bristle arrangement in fruit flies, while basic biochemistry was determined in the cytoplasm by unknown processes. Also, many respected geneticists thought that gene action was far too complicated to be resolved by any simple experiment. Thus Beadle and Tatum brought about a fundamental revolution in our understanding of genetics, for which they were awarded a Nobel Prize in Physiology or Medicine in 1958.

The nutritional mutants of Neurospora also proved to have practical applications; in one of the early, if indirect, examples of military funding of science in the biological sciences, Beadle garnered additional research funding (from the Rockefeller Foundation and an association of manufacturers of military rations) to develop strains that could be used to assay the nutrient content of foodstuffs, to ensure adequate nutrition for troops in World War II.

== The hypothesis and alternative interpretations ==

In their first Neurospora paper, published in the November 15, 1941, edition
of the Proceedings of the National Academy of Sciences, Beadle and Tatum noted that it was "entirely tenable to suppose that these genes which are themselves a part of the system, control or regulate specific reactions in the system either by acting directly as enzymes or by determining the specificities of enzymes", an idea that had been suggested, though with limited experimental support, as early as 1917; they offered new evidence to support that view, and outlined a research program that would enable it to be explored more fully. By 1945, Beadle, Tatum and others, working with Neurospora and other model organisms such as E. coli, had produced considerable experimental evidence that each step in a metabolic pathway is controlled by a single gene. In a 1945 review, Beadle suggested that "the gene can be visualized as directing the final configuration of a protein molecule and thus determining its specificity." He also argued that "for reasons of economy in the evolutionary process, one might expect that with few exceptions the final specificity of a particular enzyme would be imposed by only one gene."

At the time, genes were widely thought to consist of proteins or nucleoproteins (although the Avery–MacLeod–McCarty experiment and related work was beginning to cast doubt on that idea). However, the proposed connection between a single gene and a single protein enzyme outlived the protein theory of gene structure. In a 1948 paper, Norman Horowitz named the concept the "one gene–one enzyme hypothesis".

Although influential, the one gene–one enzyme hypothesis was not unchallenged. Among others, Max Delbrück was skeptical that only a single enzyme was actually involved at each step along metabolic pathways. For many who did accept the results, it strengthened the link between genes and enzymes, so that some biochemists thought that genes were enzymes; this was
consistent with other work, such as studies of the reproduction of tobacco mosaic virus (which was known to have heritable variations and which followed the same pattern of autocatalysis as many enzymatic reactions) and the crystallization of that virus as an apparently pure protein. At the start of the 1950s, the Neurospora findings were widely admired, but the prevailing view in 1951 was that the conclusion Beadle had drawn from them was a vast oversimplification. Beadle wrote in 1966 that, after reading the 1951 Cold Spring Harbor Symposium on Genes and Mutations, he had the impression that supporters of the one gene–one enzyme hypothesis "could be counted on the fingers of one hand with a couple of fingers left over." By the early 1950s, most biochemists and geneticists considered DNA the most likely candidate for the physical basis of the gene, and the one gene–one enzyme hypothesis was reinterpreted accordingly.

=== One gene–one polypeptide ===

In attributing an instructional role to genes, Beadle and Tatum implicitly accorded genes an informational capability. This insight provided the foundation for the concept of a genetic code. However, it was not until experiments showed that DNA was the genetic material, that proteins consist of a defined linear sequence of amino acids, and that DNA structure contains a linear sequence of base pairs that there was a clear basis for solving the genetic code.

By the early 1950s, advances in biochemical genetics (spurred in part by the original hypothesis) made the one gene–one enzyme hypothesis seem very unlikely, at least in its original form. Beginning in 1957, Vernon Ingram and others showed through electrophoresis and 2D chromatography that genetic variations in proteins (such as sickle cell hemoglobin) could be limited to differences in just a single polypeptide chain in a multimeric protein, leading to a "one
gene–one polypeptide" hypothesis instead. According to geneticist Rowland H. Davis, "By 1958 – indeed, even by 1948 – one gene, one enzyme was no longer a hypothesis to be resolutely defended; it was simply the name of a research program." Presently, the one gene–one polypeptide perspective cannot account for the various spliced versions in many eukaryotic organisms which use a spliceosome to individually prepare an RNA transcript depending on the various inter- and intra-cellular environmental signals. This splicing was discovered in 1977 by Phillip Sharp and Richard J. Roberts.

== Possible anticipation of Beadle and Tatum's results ==

Historian Jan Sapp has studied the controversy in regard to German geneticist Franz Moewus who, as some leading geneticists of the 1940s and 50s argued, generated similar results before Beadle and Tatum's celebrated 1941 work. Working on the alga Chlamydomonas, Moewus published, in the 1930s, results that showed that different genes were responsible for different enzymatic reactions in the production of the hormones that controlled the organism's reproduction. However, as Sapp skillfully details, those results were challenged by others who found the data "too good to be true" statistically, and the results could not be replicated.

== See also ==

- Edward Lawrie Tatum
- George Wells Beadle
- Neurospora crassa
- Norman Horowitz (geneticist)
- Central dogma of molecular biology

== References ==

Fruton JS (1999). Proteins, Enzymes, Genes: The Interplay of Chemistry and Biology. New Haven: Yale University Press. ISBN 0-300-07608-8.
Kay LE (1993). The Molecular Vision of Life: Caltech, The Rockefeller Foundation, and the Rise of the New Biology. New York: Oxford University Press. ISBN 0-19-511143-5.
Morange M (1998). A History of Molecular Biology. Cobb M (trans.). Cambridge: Harvard University Press. ISBN 0-674-39855-6.

== Further reading ==

Hickman M, Cairns J (2003). "The Centenary of the One-Gene One-Enzyme Hypothesis". Genetics. 163 (3): 839–841. doi:10.1093/genetics/163.3.839. PMC
1462495. PMID 12663526. Horowitz NH (1995). "One-Gene-One-Enzyme: Remembering Biochemical Genetics". Protein Science. 4 (5): 1017–1019. doi:10.1002/pro.5560040524. PMC 2143113. PMID 7663338.
Post-transcriptional regulation is the control of gene expression at the RNA level. It occurs once RNA polymerase has attached to the gene's promoter and is synthesizing the nucleotide sequence; as the name indicates, it acts between the transcription phase and the translation phase of gene expression. These controls are critical for the regulation of many genes across human tissues. Post-transcriptional regulation also plays a major role in cell physiology and is implicated in pathologies such as cancer and neurodegenerative diseases. == Mechanism == After transcripts are produced, their stability and distribution are regulated (post-transcriptional regulation) by means of RNA-binding proteins (RBPs) that control the various steps and rates of events such as alternative splicing, nuclear degradation (exosome), processing, nuclear export (three alternative pathways), sequestration in P-bodies for storage or degradation, and ultimately translation. These proteins act through an RNA recognition motif (RRM) that binds a specific sequence or secondary structure of the transcript, typically in its 5' or 3' UTR. In addition, double-stranded RNA sequences, which are broken down into siRNAs inside the organism, can base-pair with an mRNA and inhibit its expression in the cell. Modulation of capping, splicing, addition of a poly(A) tail, sequence-specific nuclear export rates and, in several contexts, sequestration of the RNA transcript occurs in eukaryotes but not in prokaryotes. This modulation is carried out by a protein or transcript that is itself regulated and that may have an affinity for certain sequences. Capping adds a modified guanine nucleotide (7-methylguanosine) to the five prime end of the mRNA through a 5'-5' linkage, so that the end chemically resembles a three prime end; this protects the mRNA from the 5' exonucleases that degrade foreign RNA. The cap also helps in ribosomal binding and serves as a unique mark of a correctly processed transcript.
Therefore, it helps to select the mRNAs that are going to be translated. RNA splicing removes the introns, noncoding regions that are transcribed into RNA, so that the mRNA can direct protein synthesis. Cells do this with spliceosomes, which bind on either side of an intron, loop the intron into a circle and then cleave it off; the ends of the flanking exons are then joined.
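The excise-and-rejoin logic of splicing is easy to illustrate computationally. Below is a minimal sketch, not any standard bioinformatics API: given a pre-mRNA and a hypothetical set of intron coordinates, it drops each intron span and joins the flanking exons, mirroring the outcome of the spliceosome's cut-and-ligate chemistry. The sequence and coordinates are invented for illustration (though each toy intron begins with GU and ends with AG, as most real introns do).

```python
# Minimal sketch: build a mature mRNA by excising introns from a
# pre-mRNA. Introns are given as half-open (start, end) coordinate
# pairs; the sequence and coordinates below are invented.

def splice(pre_mrna: str, introns: list) -> str:
    """Remove each intron span and join the flanking exons."""
    pieces, pos = [], 0
    for start, end in sorted(introns):
        pieces.append(pre_mrna[pos:start])  # exon before this intron
        pos = end                           # skip over the intron
    pieces.append(pre_mrna[pos:])           # final exon
    return "".join(pieces)

# exon1 + intron1 (GU...AG) + exon2 + intron2 (GU...AG) + exon3
pre = "AUGGCG" + "GUAAGUUUAG" + "GCUUAA" + "GUAUGCAG" + "CCUGA"
print(splice(pre, [(6, 16), (22, 30)]))  # -> AUGGCGGCUUAACCUGA
```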
Addition of a poly(A) tail, otherwise known as polyadenylation: a stretch of RNA made solely of adenine bases is added to the 3' end, where it acts as a buffer against 3' exonucleases and so increases the half-life of the mRNA. In addition, a long poly(A) tail can increase translation: poly(A)-binding protein (PABP) binds a long poly(A) tail and mediates the interaction between eIF4E and eIF4G, which encourages the initiation of translation. RNA editing is a process that results in sequence variation in the RNA molecule and is catalyzed by enzymes. These enzymes include the adenosine deaminases acting on RNA (ADARs), which convert specific adenosine residues to inosine in an mRNA molecule by hydrolytic deamination. Three ADAR enzymes have been cloned, ADAR1, ADAR2 and ADAR3, although only the first two subtypes have been shown to have RNA editing activity. Many mRNAs are vulnerable to the effects of RNA editing, including the glutamate receptor subunits GluR2, GluR3, GluR4, GluR5 and GluR6 (components of the AMPA and kainate receptors), the serotonin 2C receptor, the GABA-alpha3 receptor subunit, the tryptophan hydroxylase enzyme TPH2, the hepatitis delta virus, and more than 16% of microRNAs. In addition to ADAR enzymes, there are CDAR enzymes, which convert cytosines in specific RNA molecules to uracil. These enzymes are termed 'APOBEC' and have genetic loci at 22q13, a region close to the chromosomal deletion that occurs in velocardiofacial syndrome (22q11) and that is linked to psychosis. RNA editing is extensively studied in relation to infectious diseases, because the editing process alters viral function. mRNA stability can be manipulated in order to control its half-life, and the poly(A) tail has some effect on this stability, as stated above. Stable mRNAs can have a half-life of up to a day or more, allowing the production of more protein product; unstable mRNAs are used for regulation that must occur quickly. mRNA stability is an important factor based on mRNA degradation rates. Nuclear export: only about one-twentieth of the total RNA leaves the nucleus to proceed to translation. The remaining RNA molecules, usually excised introns and damaged RNAs, are kept in the nucleus, where they are eventually degraded. mRNA leaves the nucleus only when it is ready, which means that nuclear export is delayed until processing is complete. Some regulatory mechanisms target this nuclear export step to control gene expression; an example of regulated nuclear transport of mRNA can be observed in HIV. == Transcription attenuation == Transcription attenuation is a type of prokaryotic regulation that happens only under certain conditions. This process occurs at the beginning of RNA transcription and causes the RNA chain to terminate before the gene is expressed. Transcription attenuation occurs when the nascent RNA chain adopts an alternative secondary structure that does not interact appropriately with the RNA polymerase. In order for gene expression to proceed, regulatory proteins must bind to the RNA chain and relieve the attenuation, which is costly for the cell. In prokaryotes there are two mechanisms of transcription attenuation. These two mechanisms are intrinsic termination and
factor-dependent termination. - In the intrinsic termination mechanism, also known as Rho-independent termination, the RNA chain forms a stable hairpin structure at the 3' end of the gene that causes the RNA polymerase to stop transcribing. The stem-loop is followed by a run of U's (a poly-U tail) that stalls the polymerase, so the RNA hairpin has enough time to form. The polymerase then dissociates because of the weak pairing between the poly-U stretch of the transcript RNA and the poly-A stretch of the DNA template, causing the mRNA to be released prematurely and inhibiting transcription. This mechanism is called Rho-independent because, unlike factor-dependent termination, it requires no additional protein factor, making it a simpler way for the cell to regulate gene transcription. Some examples of bacteria in which this type of regulation predominates are Neisseria, Psychrobacter and Pasteurellaceae, as well as the majority of bacteria in the phylum Firmicutes. - In factor-dependent termination, a protein complex containing the Rho factor binds to a segment of the RNA transcript. The Rho complex then moves along the RNA in the 3' direction looking for a paused RNA polymerase; if one is found, transcription immediately stops and the transcript is aborted. Even though this system is not as common as the one described above, some bacteria use this type of termination, for example at the tna operon of E. coli. This type of regulation is not effective in eukaryotes because transcription occurs in the nucleus while translation occurs in the cytoplasm: the two processes are uncoupled, so an attenuation mechanism that depends on their happening together cannot operate as it would if both took place in the same compartment.
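The hairpin-plus-poly-U signature of intrinsic terminators described above also lends itself to a simple computational illustration. The following sketch is a toy scanner rather than a real terminator predictor: it looks for an inverted repeat (the hairpin stem) enclosing a short loop and followed by a run of U's. The window sizes, thresholds and sequence are illustrative assumptions; genuine prediction tools also score stem stability.

```python
# Minimal sketch: scan an RNA for candidate intrinsic (Rho-independent)
# terminators -- an inverted repeat (hairpin stem) enclosing a short
# loop, followed by a run of U's. All thresholds are illustrative.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    return "".join(COMPLEMENT[b] for b in reversed(rna))

def candidate_terminators(rna, stem=6, loop=4, min_u=6):
    """Return start positions of stem-loops followed by >= min_u U's."""
    hits = []
    for i in range(len(rna) - (2 * stem + loop + min_u) + 1):
        left = rna[i : i + stem]
        right = rna[i + stem + loop : i + 2 * stem + loop]
        tail = rna[i + 2 * stem + loop : i + 2 * stem + loop + min_u]
        # The two stem arms must base-pair (right arm = reverse complement
        # of left arm); the poly-U tail is where the polymerase stalls,
        # giving the hairpin time to form and dislodge the transcript.
        if right == reverse_complement(left) and tail == "U" * min_u:
            hits.append(i)
    return hits

seq = "AA" + "GGCCGC" + "AUAA" + "GCGGCC" + "UUUUUUUU" + "AACG"
print(candidate_terminators(seq))  # -> [2]
```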
== MicroRNA mediated regulation == MicroRNAs (miRNAs) appear to regulate the expression of more than 60% of the protein-coding genes of the human genome. If an miRNA is abundant it can behave as a "switch", turning some genes on or off. However, altered expression of many miRNAs leads to only a modest 1.5- to 4-fold change in the protein expression of their target genes. Individual miRNAs often repress several hundred target genes. Repression usually occurs either through translational silencing of the mRNA or through degradation of the mRNA, via complementary binding, mostly to specific sequences in the 3' untranslated region of the target gene's mRNA. Both translational silencing and mRNA degradation are implemented through the RNA-induced silencing complex (RISC).
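The complementarity rule behind miRNA targeting can be sketched in a few lines. The snippet below is a simplified illustration, not a real target-prediction method: it scans a 3' UTR for sites perfectly complementary to the miRNA "seed" (nucleotides 2-8). The miRNA is a let-7-like sequence and the UTR is invented; real prediction also weighs site context and conservation.

```python
# Minimal sketch: locate canonical miRNA seed-match sites in a 3' UTR.
# A 7-mer site pairs with miRNA nucleotides 2-8 (the "seed"); RISC is
# then guided to such sites. Sequences are toy examples.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr):
    """Return UTR positions whose 7-mer pairs with miRNA bases 2-8."""
    seed = mirna[1:8]  # nucleotides 2-8 in 1-based numbering
    # The site runs antiparallel to the miRNA, so compare against the
    # reverse complement of the seed.
    site = "".join(COMPLEMENT[b] for b in reversed(seed))
    return [i for i in range(len(utr) - 6) if utr[i : i + 7] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # let-7-like miRNA
utr = "AACUACCUCAGGGAAGCUACCUCA"   # toy 3' UTR containing two sites
print(seed_sites(mirna, utr))      # -> [2, 16]
```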
== Feedback in the regulation of RNA binding proteins == RNA-binding proteins (RBPs) assemble dynamically with mRNAs to form messenger ribonucleoprotein complexes (mRNPs). These complexes are essential for the regulation of gene expression, ensuring that every step of the process is performed correctly, and they are therefore important control factors for protein levels and cell phenotypes. They also affect mRNA stability by regulating its conformation in response to the environment, stress or extracellular signals. Their ability to bind and control such a wide variety of RNA targets allows them to form complex post-transcriptional regulatory networks (PTRNs), which makes studying each RNA-binding protein individually a challenge. Owing to new methodological advances, however, the identification of RBPs is steadily expanding, showing that they belong to broad families of proteins. RBPs can significantly affect multiple biological processes and must be expressed very precisely. Overexpression can change the mRNA target rate, causing binding to low-affinity RNA sites and deleterious effects on cellular fitness; underexpression is also problematic because it can lead to cell death. RBPs are therefore regulated via auto-regulation, so they are in control of their own actions. Furthermore, they use both negative feedback, to maintain homeostasis, and positive feedback, to create binary genetic changes in the cell. In metazoans and bacteria, many genes involved in post-transcriptional regulation are themselves regulated post-transcriptionally. For Drosophila RBPs associated with splicing or nonsense-mediated decay, analyses of protein-protein and protein-RNA interaction profiles have revealed ubiquitous interactions with the RNA and protein products of the same gene. It remains unclear whether these observations are driven by ribosome-proximal or ribosome-mediated contacts, or whether some protein complexes, particularly RNPs, undergo co-translational assembly. == Significance == This area of study has recently gained more importance due to increasing evidence that post-transcriptional regulation plays a larger role than previously expected. Even though proteins with DNA-binding domains are more abundant than proteins with RNA-binding domains, a study by Cheadle et al. (2005) showed that during T-cell activation 55% of significant changes at the steady-state level had no corresponding changes at the transcriptional level, meaning they resulted from stability regulation alone. Furthermore, the RNA found in the nucleus is more complex than that found in the cytoplasm: more than 95% (by bases) of the RNA synthesized by RNA polymerase II never reaches the cytoplasm, mainly because of the removal of introns, which account for 80% of the total bases. Some studies have shown that even after processing, the levels of mRNA in the cytoplasm and the nucleus differ greatly. Developmental biology is a good source of models of regulation, but owing to technical difficulties it was easier to determine transcription factor cascades than regulation at the RNA level. Indeed, several key genes such as nanos are known to bind RNA, but often their targets are
unknown. Although RNA-binding proteins may regulate a large portion of the transcriptome post-transcriptionally, the targeting of a single gene is of particular interest to the scientific community for medical reasons; RNA interference and microRNAs, both examples of post-transcriptional regulation, regulate the destruction of RNA and can change chromatin structure. To study post-transcriptional regulation several techniques are used, such as RIP-Chip (RNA immunoprecipitation on chip). == microRNA role in cancer == Deficiency of expression of a DNA repair gene occurs in many cancers (see DNA repair defect and cancer risk and microRNA and DNA repair). Altered microRNA (miRNA) expression that either decreases accurate DNA repair or increases inaccurate microhomology-mediated end joining (MMEJ) DNA repair is often observed in cancers. Deficiency of accurate DNA repair may be a major source of the high frequency of mutations in cancer (see mutation frequencies in cancers). Repression of DNA repair genes in cancers by changes in the levels of microRNAs may be a more frequent cause of repression than mutation or epigenetic methylation of DNA repair genes. For instance, BRCA1 is employed in the accurate homologous recombinational repair (HR) pathway. Deficiency of BRCA1 can cause breast cancer. Down-regulation of BRCA1 due to mutation occurs in about 3% of breast cancers. Down-regulation of BRCA1 due to methylation of its promoter occurs in about 14% of breast cancers. However, increased expression of miR-182 down-regulates BRCA1 mRNA and protein expression, and increased miR-182 is found in 80% of breast cancers. In another example, a mutated, constitutively (persistently) expressed version of the oncogene c-Myc is found in many cancers. Among its many functions, c-Myc negatively regulates the microRNAs miR-150 and miR-22. These microRNAs normally repress expression of two genes essential for MMEJ, Lig3 and Parp1, thereby inhibiting this inaccurate, mutagenic DNA repair pathway. Muvarak et al.
showed, in leukemias, that constitutive expression of c-Myc, leading to down-regulation of miR-150 and miR-22, allowed increased expression of Lig3 and Parp1. This generates genomic instability through increased inaccurate MMEJ DNA repair, and likely contributes to progression to leukemia. To show the frequent ability of microRNAs to alter DNA repair expression, Hatano et al. performed a large screening study, in which 810 microRNAs were transfected into cells that were then subjected to ionizing radiation (IR). For 324 of these microRNAs, DNA repair was reduced (cells were killed more efficiently by IR) after transfection. For a further 75 microRNAs, DNA repair was increased, with less cell death after IR. This indicates that alterations in microRNAs may often down-regulate DNA repair, a likely important early step in progression to cancer. == See also == Cis-regulatory element Glossary of gene expression terms RNA interference == References == == External links == Wormbook.org on RNA-binding protein
Mycobiota (plural noun, no singular) are the group of all fungi present in a particular geographic region (e.g. "the mycobiota of Ireland") or habitat type (e.g. "the mycobiota of cocoa"). An analogous term is funga. == Human mycobiota == Mycobiota exist on the surface and in the gastrointestinal system of humans. There are as many as 66 genera and 184 species in the gastrointestinal tract of healthy people, most of them in the genus Candida. Though present on the skin and in the GI tract of healthy individuals, the normal resident mycobiota can become pathogenic in those who are immunocompromised, and such multispecies infections lead to higher mortality. In addition, hospital-acquired infections by C. albicans have become a major health concern; a high mortality rate of 40-60% is associated with systemic infection. The best-studied of these fungi are the Candida species, owing to their ability to become pathogenic in immunocompromised and even in healthy hosts. Yeasts are also present on the skin, such as Malassezia species, which consume oils secreted from the sebaceous glands. One example is Pityrosporum (Malassezia) ovale, which is lipid-dependent and found only on humans. P. ovale was later divided into two species, P. ovale and P. orbiculare, but current sources consider these terms to refer to a single species of fungus, with M. furfur the preferred name. == Other uses == There is a peer-reviewed mycological journal titled Mycobiota. == References ==
Protein electrophoresis is a method for analysing the proteins in a fluid or an extract. The electrophoresis may be performed with a small volume of sample in a number of alternative ways, with or without a supporting medium, namely agarose or polyacrylamide. Variants of gel electrophoresis include SDS-PAGE, free-flow electrophoresis, electrofocusing, isotachophoresis, affinity electrophoresis, immunoelectrophoresis, counterelectrophoresis, and capillary electrophoresis. Each variant has many subtypes with individual advantages and limitations. Gel electrophoresis is often performed in combination with electroblotting or immunoblotting to give additional information about a specific protein. == Denaturing gel methods == === SDS-PAGE === SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis, describes a collection of related techniques to separate proteins according to their electrophoretic mobility (a function of the molecular weight of a polypeptide chain) while in the denatured (unfolded) state. In most proteins, the binding of SDS to the polypeptide chain imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis. SDS is a strong detergent used to denature native proteins into unfolded, individual polypeptides. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. In this process, the intrinsic charges of the polypeptides become negligible compared to the negative charges contributed by SDS. The treated polypeptides thus become rod-like structures with a uniform charge density, that is, the same net negative charge per unit length. The electrophoretic mobilities of these proteins are then a linear function of the logarithms of their molecular weights.
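Because mobility is log-linear in molecular weight, an unknown band's size is routinely estimated from a standard curve fitted to a marker ladder. The snippet below is a minimal sketch of that calculation: the marker masses are typical commercial values, but the Rf (relative mobility) readings are invented for illustration, and any real gel needs its own calibration.

```python
# Minimal sketch: estimate a protein's molecular weight from its
# relative mobility (Rf) on an SDS-PAGE gel, using the log-linear
# relationship log10(MW) = a * Rf + b fitted to a marker ladder.
import math

# (molecular weight in kDa, relative mobility Rf) for a marker ladder;
# the Rf values are invented for this illustration.
ladder = [(97.0, 0.18), (66.0, 0.30), (45.0, 0.42), (31.0, 0.55),
          (21.5, 0.68), (14.4, 0.81)]

# Ordinary least-squares fit of log10(MW) against Rf
xs = [rf for _, rf in ladder]
ys = [math.log10(mw) for mw, _ in ladder]
n = len(ladder)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def estimate_mw(rf):
    """Interpolate an unknown band's molecular weight from its Rf."""
    return 10 ** (slope * rf + intercept)

print(f"{estimate_mw(0.50):.1f} kDa")  # band halfway down: ~36 kDa here
```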
== Native gel methods == Native gels, also known as non-denaturing gels, analyze proteins that are still in their folded state, so the electrophoretic mobility depends not only on the charge-to-mass ratio but also on the physical shape and size of the protein. === Blue native PAGE === BN-PAGE is a native PAGE technique in which the Coomassie brilliant blue dye provides the charges necessary for the electrophoretic separation of protein complexes. The disadvantage of Coomassie is that in binding to proteins it can act like a detergent, causing complexes to dissociate. Another drawback is the potential quenching of chemiluminescence (e.g. in subsequent western blot detection or activity assays) or of the fluorescence of proteins with prosthetic groups (e.g. heme or chlorophyll) or labelled with fluorescent dyes. === Clear native PAGE === CN-PAGE (commonly referred to as native PAGE) separates acidic water-soluble and membrane proteins in a polyacrylamide gradient gel. It uses no charged dye, so the electrophoretic mobility of proteins in CN-PAGE (in contrast to the charge-shift technique BN-PAGE) is related to the intrinsic charge of the proteins. The migration distance depends on the protein charge, its size and the pore size of the gel. In many cases this method has lower resolution than BN-PAGE, but CN-PAGE offers advantages whenever Coomassie dye would interfere with further analytical techniques; for example, it has been described as a very efficient microscale separation technique for FRET analyses. Additionally, as CN-PAGE does not require the harsh conditions of BN-PAGE, it can retain the supramolecular assemblies of membrane protein complexes that would be dissociated in BN-PAGE. === Preparative native PAGE === Because of the specific properties of the polyacrylamide gel, the electrophoresis buffer solution, the electrophoretic equipment and the standard parameters used, the folded protein complexes of interest separate cleanly and predictably, without the risk of denaturation. The separated proteins are continuously eluted into a physiological eluent and transported to a fraction collector. In four to five PAGE fractions each, the different metal cofactors can be identified and absolutely quantified by high-resolution ICP-MS. The associated structures of the isolated metalloproteins in these
fractions can be specifically determined by solution NMR spectroscopy. == Buffer systems == Most protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band. The formation of the ion gradient is achieved by choosing a pH value at which the ions of the buffer are only moderately charged compared to the SDS-coated proteins. These conditions provide an environment in which Kohlrausch's reactions determine the molar conductivity. As a result, SDS-coated proteins are concentrated severalfold into a thin zone on the order of 19 μm within a few minutes. At this stage all proteins migrate at the same speed, by isotachophoresis. This occurs in a region of the gel that has larger pores, so that the gel matrix does not retard the migration during the focusing or "stacking" event. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which produces a sieving effect that now determines the electrophoretic mobility of the proteins. At the same time, the separating part of the gel has a pH value at which the buffer ions on average carry a greater charge, causing them to "outrun" the SDS-covered proteins and eliminate the ion gradient, and thereby the stacking effect. A very widespread discontinuous buffer system is the tris-glycine or "Laemmli" system, which stacks at a pH of 6.8 and resolves at a pH of ~8.3-9.0. A drawback of this system is that these pH values may promote disulfide bond formation between
cysteine residues in the proteins, because the pKa of cysteine ranges from 8 to 9 and because the reducing agent present in the loading buffer does not co-migrate with the proteins. Recent advances in buffering technology alleviate this problem by resolving the proteins at a pH well below the pKa of cysteine (e.g., bis-tris, pH 6.5) and including reducing agents (e.g. sodium bisulfite) that move into the gel ahead of the proteins to maintain a reducing environment. An additional benefit of using buffers with lower pH values is that the acrylamide gel is more stable at lower pH, so the gels can be stored for long periods of time before use. === SDS gradient gel electrophoresis of proteins === As voltage is applied, the anions (and negatively charged sample molecules) migrate toward the positive electrode (anode) in the lower chamber; the leading ion is Cl− (high mobility and high concentration), and glycinate is the trailing ion (low mobility and low concentration). SDS-protein particles do not migrate freely at the boundary between the Cl− of the gel buffer and the Gly− of the cathode buffer. Friedrich Kohlrausch found that Ohm's law also applies to dissolved electrolytes. Because of the voltage drop between the Cl− and glycine buffers, proteins are compressed (stacked) into micrometer-thin layers. The boundary moves through a pore gradient and the protein stack gradually disperses due to an increase in the frictional resistance of the gel matrix. Stacking and unstacking occur continuously in the gradient gel, for every protein at a different position. For complete protein unstacking, the polyacrylamide gel concentration must exceed 16% T. The two-gel system of "Laemmli" is a simple gradient gel. The pH discontinuity of the buffers is of no significance for the separation quality, and a "stacking gel" with a different pH is not needed. == Visualization == The most
popular protein stain is Coomassie brilliant blue. It is an anionic dye that binds to proteins non-specifically. Proteins in the gel are fixed by acetic acid and simultaneously stained. The excess dye incorporated into the gel can be removed by destaining with the same solution without the dye. The proteins are detected as blue bands on a clear background. When a method more sensitive than Coomassie staining is needed, silver staining is usually used. Silver staining is a sensitive procedure for detecting trace amounts of proteins in gels, but it can also visualize nucleic acids or polysaccharides. Visualization methods that use no dye such as Coomassie or silver are also available on the market; for example, Bio-Rad Laboratories markets "stain-free" gels for SDS-PAGE gel electrophoresis. Alternatively, reversible fluorescent dyes, such as AzureRed or Azure TotalStain Q from Azure Biosystems, can be used. As in nucleic acid gel electrophoresis, a tracking dye is often used: anionic dyes of a known electrophoretic mobility are included in the sample buffer. A very common tracking dye is bromophenol blue. This dye is coloured at alkaline and neutral pH and is a small, negatively charged molecule that moves towards the anode; being highly mobile, it moves ahead of most proteins. == Medical applications == In medicine, protein electrophoresis is a method of analysing proteins, mainly in blood serum. Before the widespread use of gel electrophoresis, protein electrophoresis was performed as free-flow electrophoresis (on paper) or as immunoelectrophoresis. Traditionally, two classes of blood proteins are considered: serum albumin and globulin. They are generally equal in proportion, but albumin is much smaller as a molecule and is lightly negatively charged, leading to an accumulation of albumin on the electrophoretic gel. A small band before albumin represents transthyretin (also named prealbumin). Some forms of medication
or body chemicals can cause their own band, but it is usually small. Abnormal bands (spikes) are seen in monoclonal gammopathy of undetermined significance and in multiple myeloma, and are useful in the diagnosis of these conditions. The globulins are classified by their banding pattern (with their main representatives): The alpha (α) band consists of two parts, 1 and 2: α1 - α1-antitrypsin, α1-acid glycoprotein. α2 - haptoglobin, α2-macroglobulin, α2-antiplasmin, ceruloplasmin. The beta (β) band - transferrin, LDL, complement. The gamma (γ) band - immunoglobulins (IgA, IgD, IgE, IgG and IgM). Paraproteins (in multiple myeloma) usually appear in this band. == See also == Affinity electrophoresis Electroblotting Electrofocusing Fast parallel proteolysis (FASTpp) Gel electrophoresis Immunoelectrophoresis Immunofixation Native gel electrophoresis Paraprotein QPNC-PAGE SDD-AGE == References == == External links == Educational resource for protein electrophoresis Gel electrophoresis of proteins Archived 2021-01-26 at the Wayback Machine
The Camille and Henry Dreyfus Foundation is a New York City-based foundation founded in 1946 by chemist and investor Camille Dreyfus in honour of his brother, Henry Dreyfus. The two men invented the acetate yarn Celanese, and Henry Dreyfus was founder and chairman of British Celanese, parent of the Celanese Corporation of America. Following Camille's death in 1956, his wife, the opera singer Jean Tennyson, served as the foundation's president until her death in 1991. In 1971, the foundation sold a significant part of its holdings in the Celanese company. The foundation makes grants and awards prizes in support of chemistry research and education. These prizes include the Dreyfus Prize in the Chemical Sciences, the Camille Dreyfus Teacher-Scholar Awards, the Henry Dreyfus Teacher-Scholar Awards, the Machine Learning in the Chemical Sciences and Engineering awards, and the Jean Dreyfus Lectureship for Undergraduate Institutions. The foundation also sponsors two awards through the American Chemical Society: the ACS Award for Encouraging Women into Careers in the Chemical Sciences, and the ACS Award for Encouraging Disadvantaged Students into Careers in the Chemical Sciences. == Dreyfus Prize in the Chemical Sciences == The Dreyfus Prize in the Chemical Sciences is an award given to an individual researcher in chemistry. The prize, awarded biennially, consists of a citation, a medal, and a monetary award of $250,000. It is awarded by The Camille and Henry Dreyfus Foundation, Inc. to an individual in a selected area of chemistry "to recognize exceptional and original research that has advanced the field in a major way." == Camille Dreyfus Teacher-Scholar Awards == The Camille Dreyfus Teacher-Scholar Awards are given to early-career researchers in chemistry "to support the research and teaching careers of talented young faculty in the chemical sciences... who demonstrate leadership in research and education." The Dreyfus Teacher-Scholar program began in 1970. In 1994,
the program was divided into two parallel awards: the Camille Dreyfus Teacher-Scholar Awards Program, aimed at research universities, and the Henry Dreyfus Teacher-Scholar Awards Program, directed at primarily undergraduate institutions. The annually presented awards consist of a monetary prize of $75,000, which was increased to $100,000 starting in 2019. Seven winners of the Camille Dreyfus Teacher-Scholar Awards have gone on to win the Nobel Prize in Chemistry: Paul L. Modrich, Richard R. Schrock, Robert H. Grubbs, K. Barry Sharpless, Ahmed H. Zewail, Mario J. Molina and Yuan Tseh Lee. == Henry Dreyfus Teacher-Scholar Awards == The Henry Dreyfus Teacher-Scholar Awards are given to faculty at primarily undergraduate institutions (PUIs) "to support the research and teaching careers of talented young faculty in the chemical sciences at undergraduate institutions." The annually presented awards consist of a monetary prize of $75,000. == Machine Learning in the Chemical Sciences and Engineering == The Machine Learning in the Chemical Sciences and Engineering Awards are awards "for innovative projects in any area of Machine Learning (ML) consistent with the Foundation's broad objective to advance the chemical sciences and engineering." They were first awarded in 2020. == Jean Dreyfus Lectureship for Undergraduate Institutions == The Jean Dreyfus Lectureship awards "bring a leading researcher to a primarily undergraduate institution (PUI) to give at least two lectures in the chemical sciences." The annually presented awards consist of a monetary prize of $18,500. Before 2016, this Lectureship was known as the Jean Dreyfus Boissevain Lectureship for Undergraduate Institutions. == References == == External links == Official site
The Storm Book is a 1952 picture book written by Charlotte Zolotow and illustrated by Margaret Bloy Graham. The book tells the story of a summer storm from the perspective of a young boy. The book was a recipient of a 1953 Caldecott Honor for its illustrations. == References ==
Edward Lawrie Tatum (December 14, 1909 – November 5, 1975) was an American geneticist. He shared half of the Nobel Prize in Physiology or Medicine in 1958 with George Beadle for showing that genes control individual steps in metabolism. The other half of that year's award went to Joshua Lederberg. Tatum was an elected member of the United States National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences. == Education == Edward Lawrie Tatum was born on December 14, 1909, in Boulder, Colorado, to Arthur L. Tatum and Mabel Webb Tatum. Arthur L. Tatum was a chemistry professor who by 1925 was a professor of pharmacology at the University of Wisconsin at Madison. Edward Lawrie Tatum attended college at the University of Chicago for two years before transferring to the University of Wisconsin–Madison, where he received his BA in 1931 and PhD in 1934. His dissertation was Studies in the biochemistry of microorganisms (1934). == Career == Starting in 1937, Tatum worked at Stanford University, where he began his collaboration with Beadle. He then moved to Yale University in 1945, where he mentored Lederberg. He returned to Stanford in 1948 and then joined the faculty of the Rockefeller Institute in 1957. He remained there until his death on November 5, 1975, in New York City. A heavy cigarette smoker, Tatum died of heart failure complicated by chronic emphysema. His last wife, Elsie Bergland, died in 1998. == Research == Tatum and Beadle carried out pioneering studies of biochemical mutations in Neurospora, published in 1941. Their work provided a prototype of the investigation of gene action and a new and effective experimental methodology for the analysis of mutations in biochemical pathways. Beadle and Tatum's key experiments involved exposing the bread mold Neurospora crassa to x-rays,
causing mutations. In a series of experiments, they showed that these mutations caused changes in specific enzymes involved in metabolic pathways. This led them to propose a direct link between genes and enzymatic reactions, known as the "one gene, one enzyme" hypothesis. Tatum spent his career studying biosynthetic pathways and the genetics of bacteria. An active area of research in his laboratory was the basis of tryptophan biosynthesis in Escherichia coli. Tatum and his student Joshua Lederberg showed that E. coli could share genetic information through recombination. == Awards and honors == 1959, Member, American Academy of Arts and Sciences. 1958, Nobel Prize in Physiology or Medicine (with George Beadle and Joshua Lederberg) for showing that genes control individual steps in metabolism. 1957, Member, American Philosophical Society. 1952, Member, United States National Academy of Sciences. == References ==
The molecular formula C8H13NO2 (molar mass: 155.19 g/mol, exact mass: 155.0946 u) may refer to: Arecoline Bemegride Scopine Retronecine
The MANIAC is a 2023 novel by Chilean author Benjamín Labatut, written in English. It is a fictionalised biography of polymath John von Neumann, whom Labatut calls "the smartest human being of the 20th century". The book focuses on von Neumann, but is also about physicist Paul Ehrenfest, the history of artificial intelligence, and Lee Sedol's Go match against AlphaGo. The book received mostly positive reviews from critics. == Background == John von Neumann was a Jewish Hungarian-born polymath who was a prodigy from early childhood. Von Neumann worked in multiple fields of science, theoretical (mathematical foundations of quantum mechanics, game theory, cellular automata) and applied (nuclear weapons research during the Manhattan Project in World War II, the computer architecture later named after him, and many other subjects). Labatut calls him "the smartest human being of the 20th century". The title of the book is derived from an early computer based on the von Neumann architecture, built after the war at the Los Alamos laboratory, called MANIAC I. Benjamín Labatut is a Chilean author known for his 2020 book When We Cease to Understand the World, a collection of fictionalised stories about famous scientists that received positive reviews and was translated into multiple languages from Spanish. The MANIAC is Labatut's first book written in English. In an interview, Labatut said he prefers to write in English: English is my preferred form of thought. ... English is the language I do most if not all my reading in. And it is a far better language than Spanish, in so many ways. Writing "clean" prose in Spanish is almost impossible, because so many of its sounds clash. Borges said that he found English "a far finer language than Spanish" because it's both Germanic and Latin; because of its wonderful vocabulary ("Regal is not exactly
the same thing as saying kingly," he explained); because of its physicality; and because you can do almost anything with verbs and prepositions. Labatut was inspired to write The MANIAC by George Dyson's book Turing's Cathedral. == Synopsis == The book has three chapters. The first chapter, "Paul or the Discovery of the Irrational", written in the third person, is about physicist Paul Ehrenfest. The chapter opens with Ehrenfest shooting dead his son Vassily, who suffered from Down syndrome, and then himself. It then recounts Ehrenfest's life story, describing his relationships with his wife Tatyana, his mistress Nelly Meyjes, and his eminent physicist colleagues. It chronicles his descent into despair and depression over his marriage's disintegration, the advent of quantum mechanics, and the direction Europe was heading in with the Nazi Party's rise to power in Germany, looping back to the initial scene of the chapter. The second chapter, "John or the Mad Dreams of Reason", is about John von Neumann, and is written as a series of interviews with his family members, wives, friends, and colleagues, each in a distinctive voice. It is divided into three parts. Part I, "The Limits of Logic", is about his early life, as told by von Neumann's childhood friend Eugene Wigner, mother Margit Kann, brother Nicholas von Neumann, first wife Mariette Kövesi, and scientists Theodore von Kármán, George Pólya, and Gábor Szegő. It climaxes with von Neumann's participation in David Hilbert's program to create a logical basis for mathematics based on a consistent set of axioms, a quest ultimately scuppered by Kurt Gödel. Part II, "The Delicate Balance of Terror", discusses von Neumann's role in the Manhattan Project (as told by Richard Feynman); his development of game theory and the doctrine of mutual assured destruction (MAD) (as told by Oskar Morgenstern); and his
creation of the MANIAC I computer and the von Neumann architecture (as told by Julian Bigelow). In Part III, "Ghosts in the Machine", Sydney Brenner discusses von Neumann's contributions to biology, his theoretical work on self-replicating and self-repairing machines, and his vision of Von Neumann probes exploring the universe. Nils Aall Barricelli talks about his ideas of digital life and his disagreements with von Neumann. Von Neumann's wife Klára Dán, daughter Marina, and Wigner talk about his final years, personal life, and death. The third chapter, "Lee or The Delusions of Artificial Intelligence", is about Lee Sedol's Go match against AlphaGo. The narrative reverts to the third person. The chapter also tells the story of Demis Hassabis, a chess prodigy in childhood who decided to work on artificial intelligence and founded DeepMind, the company behind AlphaGo. It points to the future, as artificial intelligence's growing capabilities outpace the human mind. The book ends with Lee Sedol's retirement from Go and a new version of DeepMind's program, AlphaZero, which did not train on human games but nevertheless became the strongest player in Go, chess, and shogi. == Reception == The book received mostly positive reviews. In his review for The New York Times, Tom McCarthy noted the ambiguity of genre: "At its best, as in the stunning opening sequence reconstructing the murder-suicide of the physicist Paul Ehrenfest and his disabled son, or in the final section's gripping account of a computer defeating the world's best human Go player, you just throw up your hands and think, Who cares what discourse label we assign this stuff? It's great." Becca Rothfeld of the Washington Post praised the book, writing that it is "Labatut's latest virtuosic effort, at once a historical novel and a philosophical foray": "The MANIAC is a work of dark,
eerie and singular beauty." She noted that the book "can also be difficult to read" because of its unusual narrative structure: "The book is narrated by a cluttered polyphony of characters, among them both of von Neumann's wives and a number of his teachers and colleagues. ... Like von Neumann, The MANIAC strives to adopt the impartial standpoint of the universe." Killian Fox of The Guardian sees the book as a "darkly fascinating novel", and notes Labatut's "impressive dexterity, unpicking complex ideas in long, elegant sentences that propel us forward at speed (this is his first book written in English). Even in the more feverish passages, when yet another great mind succumbs to madness, haunted by the spectres they've helped unleash on the world, he feels in full control of his material." Sam Byers of The Guardian praises the book and the author's style, writing that "The opening chapter of Benjamín Labatut's second novel is such a perfect distillation of his technique that it could serve as a manifesto," and that "Readers ... will recognise the sense of breathlessness his best writing can evoke. Seemingly loosened from the laws of physics they describe, his sentences range freely through time and space, connecting not only characters and events, but the delicate tissue of intellectual history, often with a lightness of touch that belies their underlying complexity." He writes on the narrative structure: "Through a cascade of staccato chapters, an ensemble of narrators offer their piecemeal insights." Byers adds that "a brilliant novel is not quite what we end up with" and sees the problem in the "diffusion": "Labatut simply spreads himself too thin. Too many years in too few pages; too many voices with far too little to distinguish them. Initially intriguing, the bite-size monologues quickly come to feel inadequate." Some reviewers did not see
the book as a biography. In an essay for the Cleveland Review of Books, Ben Cosman juxtaposes the book with Christopher Nolan's biopic Oppenheimer, and writes that it "follows the development of artificial intelligence—first as an idea at the beginning of the twentieth century, and then as a practicality at the beginning of the twenty-first—through the lives of three men who faced it." He also compared the book's structure to "witness testimony". Another reviewer called the book "perfect for anyone thirsting for more nuclear anxiety after watching Oppenheimer". Garrett Biggs of the Chicago Review of Books writes of the book's style: "Labatut writes about scientists the way Roberto Bolaño writes about poets. They are near mythical figures, captured at the corner of the novel's eye. They become historical in the most fraught sense of the term: subject to rumor and speculation and, eventually, the novel's form inflates their personas into something so large they can only be understood as narrative, never known in any objective capacity." Biggs criticises the last chapter: "the story of artificial intelligence has yet to be written. And so when Labatut's narration editorializes about artificial intelligence as 'a future that inspires hope and horror,' The MANIAC disassembles as a novel and starts to sound like a stale thinkpiece. AlphaGo might represent the first glimmer of a true artificial intelligence, as Labatut suggests. It also could one day be considered nothing more than a souped-up cousin to IBM's DeepBlue. We just don't know yet." But, Biggs writes, that doesn't "obscure Labatut's own brilliance. His prose is crisp, and he is able to render momentum where many writers might fail." Ed Simon of the Los Angeles Review of Books sees the book not just as a biography of von Neumann, but as a history of artificial intelligence. Simon
connects the story of AI and von Neumann to an old story of "manufactured men, from Rabbi Judah Loew ben Bezalel's golem to Mary Shelley's monster in Frankenstein ... humanity was haunted by the possibility of artificial intelligence before it ever existed". Simon notes von Neumann's obscurity: If it's true that von Neumann was the most brilliant human of the last century, then he has ironically dwindled into popular obscurity (a lifelong fear of his), perhaps because, as a character, he lacks the doomed romanticism of a J. Robert Oppenheimer, the paranoiac madness of his friend and foil Kurt Gödel, or even the (studied) avuncular saintliness of an Albert Einstein. Far from being an otherworldly anchorite, von Neumann was a womanizer and a drinker, a gambler and a lover of luxury cars, closer to the military brass who gave him a dozen different security clearances to work at the Atomic Energy Commission, the Office of Scientific Research and Development, the Armed Forces Special Weapons Project, and so on, than he was to the genteel physicists of his Austro-Hungarian youth. Yet, as The MANIAC makes clear, von Neumann's was the animating spirit of our current technological epoch, an individual whose dogged work ethic and almost supernatural intelligence made our current moment possible. Simon also sees in the book "Labatut's critique" of "the United States' antinomian rationalism, its instrumental, utilitarian, positivist, rapacious, anarchic logic that so often can appear as its exact opposite". Other reviewers harshly criticised the book. Alun David of The Jewish Chronicle found it "clichéd ... in both thought and style". Multiple reviewers highlighted the last chapter, about Lee Sedol's match against AlphaGo. Rothfeld called it the book's "most extraordinary segment". Biggs finds that it "falls flat when compared to the first two parts of the triptych". Simon writes,
"That the editors at Penguin were agreeable with Labatut's novel ending in an 84-page (admittedly riveting) synopsis of the strategies that underlie a complex 3,000-year-old Chinese game speaks to an admirable conception of what novels can do, the way that they can be pushed and can in turn push our conceptions." As Labatut's first book was mostly fiction, several reviewers commented on The MANIAC's verifiability. Fox writes, "The details largely conform to what you'll read in the history books, but Labatut affords himself considerable latitude to imagine real lives from the inside." Byers calls it "a semi-fictional oral history"; Cosman writes, "Reading Labatut's nonfiction novels is an exercise in figuring out what is true, what isn't, and how much it matters either way." Simon writes, "The novel is ostensibly historical fiction ... but The MANIAC is actually something far rarer and more unusual—a bona fide experimental novel of ideas that has emerged from a publishing ecosystem that all too often only rewards dry literary fiction or lowest-common-denominator genre fiction. ... The MANIAC's genre is better understood as historical creative nonfiction, philosophical argument, or some conjunction of the two". Labatut himself calls his books "fiction", but says that "All the science ... is true. Yet everything a writer writes is fiction." == References ==
Vanishing-dimensions theory is a particle-physics theory suggesting that systems at higher energy have fewer dimensions. For example, the theory implies that the Universe had fewer dimensions just after the Big Bang, when its energy was high; the number of dimensions may then have increased as the system cooled, and the Universe may gain more dimensions with time. There could originally have been only one spatial dimension, with two dimensions in total: one time dimension and one space dimension. When there were only two dimensions, the Universe lacked gravitational degrees of freedom. The theory also ties smaller systems to fewer dimensions, with the expansion of the Universe suggested as the phenomenon driving the growth in the number of dimensions over time, implying that systems on larger scales have more dimensions. In 2011, Dejan Stojkovic of the University at Buffalo and Jonas Mureika of Loyola Marymount University described how a Laser Interferometer Space Antenna system, intended to detect gravitational waves, could test the vanishing-dimensions theory by detecting a maximum frequency above which gravitational waves cannot be observed. The vanishing-dimensions theory has been seen as an explanation of the cosmological constant problem: a fifth dimension would answer the question of the energy density required to maintain the constant. == References == == Further reading ==
The International Institute of Agriculture (IIA) was the first organization to systematically produce and exchange global data on crops, cultivated land, and trade flows. In the late 19th century, as agricultural commodities increasingly entered global markets, demand grew for worldwide data on production, stocks, and consumption. Private actors and services attempted to step in and address the gaps in the agricultural field, but could not close them. As a result, governments created the IIA to provide public international statistics. Its reports on crops and evaluations of international production became crucial tools for analyzing the global economy and worldwide agricultural data. After World War II, it was replaced by the Food and Agriculture Organization (FAO) of the United Nations. == History == The IIA was founded in Rome in 1905 by the King of Italy, Victor Emmanuel III, with the intent of creating a clearinghouse for the collection of agricultural statistics, so that countries' agricultural output could be compared. The idea for such an institute came in 1904 to David Lubin of Sacramento, California, and the creation of the IIA was primarily due to his efforts. His project found favor with the king of Italy, who was convinced to back the creation of an international organization where governments could share new methods and strategies for agriculture and technological advancements in the field. The king gave a building in Rome and an annual income of $60,000, and called the first congress in 1906, which delegates from 40 countries attended. At the congress, a treaty was formed making the institute a permanent organization, defining its scope and activities, and setting international standards in agricultural practices and administration. The IIA's most immediate concern was to create
an international centralization initiative to promote the welfare of farmers. Its initial task consisted of gathering annual crop production statistics from nations worldwide and presenting the data as a "single numeric statement" indicating the year's anticipated harvests as a percentage of the previous year's harvests (an index of 104, for example, would indicate a harvest 4% larger than the year before). In 1930, the IIA published the first world agricultural census. After World War II, both its assets and its mandate were handed over to the Food and Agriculture Organization (FAO) of the United Nations. == Main Actors - Italian Government == The idea for the establishment of an international chamber of agriculture was launched by a campaign from the Italian government, which was looking to unify agriculturalists beyond national borders. The main driving forces of the global agricultural revolution of the second half of the 19th century were factors such as growth in global trade, falling transportation costs, new crops and natural resources, and new opportunities arising from the overseas circulation of labor. All of these components influenced rural transformation and economic expansion, which ultimately created an urgent need to organize these new dynamics. With the creation of the IIA in mind, Italy sent diplomats and political experts to mobilize foreign governments in support of the organization's initiative, reinforcing its influence in international relations. Eventually, Italian elites and prominent economists came together, using their transnational networks and expertise to prepare the official report submitted to the Italian government for final approval of the organization. By promoting and ensuring the success of the newly founded international organization, the report won approval, prompting a shift in communication within the agricultural sector and strengthening the field of foreign policy. == Main Actors - The Ottoman Empire == One of the major contributors who played a significant role in the establishment
of the IIA was the Ottoman Empire. Representatives from this region were among the most active participants in the institute's early activities. This participation allowed Ottoman political officials to assert their position in creating a global standard for data collection, while also broadening domestic activity and making statistical information on the Ottoman economy publicly available. The Empire's contributions to the formation of the IIA were influential for its politics, since it had an early vote and a firm stance in the institute's international standard-setting process. Ultimately, the involvement of the Ottoman Empire not only helped build the founding principles of the IIA, but also connected it to a global network of agricultural bureaucrats, making it easier to exchange and compare data on the international scale. == Administration == The administration of the IIA was vested in the general assembly of delegates from affiliated countries, meeting every two years, and in a permanent executive committee, on which there was one representative from each country. This permanent committee had direct charge of the IIA. The general officers were the president (also chairman of the permanent committee), the vice president and the secretary general. The work of the institute was divided among four bureaus: The Bureau of the secretary general had charge of the personnel, financial and other routine business, the building and its equipment, the printing and distribution of publications, the library and general bibliographical work, and, as a more recent service, the preparation and publication of an annual compilation of agricultural legislation in the different countries of the world. The Bureau of general statistics collected, collated and published statistics of production and commerce in agricultural products, both animal and vegetable, throughout the world. The Bureau of agricultural intelligence and plant diseases collected
and published information regarding the progress of scientific and experimental investigations and practical experience in agriculture throughout the world and, as a branch of this work, gave special attention to the diseases of plants and to entomology. The Bureau of economic and social institutions collected and published statistics and general information regarding agricultural co-operation, insurance and credit, together with other matters relating to the economic and social organization of rural communities. The annual budget of the institute was $250,000 (c. 1915), contributed by the adhering governments on the basis of a number of units assigned to each country. == Publications == Those publications of the IIA which had a bearing on the formation of the prices of staples (such as crop reports and data on exports, imports and stocks) were based exclusively on official information, supplied directly to the institute by the adhering governments. Other publications were produced from (a) information officially communicated by the governments, (b) original articles contributed by eminent authorities designated by the adhering governments, and (c) excerpts and abstracts of articles translated from the 2,225 official and unofficial periodical publications of the world received by the IIA. The IIA printed and published two annuals, three monthly bulletins and one weekly bulletin, together with a considerable number of monographs on special subjects. The annuals dealt with agricultural statistics and legislation, respectively. The monthly bulletins covered (a) agricultural statistics, (b) agricultural intelligence and diseases of plants, and (c) economic and social institutions, and the weekly bulletin was bibliographical. The monthly bulletins were published in French, German, English, Spanish, Italian and Hungarian. French being the official language of the IIA, the editions in that language were paid for from the funds of the Institute. Provision for the editions in the other languages was made by the countries interested. The
|
{
"page_id": 10158236,
"source": null,
"title": "International Institute of Agriculture"
}
|
Congress of the United States made an annual appropriation of $5,000 (c. 1915) for translating and printing the English edition, the rest of the expense being borne by Great Britain and her colonies. == Library == The IIA collected a great library of agricultural literature. As the IIA became more firmly established and its value as an international clearing house of economic information was more generally recognized, it met with a constantly increasing demand for the extension of its service along many lines. After the IIA ceased operations in 1945, its library collection was transferred to the David Lubin Memorial Library (DLML) of the FAO. The DLML is open to external visitors. == See also == David Lubin Food and Agriculture Organization == References ==
|
{
"page_id": 10158236,
"source": null,
"title": "International Institute of Agriculture"
}
|
Grazing marsh is a British Isles term for flat, marshy grassland in polders. It consists of large grass fields separated by fresh or brackish ditches, and is often important for its wildlife. == History == Grazing marshes were created from medieval times by building sea walls (earth banks) across tidal mudflats and salt marsh to make polders (though the term "polder" is little used in Britain). Polders in Britain are mostly drained by gravity, rather than active pumping. The original tidal drainage channels were augmented by new ditches, and flap valves in the sea walls let water drain out at low tide and prevent the sea or tidal river from entering at high tide. Constructing polders in this way is called inning or reclaiming from the sea. Grazing marshes have been made in most lowland estuaries in Britain, often leaving only the river channel and the lowest part of the estuary tidal. In a few cases (such as Newtown Harbour on the Isle of Wight, and Pagham Harbour in West Sussex) the sea walls have been breached, and the estuaries have returned to a tidal state. Grazing marshes have also been made on low-lying open coasts. Many grazing marshes were inned in stages, and the old sea walls (called counter walls) may be found marooned far from the current sea wall. Land levels on either side of a counter wall often differ by several metres. Paradoxically, the lower side is the land inned earlier, because sediment continued to build up on the side that remained tidal. == Wildlife == Wintering wildfowl are characteristic of grazing marshes, often including large flocks of Eurasian wigeon, brent goose, white-fronted goose and Bewick's swan. Many of these birds are hunted by predators such as peregrine and marsh harrier. In spring, waders such as common
|
{
"page_id": 14680221,
"source": null,
"title": "Grazing marsh"
}
|
redshank, Eurasian curlew, snipe, and northern lapwing breed. The ditches often have a range of salinity, depending on how close to the sea wall they are. The more saline ditches host specialist brackish-water plants and animals. These include, for example, the rare brackish amphipod Gammarus insensibilis and sea club-rush (Bolboschoenus maritimus). Fresher ditches may support rare animals, such as the great silver water beetle (Hydrophilus piceus) and the great raft spider (Dolomedes plantarius), and a wide range of pondweeds (Potamogeton and relatives). The grassland vegetation usually has a fairly small number of species, but those present are often scarce elsewhere, such as sea arrowgrass (Triglochin maritimum), divided sedge (Carex divisa) and strawberry clover (Trifolium fragiferum). == Conservation == Many grazing marshes have been converted into arable land, often using pumped drainage to lower the water levels enough to grow crops, though most are used for grazing cattle. The low ditch levels and agricultural runoff combine to remove much of the aquatic wildlife, although the arable fields may still be used by some wintering wildfowl. Some areas of grazing marsh and other polder land have been used to recreate tidal habitats by a process of managed retreat. Many of the larger areas of grazing marsh bear nature conservation designations, including Site of Special Scientific Interest, Special Protection Area, Special Area of Conservation and Ramsar Site. == Examples of grazing marsh == Pevensey Levels in East Sussex Romney Marsh in Kent and East Sussex The Somerset Levels The Thames Estuary marshes in Kent and Essex Marshes along the River Wantsum in Kent—formerly the Wantsum Channel separating the Isle of Thanet from the mainland Moss Valley, Derbyshire == References ==
|
{
"page_id": 14680221,
"source": null,
"title": "Grazing marsh"
}
|
The molecular formula C10H13N (molar mass: 147.219 g/mol, exact mass: 147.1048 u) may refer to: Actinidine 2-Aminotetralin (2-AT or THN) NM-2-AI, or N-methyl-2-aminoindane Phenylbutenamine
|
{
"page_id": 23920796,
"source": null,
"title": "C10H13N"
}
|
The molecular formula C16H10O6 (molar mass: 298.24 g/mol, exact mass: 298.0477 u) may refer to: Irilone Fallacinal
|
{
"page_id": 26345629,
"source": null,
"title": "C16H10O6"
}
|
Count Noble (August 1, 1879 – January 20, 1891) was an English Setter dog. He was so well known that when he died in 1891, The New York Times ran an obituary. He was popularly known as the "$10,000 hunting dog" and was described as a "national symbol of what was great in bird dogs." His owner, Captain Benjamin Frederick Wilson, was a banker and coal barge operator. While Count Noble was well known for his hunting prowess and show skills, it was his prepotency, the ability to pass on his best traits to his progeny, that made him most famous. In 1880, he won the national amateur Derby dog show. He was so famous that owners of other setters refused to compete in shows with him, and other shows offered special inducements in order to encourage his owner to compete. Writing in 1904, Joseph A. Graham gave this description of Count Noble: "A large white-black-tan dog, long in the body and not considered a well proportioned setter. He weighed sixty pounds." A portrait of Count Noble by Edmund Osthaus hangs in the first-floor reading room of the Duquesne Club. Following his death, his preserved body was displayed in the Carnegie Museum of Natural History in a scene showing him hunting quail. The display was later moved to The National Bird Dog Museum in Tennessee. In 2011, American Kennel Club judge Richard LeBeau began an effort to raise $2,000 to establish a historical marker honoring Count Noble outside Osborne Elementary School, which stands on the site of Wilson's former home. == References == == External links == A Pedigree of Count Noble Complete Pedigree of Count Noble including Mating Partners and Offspring
|
{
"page_id": 30605472,
"source": null,
"title": "Count Noble"
}
|
The molecular formula C13H9NO (molar mass: 195.22 g/mol, exact mass: 195.0684 u) may refer to: Acridone CR gas, or dibenzoxazepine
|
{
"page_id": 23920799,
"source": null,
"title": "C13H9NO"
}
|
In molecular biology, mir-3 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. == See also == MicroRNA == References == == Further reading == == External links == Page for mir-3 microRNA precursor family at Rfam
|
{
"page_id": 36372646,
"source": null,
"title": "Mir-3 microRNA precursor family"
}
|
The molecular formula C7H8N2 (molar mass: 120.15 g/mol, exact mass: 120.0687 u) may refer to: Benzimidazoline Benzamidine
|
{
"page_id": 23920806,
"source": null,
"title": "C7H8N2"
}
|
The molecular formula C14H18O3 (molar mass: 234.29 g/mol) may refer to: Gyrinal, a powerful antiseptic and fish and mammal toxin Stiripentol, an anticonvulsant drug used in the treatment of epilepsy
|
{
"page_id": 23986346,
"source": null,
"title": "C14H18O3"
}
|
The molecular formula C17H26O4 (molar mass: 294.38 g/mol, exact mass: 294.1831 u) may refer to: Embelin (2,5-dihydroxy-3-undecyl-1,4-benzoquinone) Gingerol Cineromycin B
|
{
"page_id": 23986347,
"source": null,
"title": "C17H26O4"
}
|
The molecular formula C8H7NS (molar mass: 149.21 g/mol, exact mass: 149.0299 u) may refer to: Benzyl isothiocyanate (BITC) Benzothiazine 2-Mercaptoindole
|
{
"page_id": 23920812,
"source": null,
"title": "C8H7NS"
}
|
Agricultural chemistry is the study of chemistry, especially organic chemistry and biochemistry, as they relate to agriculture. Agricultural chemistry embraces the structures and chemical reactions relevant in the production, protection, and use of crops and livestock. Its applied science and technology aspects are directed towards increasing yields and improving quality, which comes with multiple advantages and disadvantages. == Agricultural and environmental chemistry == This aspect of agricultural chemistry deals with the role of molecular chemistry in agriculture as well as its negative consequences. === Plant biochemistry === Plant biochemistry encompasses the chemical reactions that occur within plants. In principle, knowledge at a molecular level informs technologies for providing food. Particular focus is on the biochemical differences between plants and other organisms as well as the differences within the plant kingdom, such as dicotyledons vs monocotyledons, gymnosperms vs angiosperms, C3- vs C4-fixers, etc. === Pesticides === Chemical materials developed to assist in the production of food, feed, and fiber include herbicides, insecticides, fungicides, and other pesticides. Pesticides are chemicals that play an important role in increasing crop yield and mitigating crop losses. They work to keep insects and other animals away from crops, allowing them to grow undisturbed, effectively regulating pests and diseases. Disadvantages of pesticides include contamination of the ground and water (see persistent organic pollutants). They may be toxic to non-target species, including birds, fish and pollinators, as well as to the farmworkers themselves. === Soil chemistry === Agricultural chemistry often aims at preserving or increasing the fertility of soil with the goals of maintaining or improving the agricultural yield and improving the quality of the crop. Soils are analyzed with attention to the inorganic matter (minerals), which comprises most of the mass of dry soil, and organic matter, which consists of living organisms, their degradation products, humic acids and fulvic acids.
|
{
"page_id": 5111982,
"source": null,
"title": "Agricultural chemistry"
}
|
Fertilizers are a major consideration. While organic fertilizers are time-honored, their use has largely been displaced by chemicals produced from mining (phosphate rock) and the Haber-Bosch process. The use of these materials dramatically increased the rate at which crops are produced, supporting the growing human population. Common fertilizers include urea, ammonium sulphate, diammonium phosphate, and calcium ammonium nitrate. == Biofuels and bio-derived materials == Agricultural chemistry encompasses the science and technology of producing not only edible crops, but feedstocks for fuels ("biofuels") and materials. Ethanol fuel is obtained by fermentation of sugars. Biodiesel is derived from fats, both animal- and plant-derived. Methane can be recovered from manure and other agricultural wastes by microbial action. Lignocellulose is a promising precursor to new materials. == Biotechnology == Biocatalysis is used to produce a number of food products. More than five billion tons of high fructose corn syrup are produced annually by the action of the immobilized enzyme glucose isomerase on corn-derived glucose. Emerging technologies are numerous, including enzymes for clarifying or debittering fruit juices. A variety of potentially useful chemicals are obtained from engineered plants. Bioremediation is a green route to biodegradation. === GMOs === Genetically modified organisms (GMOs) are plants or other living things that have been altered at the genomic level to improve the organism's characteristics. These characteristics include providing new vaccines for humans, increasing nutrient supplies, and creating unique plastics. GMOs may also be able to grow in climates that are typically not suitable for the original organism. Examples of GMOs include virus-resistant tobacco and squash, delayed-ripening tomatoes, and herbicide-resistant soybeans. GMOs came with an increased interest in using biotechnology to produce fertilizer and pesticides. Due to an increased market interest in biotechnology in the 1970s, there was
|
{
"page_id": 5111982,
"source": null,
"title": "Agricultural chemistry"
}
|
more technology and infrastructure developed, costs decreased, and research advanced. Since the early 1980s, genetically modified crops have been incorporated into agriculture. Increased biotechnological work calls for the union of biology and chemistry to produce improved crops, a main reason being the increasing amount of food needed to feed a growing population. That being said, concerns with GMOs include potential antibiotic resistance from consuming a GMO. There are also concerns about the long-term effects on the human body, since many GMOs were developed only recently. Much controversy surrounds GMOs. In the United States, all foods containing GMOs must be labeled as such. === Omics === Particularly relevant is proteomics, as protein (nutrition) guides much of agriculture. == See also == Agronomy Food science == Notes and references ==
|
{
"page_id": 5111982,
"source": null,
"title": "Agricultural chemistry"
}
|
Monte Carlo in statistical physics refers to the application of the Monte Carlo method to problems in statistical physics, or statistical mechanics. == Overview == The general motivation to use the Monte Carlo method in statistical physics is to evaluate a multivariable integral. The typical problem begins with a system whose Hamiltonian is known, which is at a given temperature, and which follows Boltzmann statistics. To obtain the mean value of some macroscopic variable, say A, the general approach is to compute, over all the phase space (PS for simplicity), the mean value of A using the Boltzmann distribution: ⟨ A ⟩ = ∫ P S A r → e − β E r → Z d r → {\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}{\frac {e^{-\beta E_{\vec {r}}}}{Z}}d{\vec {r}}} where E ( r → ) = E r → {\displaystyle E({\vec {r}})=E_{\vec {r}}} is the energy of the system for a given state defined by r → {\displaystyle {\vec {r}}} - a vector with all the degrees of freedom (for instance, for a mechanical system, r → = ( q → , p → ) {\displaystyle {\vec {r}}=\left({\vec {q}},{\vec {p}}\right)} ), β ≡ 1 / k b T {\displaystyle \beta \equiv 1/k_{b}T} , and Z = ∫ P S e − β E r → d r → {\displaystyle Z=\int _{PS}e^{-\beta E_{\vec {r}}}d{\vec {r}}} is the partition function. One possible approach to solve this multivariable integral is to exactly enumerate all possible configurations of the system and calculate averages at will. This is done in exactly solvable systems and in simulations of simple systems with few particles. In realistic systems, on the other hand, an exact enumeration can be difficult or impossible to implement. For those systems, Monte Carlo integration (not to be confused with
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
the Monte Carlo method, which is used to simulate molecular chains) is generally employed. The main motivation for its use is the fact that, with Monte Carlo integration, the error goes as 1 / N {\displaystyle 1/{\sqrt {N}}} , independently of the dimension of the integral. Another important concept related to Monte Carlo integration is importance sampling, a technique that improves the computational time of the simulation. In the following sections, the general implementation of Monte Carlo integration for solving this kind of problem is discussed. == Importance sampling == An estimation, under Monte Carlo integration, of an integral defined as ⟨ A ⟩ = ∫ P S A r → e − β E r → d r → / Z {\displaystyle \langle A\rangle =\int _{PS}A_{\vec {r}}e^{-\beta E_{\vec {r}}}d{\vec {r}}/Z} is ⟨ A ⟩ ≃ 1 N ∑ i = 1 N A r → i e − β E r → i / Z {\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}e^{-\beta E_{{\vec {r}}_{i}}}/Z} where r → i {\displaystyle {\vec {r}}_{i}} are uniformly obtained from all the phase space (PS) and N is the number of sampling points (or function evaluations); here the phase-space volume is taken as unity, so that uniform sampling has unit density. Some zones of the phase space are generally more important to the mean of the variable A {\displaystyle A} than others. In particular, those that have the value of e − β E r → i {\displaystyle e^{-\beta E_{{\vec {r}}_{i}}}} sufficiently high when compared to the rest of the energy spectrum are the most relevant for the integral. Using this fact, the natural question to ask is: is it possible to choose, with more frequency, the states that are known to be more relevant to the integral? The answer is yes, using the importance sampling technique.
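The error scaling can be checked numerically. Below is a minimal Python sketch (an illustration, not from the source): it assumes a toy one-dimensional "phase space" x in [-5, 5] with energy E(x) = x², observable A(x) = x², and β = 1, and uses a ratio estimator so that the phase-space volume cancels. The exact Boltzmann average is ⟨A⟩ = 1/(2β) = 0.5, and the statistical error shrinks roughly as 1/√N.
```python
import numpy as np

# Plain (uniform-sampling) Monte Carlo estimate of a Boltzmann average.
# All specifics here are illustrative assumptions: one degree of freedom
# x in [-L, L], energy E(x) = x^2, observable A(x) = x^2, beta = 1.
rng = np.random.default_rng(0)
L, beta = 5.0, 1.0

def mc_estimate(n_samples):
    x = rng.uniform(-L, L, n_samples)   # uniform samples over the "phase space"
    w = np.exp(-beta * x**2)            # Boltzmann weights e^{-beta E}
    # Ratio estimator: numerator and denominator (Z) use the same samples,
    # so the phase-space volume factor cancels.
    return np.sum(x**2 * w) / np.sum(w)

# Exact value is <x^2> = 1/(2*beta) = 0.5 (the [-L, L] cutoff is negligible);
# the fluctuations around it shrink roughly as 1/sqrt(N).
for n in (10**2, 10**4, 10**6):
    print(n, mc_estimate(n))
```
Note how, at low temperature (large β), most uniform samples land where the weight w is negligible and contribute almost nothing; this waste is exactly what importance sampling addresses.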
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
Let us assume p ( r → ) {\displaystyle p({\vec {r}})} is a distribution that chooses the states that are known to be more relevant to the integral. The mean value of A {\displaystyle A} can be rewritten by inserting a factor p ( r → ) p − 1 ( r → ) = 1 {\displaystyle p({\vec {r}})p^{-1}({\vec {r}})=1} : ⟨ A ⟩ = ∫ P S p ( r → ) p − 1 ( r → ) A r → e − β E r → / Z d r → {\displaystyle \langle A\rangle =\int _{PS}p({\vec {r}})\,p^{-1}({\vec {r}})A_{\vec {r}}e^{-\beta E_{\vec {r}}}/Z\,d{\vec {r}}} , so that states can be drawn from p ( r → ) {\displaystyle p({\vec {r}})} and reweighted. Writing A r → ∗ {\displaystyle A_{\vec {r}}^{*}} for the observable evaluated at the states sampled according to the importance probability p ( r → ) {\displaystyle p({\vec {r}})} , this integral can be estimated by ⟨ A ⟩ ≃ 1 N ∑ i = 1 N p − 1 ( r → i ) A r → i ∗ e − β E r → i / Z {\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}p^{-1}({\vec {r}}_{i})A_{{\vec {r}}_{i}}^{*}e^{-\beta E_{{\vec {r}}_{i}}}/Z} where r → i {\displaystyle {\vec {r}}_{i}} are now randomly generated using the p ( r → ) {\displaystyle p({\vec {r}})} distribution. Since most of the time it is not easy to find a way of generating states with a given distribution, the Metropolis algorithm must be used. === Canonical === Because it is known that the most likely states are those that maximize the Boltzmann distribution, a good distribution to choose for the importance sampling is the Boltzmann, or canonical, distribution. Let p ( r → ) = e − β E r → Z {\displaystyle p({\vec {r}})={\frac {e^{-\beta
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
E_{\vec {r}}}}{Z}}} be the distribution to use. Substituting into the previous sum, the weights cancel and ⟨ A ⟩ ≃ 1 N ∑ i = 1 N A r → i ∗ {\displaystyle \langle A\rangle \simeq {\frac {1}{N}}\sum _{i=1}^{N}A_{{\vec {r}}_{i}}^{*}} . So, the procedure to obtain the mean value of a given variable with the canonical distribution is to use the Metropolis algorithm to generate states given by the distribution p ( r → ) {\displaystyle p({\vec {r}})} and average A r → ∗ {\displaystyle A_{\vec {r}}^{*}} over them. One important issue must be considered when using the Metropolis algorithm with the canonical distribution: when performing a given measurement, i.e. a realization of r → i {\displaystyle {\vec {r}}_{i}} , one must ensure that the realization is not correlated with the previous state of the system (otherwise the states are not being "randomly" generated). On systems with relevant energy gaps, this is the major drawback of the canonical distribution, because the time needed for the system to de-correlate from the previous state can tend to infinity.
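The cancellation of the weights can be verified on a toy example. The sketch below is an illustration, not from the source: it assumes a tiny discrete "phase space" of four states with made-up energies and observable values, draws states directly from the exact Boltzmann probabilities, and checks that the plain average of the sampled observable reproduces the enumerated ⟨A⟩.
```python
import numpy as np

# Toy check that sampling from the Boltzmann distribution reduces <A> to a
# plain average. The four states, their energies and observable values are
# made-up assumptions for illustration only.
rng = np.random.default_rng(1)
beta = 1.0
energies = np.array([0.0, 0.5, 1.0, 2.0])
A_values = np.array([1.0, 2.0, 3.0, 4.0])

p = np.exp(-beta * energies)
p /= p.sum()                                  # exact Boltzmann probabilities

exact = np.sum(A_values * p)                  # <A> by direct enumeration
idx = rng.choice(len(energies), size=100_000, p=p)
estimate = A_values[idx].mean()               # (1/N) * sum of A at sampled states
print(exact, estimate)                        # the two agree within noise
```
In realistic systems the probabilities cannot be enumerated like this, which is why the Metropolis algorithm is used to produce Boltzmann-distributed states instead.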
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
=== Multi-canonical === As stated before, the canonical approach has a major drawback, which becomes relevant in most of the systems that use Monte Carlo integration. For those systems with "rough energy landscapes", the multicanonical approach can be used. The multicanonical approach uses a different choice for importance sampling: p ( r → ) = 1 Ω ( E r → ) {\displaystyle p({\vec {r}})={\frac {1}{\Omega (E_{\vec {r}})}}} where Ω ( E ) {\displaystyle \Omega (E)} is the density of states of the system. The major advantage of this choice is that the energy histogram is flat, i.e. the generated states are equally distributed in energy. This means that, when using the Metropolis algorithm, the simulation does not see the "rough energy landscape", because every energy is treated equally. The major drawback of this choice is the fact that, in most systems, Ω ( E ) {\displaystyle \Omega (E)} is unknown. To overcome this, the Wang and Landau algorithm is normally used to obtain the DOS during the simulation. Note that after the DOS is known, the mean values of every variable can be calculated for every temperature, since the generation of states does not depend on β {\displaystyle \beta } . == Implementation == In this section, the implementation will focus on the Ising model. Let us consider a two-dimensional spin network, with L spins (lattice sites) on each side. There are naturally N = L 2 {\displaystyle N=L^{2}} spins, and so the phase space is discrete and is characterized by N spins, r → = ( σ 1 , σ 2 , . . . , σ N ) {\displaystyle {\vec {r}}=(\sigma _{1},\sigma _{2},...,\sigma _{N})} where σ i ∈ { − 1 , 1 } {\displaystyle \sigma _{i}\in \{-1,1\}} is the spin of each lattice site. The system's energy is given by E ( r → ) = ∑ i = 1 N ∑ j ∈ v i z i ( 1 − J i j σ i σ j ) {\displaystyle E({\vec {r}})=\sum _{i=1}^{N}\sum _{j\in viz_{i}}(1-J_{ij}\sigma _{i}\sigma _{j})} , where v i z i {\displaystyle viz_{i}} is the set of nearest-neighbor spins of i and J is the interaction matrix (for a ferromagnetic Ising model, J is the identity matrix). In this example, the objective is to obtain ⟨ M ⟩ {\displaystyle \langle M\rangle } and ⟨ M 2 ⟩ {\displaystyle \langle M^{2}\rangle } (for instance, to obtain the magnetic susceptibility of the system); it is straightforward to generalize to other observables. According to the definition, M ( r → ) = ∑ i = 1 N σ i {\displaystyle M({\vec {r}})=\sum _{i=1}^{N}\sigma _{i}} .
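This model can be set up in a few lines. The following Python sketch encodes the ferromagnetic case (J = identity), with the energy and magnetization defined exactly as in the formulas above; the periodic boundary conditions and the lattice size are assumptions of the sketch, not fixed by the text.
```python
import numpy as np

# Sketch of the 2D Ising setup above, for the ferromagnetic case J = identity.
# Periodic boundary conditions and the lattice size are assumptions.
rng = np.random.default_rng(2)
L = 16                                    # lattice side; N = L*L spins
spins = rng.choice([-1, 1], size=(L, L))  # arbitrary initial state

def magnetization(s):
    # M(r) = sum_i sigma_i
    return int(s.sum())

def neighbor_sum(s, i, j):
    # Sum of the four nearest-neighbor spins of site (i, j).
    n = s.shape[0]
    return (s[(i + 1) % n, j] + s[(i - 1) % n, j]
            + s[i, (j + 1) % n] + s[i, (j - 1) % n])

def energy(s):
    # E(r) = sum_i sum_{j in viz_i} (1 - sigma_i * sigma_j); each of the four
    # neighbors of each site contributes one term, matching the double sum.
    n = s.shape[0]
    return float(sum(4 - s[i, j] * neighbor_sum(s, i, j)
                     for i in range(n) for j in range(n)))
```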
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
=== Canonical === First, the system must be initialized: let β = 1 / k b T {\displaystyle \beta =1/k_{b}T} be the system's inverse temperature and initialize the system with an initial state (which can be anything, since the final result should not depend on it). With the canonical choice, the Metropolis method must be employed. Because there is no unique way of choosing which trial state is to be picked, one can particularize and choose to try to flip one spin at a time. This choice is usually called single spin flip. The following steps are to be made to perform a single measurement. step 1: generate a state that follows the p ( r → ) {\displaystyle p({\vec {r}})} distribution: step 1.1: perform TT times the following iteration: step 1.1.1: pick a lattice site at random (with probability 1/N), which will be called i, with spin σ i {\displaystyle \sigma _{i}} . step 1.1.2: pick a random number α ∈ [ 0 , 1 ] {\displaystyle \alpha \in [0,1]} . step 1.1.3: calculate the energy change of trying to flip the spin i: Δ E = 2 σ i ∑ j ∈ v i z i σ j {\displaystyle \Delta E=2\sigma _{i}\sum _{j\in viz_{i}}\sigma _{j}} and its magnetization change: Δ M = − 2 σ i {\displaystyle \Delta M=-2\sigma _{i}} . step 1.1.4: if α < min ( 1 , e − β Δ E ) {\displaystyle \alpha <\min(1,e^{-\beta \Delta E})} , flip the spin ( σ i = − σ i {\displaystyle \sigma _{i}=-\sigma _{i}} ), otherwise, don't. step 1.1.5: in case the spin flipped, update the macroscopic variables: E = E + Δ E {\displaystyle E=E+\Delta E} , M = M + Δ M {\displaystyle M=M+\Delta M} .
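A direct transcription of these steps into the toy setup sketched earlier is given below; the values of TT, β and the number of measurements are arbitrary assumed parameters, not prescriptions from the text.
```python
# Single-spin-flip Metropolis, following steps 1 and 2 above.
def metropolis_iterations(s, beta, tt):
    n = s.shape[0]
    for _ in range(tt):
        i, j = rng.integers(n), rng.integers(n)    # step 1.1.1: random site
        alpha = rng.random()                       # step 1.1.2: random number
        dE = 2 * s[i, j] * neighbor_sum(s, i, j)   # step 1.1.3: energy change
        if alpha < min(1.0, np.exp(-beta * dE)):   # step 1.1.4: accept test
            s[i, j] = -s[i, j]                     # flip the spin

def measure(s, beta, tt, n_measurements):
    m_samples = []
    for _ in range(n_measurements):
        metropolis_iterations(s, beta, tt)         # step 1: decorrelate
        m_samples.append(magnetization(s))         # step 2: record M
    m = np.array(m_samples, dtype=float)
    return m.mean(), (m**2).mean()                 # estimates of <M> and <M^2>

# Example usage (assumed parameters):
# mean_m, mean_m2 = measure(spins, beta=0.3, tt=10_000, n_measurements=100)
```
Note that the sketch only tracks M; E can be carried along identically via step 1.1.5 if the energy is also of interest.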
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
After TT iterations, the system is considered to be de-correlated from its previous state, which means that, at this moment, the probability of the system being in a given state follows the Boltzmann distribution, which is the objective proposed by this method. step 2: perform the measurement: step 2.1: save, in a histogram, the values of M and M². As a final note, one should note that TT is not easy to estimate, because it is not easy to say when the system has de-correlated from the previous state. To surpass this point, one generally does not use a fixed TT, but takes TT as a tunneling time. One tunneling time is defined as the number of step-1 iterations the system needs to make to go from the minimum of its energy to the maximum of its energy and return. A major drawback of this method with the single-spin-flip choice in systems like the Ising model is that the tunneling time scales as a power law, N 2 + z {\displaystyle N^{2+z}} where z is greater than 0.5, a phenomenon known as critical slowing down. == Applicability == The method thus neglects dynamics, which can be a major drawback or a great advantage. Indeed, the method can only be applied to static quantities, but the freedom to choose moves makes it very flexible. An additional advantage is that some systems, such as the Ising model, lack a dynamical description and are only defined by an energy prescription; for these, the Monte Carlo approach is the only one feasible. == Generalizations == The great success of this method in statistical mechanics has led to various generalizations, such as the method of simulated annealing for optimization, in which a fictitious temperature is introduced and then gradually lowered.
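As a closing illustration (not from the source), simulated annealing can reuse the Metropolis kernel from the sketch above unchanged; all that is added is a schedule of increasing β, i.e. a fictitious temperature being lowered.
```python
# Simulated annealing built from the same Metropolis kernel. The schedule
# values below are assumptions for illustration only.
def simulated_annealing(s, betas, tt):
    for beta in betas:
        metropolis_iterations(s, beta, tt)
    return s   # the final state approximates a low-energy configuration

# Example usage (assumed schedule):
# simulated_annealing(spins, betas=np.linspace(0.1, 2.0, 50), tt=5_000)
```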
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
== See also == Monte Carlo integration Metropolis algorithm Importance sampling Quantum Monte Carlo Monte Carlo molecular modeling == References == Allen, M.P. & Tildesley, D.J. (1987). Computer Simulation of Liquids. Oxford University Press. ISBN 0-19-855645-4. Frenkel, D. & Smit, B. (2001). Understanding Molecular Simulation. Academic Press. ISBN 0-12-267351-4. Binder, K. & Heermann, D.W. (2002). Monte Carlo Simulation in Statistical Physics. An Introduction (4th ed.). Springer. ISBN 3-540-43221-3. Spanier, Jerome; Gelbard, Ely M. (2008). "Importance Sampling". Monte Carlo Principles and Neutron Transport Problems. Dover. pp. 110–124. ISBN 978-0-486-46293-6.
|
{
"page_id": 8519857,
"source": null,
"title": "Monte Carlo method in statistical mechanics"
}
|
Ungiminorine is an acetylcholinesterase inhibitor isolated from Narcissus. == References ==
|
{
"page_id": 40632497,
"source": null,
"title": "Ungiminorine"
}
|
In molecular biology, mir-5 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. mir-5 has been implicated in the regulation of VEGF in an experiment where a plasmid containing a cluster of mir-5, mir-10 and mir-7 was shown to down-regulate VEGF by 75%. mir-5 in chicken has been implicated in targeting genes involved in metabolism. == See also == MicroRNA == External links == Page for mir-5 microRNA precursor family at Rfam
|
{
"page_id": 36372665,
"source": null,
"title": "Mir-5 microRNA precursor family"
}
|
Annihilation is a 2014 novel by Jeff VanderMeer. It is the first entry in VanderMeer's Southern Reach Series and follows a team of four women (a biologist, an anthropologist, a psychologist, and a surveyor) who set out into an area known as Area X, which is abandoned and cut off from the rest of civilization; they are the twelfth expedition, all previous expeditions having fallen apart due to disappearances, suicides, aggressive cancers, and mental trauma. Annihilation won the 2014 Nebula Award for Best Novel and the 2014 Shirley Jackson Award for best novel. A film loosely based on the novel was released by Paramount Pictures in 2018. == Background == VanderMeer's inspiration for Annihilation and the Southern Reach Series came from a 14-mile (23 km) hike through St. Marks National Wildlife Refuge in Florida. The 2010 Deepwater Horizon disaster was also an inspiration; as oil gushed into the Gulf of Mexico, he began reading reports suggesting the broken well might not be capped for decades. Many of the animals and plants that VanderMeer saw on such hikes over the 17 years before he wrote the book are featured in it. He has said that someday he hopes to do a "Weird Nature" anthology as well. In March 2014, as part of a piece on VanderMeer and Annihilation, he visited the St. Marks Lighthouse that inspired one of the settings in Annihilation. == Plot summary == Four armed and unnamed women—a biologist, an anthropologist, a psychologist, and a military-trained surveyor—cross the border into Area X, an unspecified coastal location that has been closed to the public for three decades. They believe that they are the twelfth expedition into Area X. The story is narrated through the field journal of the biologist, who gradually reveals that her husband was part of the
|
{
"page_id": 42008766,
"source": null,
"title": "Annihilation (VanderMeer novel)"
}
|
previous expedition, from which he had returned home unexpectedly without the memory or ability to explain his reappearance. The other members of the eleventh expedition showed up similarly, and her husband and the others all died of cancer a few months later. In Area X during the present moment, the four women come upon an unmapped bunker with a staircase curving deep into the ground, which the biologist feels oddly inclined to think of as a "Tower". Entering, they discover cursive writing that begins "Where lies the strangling fruit that came from the hand of the sinner I shall bring forth the seeds of the dead" and extends down the Tower's stairway wall into a seemingly endless sentence. The biologist is amazed to see that the words bloom out of a fungal material along the wall, which she examines closely, accidentally inhaling some spores. She returns to the surface and notices the psychologist, the team's leader, using specific sayings to trigger hypnosis in the other women, making them more obedient and tranquil. The biologist realizes that she herself must have undergone earlier hypnotic conditioning too, but is now immune—probably an effect of the spores. She remains silent about her realization, suspicious but going along with the team. They return to their base camp and hear an ominous moaning across Area X, which repeats nightly. By the next day, the anthropologist is missing, which the psychologist ascribes to her abandoning the mission. The three others make their way back to the Tower. The psychologist guards the entrance while the surveyor and biologist descend, soon finding the mutilated corpse of the anthropologist, whom they deduce was killed by the unknown entity also responsible for the writing on the wall, which the biologist privately names the "Crawler". This implies that the psychologist lied
|
{
"page_id": 42008766,
"source": null,
"title": "Annihilation (VanderMeer novel)"
}
|
to them and, returning to the top, they find that she has disappeared. The biologist is conscious of a "brightness" growing within herself, which she attributes to the spores, and she leaves to explore a distant lighthouse; the surveyor stays behind to protect their campsite. Inside the lighthouse, the biologist discovers copious bloodstains and a large hidden pile of hundreds of past expeditions' journals, some detailing battles against a monstrous presence from the sea. She pockets an old photograph of a lighthouse keeper and the journal of her late husband. She suddenly finds the psychologist dying next to the lighthouse, having jumped from the top. The psychologist perceives the biologist as a glowing "flame", repeatedly screaming the word "annihilation" in the hopes of hypnotically inducing her to commit suicide, though the biologist remains unaffected. Before dying, the psychologist reveals that Area X's border is slowly expanding every year. Traveling back toward base camp, the biologist senses the nightly moaning creature approaching; she narrowly escapes but is shot twice by the surveyor who, like the psychologist, is terrified of her "glow". Unable to convince the surveyor she is not a threat, the biologist shoots her dead using newly enhanced instincts that have resulted from her "brightening". Miraculously, her own gunshot wounds begin to heal. The biologist analyzes plant and animal samples she has gathered under a microscope, observing nothing strange except that some of them contain human cells. She also reads her husband's journal, which explains that he and a teammate were surprised that they could never find the faraway coastal border of Area X, then returned to the lighthouse to find the rest of their expedition slaughtered. They also witnessed doppelgängers of the whole team (including themselves) walking to the Tower, which caused them to abort the mission. The biologist
|
{
"page_id": 42008766,
"source": null,
"title": "Annihilation (VanderMeer novel)"
}
|
returns to the Tower to confront the Crawler directly, meeting it on the spiral staircase and finding it almost impossible to describe; it is a rapidly shapeshifting entity of blinding lights and shattering noises, which paralyzes the biologist in an agonizing loop of losing and regaining consciousness. It tosses her down the stairs and a fuzzy white door appears before her but, after over an hour of walking towards it, it remains out of reach. She goes back up the stairs, where she is amazed that she can now pass by the Crawler unharmed. Looking back at the Crawler one final time, she sees the face of the lighthouse keeper from the photograph trapped inside its glow. She escapes the Tower, but she resolves to remain inside Area X and follow the coastline to see where it ends, as her husband once tried to do. == Reception == The reviews for Annihilation have been generally positive. According to Book Marks, the book received a "rave" consensus, based on twelve critics: ten "rave", one "positive", and one "mixed". In the May/June 2014 issue of Bookmarks, the book was scored four out of five. The magazine's critical summary reads: "VanderMeer's work finally finds the wide audience it deserves, and none too soon". Jason Sheehan of National Public Radio described the book as page turning and suspenseful, saying, "about three hours later, I looked up again with half the book behind me and wondered how I'd gotten from there to here." Salon.com named it book of the week while GQ Magazine recognized it as one of the top books for the month of February and said that it was "a book about an intelligent, deadly fungus [which] makes for an enthralling read." The Washington Post said that it was "successfully creepy, an old-style
|
{
"page_id": 42008766,
"source": null,
"title": "Annihilation (VanderMeer novel)"
}
|
gothic horror novel set in a not-too-distant future" while The Daily Telegraph said that it "shows signs of being the novel that will allow VanderMeer to break through to a new and larger audience". Entertainment Weekly gave Annihilation a B+ rating. The novel won the 2014 Nebula Award for Best Novel and the 2014 Shirley Jackson Award for best novel. == Film adaptation == In 2014, Paramount Pictures acquired rights to the novel, with writer-director Alex Garland set to adapt the script and direct the film. In May 2015, Natalie Portman entered into talks to star in the film. In November 2015, Jane the Virgin star Gina Rodriguez was in talks to co-star in the film with Portman. In March 2016, it was announced that Oscar Isaac would join the cast of the film. Garland stated to Creative Screenwriting that his adaptation is based solely on the first novel of the original trilogy as it was the only one released at the time. Filming occurred throughout late April 2016 in the South Forest area of Windsor Great Park in England. The film was released on February 23, 2018, receiving positive reviews and grossing $43.1 million. == References == == Further reading == Prendergast, Finola Anne (2017). "Revising Nonhuman Ethics in Jeff VanderMeer's Annihilation". Contemporary Literature. 58 (3): 333–360. doi:10.3368/cl.58.3.333. S2CID 165581423. == External links == Annihilation title listing at the Internet Speculative Fiction Database Annihilation in Goodreads Jeff VanderMeer
|
{
"page_id": 42008766,
"source": null,
"title": "Annihilation (VanderMeer novel)"
}
|
In immunology, clonal deletion is the process of removing T and B lymphocytes from the immune system repertoire. The process of clonal deletion helps prevent recognition and destruction of self host cells, making it a type of negative selection. Ultimately, clonal deletion plays a role in central tolerance. Clonal deletion can help protect individuals against autoimmunity, which occurs when an organism mounts an immune response against its own cells. It is one of many methods used by the body in immune tolerance. == Discovery == Central tolerance and clonal deletion did not get much attention in the early years of immunology. Frank Macfarlane Burnet was the first to suggest the idea of clonal deletion. A couple of key findings helped Burnet in this discovery. In 1936, Erich Traub demonstrated that when a developing mouse in the uterus is infected with a virus, once it is born it will elicit no antibody response to that same virus, whereas a mouse that develops normally, with no viral introduction during development, will develop an immune response to the same virus when infected after birth. Then in 1945, Ray David Owen observed that non-identical cattle twins were unable to reject blood from one another when the cattle had different blood types. The combination of Traub's evidence and Owen's observations helped Burnet and his partner, Frank Fenner, to propose that 'self' markers for host cells were determined at an embryonic stage. Burnet then proposed, in 1959, the clonal selection hypothesis. As part of this hypothesis, Burnet stated that an auto-reactive lymphocyte would be terminated before maturation in order to prevent further proliferation. Burnet and others would then go on to win the Nobel Prize in 1960 for their contributions to immunological tolerance. Now, clonal deletion has been a broadly discussed topic in
|
{
"page_id": 17301695,
"source": null,
"title": "Clonal deletion"
}
|
immunology and transplantation for the past decades. == Function == There are millions of B and T lymphocytes within the immune system. As T and B lymphocytes develop, they can rearrange their genomes in order to express a unique antigen receptor that will recognize a specific epitope on a pathogen. There is a large diversity of epitopes recognized and, as a result, it is possible for some B and T lymphocytes to develop with the ability to recognize self. In order to prevent this from happening, every T and B lymphocyte that is generated is presented with a self antigen. If the antigen receptor present on the lymphocyte interacts with high affinity with the self antigen, then that lymphocyte is categorized as 'self-reactive'. These 'self-reactive' lymphocytes will then undergo the process of clonal deletion. This is achieved through apoptosis of the respective cell, ultimately deleting the cell from the immune system. It is important to note that not all lymphocytes expressing high affinity for self-antigen undergo clonal deletion. If autoreactive cells escape clonal deletion, there are mechanisms in the periphery involving T regulatory cells to prevent the host from developing an autoimmune disease. However, for both B and T cells in the primary lymphoid organs, clonal deletion is the most common form of negative selection. The process of clonal deletion helps protect the host from autoimmunity. == Location and Mechanism == B and T lymphocytes are tested for self reactivity in the primary lymphoid organs, before entering the periphery. The site at which this occurs depends on the type of lymphocyte. B lymphocytes both develop and mature within the bone marrow, whereas T lymphocytes develop in the bone marrow and mature later in the thymus (hence the T). The mechanisms of central tolerance are not completely effective,
|
{
"page_id": 17301695,
"source": null,
"title": "Clonal deletion"
}
|
and some autoreactive lymphocytes can find their way into circulation. However, the immune system has secondary defenses within the periphery to protect against this, referred to as peripheral tolerance. === B Lymphocytes === Regulation of auto-reactive B lymphocytes can occur at many different stages during B cell development. The first line of defense occurs within the bone marrow, before the auto-reactive cell can reach circulation. This occurs after the functional B-cell receptor (BCR) is assembled. If the BCR demonstrates a high-affinity attraction to self-antigen, then clonal deletion can occur at this point. However, some auto-reactive B lymphocytes can slip through this checkpoint and find their way into circulation. If this occurs, then peripheral tolerance comes into effect: the process of removing auto-reactive cells within circulation after they have fully matured. Examples of mechanisms used in peripheral tolerance against auto-reactive B lymphocytes include anergy and antigen receptor desensitization. Like central tolerance, peripheral tolerance is not always fully accurate, leaving the possibility for an auto-reactive lymphocyte to remain in circulation. === T Lymphocytes === The process of removing auto-reactive T lymphocytes occurs in the thymus. The thymus contains two zones: the outer region, called the thymic cortex, and the inner region, called the thymic medulla. Within these regions T lymphocytes undergo a series of positive and negative selection steps. ==== Thymic cortex ==== T lymphocytes first undergo positive selection within the thymic cortex. Here T lymphocytes are tested to see if they can recognize self major histocompatibility complex class I or II (MHC I/II). If the T lymphocyte can recognize self MHC I/II, it will continue maturation and move into the thymic medulla. If the T lymphocyte cannot recognize self MHC I/II, it will undergo death by neglect or apoptosis. Thymic dendritic cells and macrophages appear
|
{
"page_id": 17301695,
"source": null,
"title": "Clonal deletion"
}
|
to be responsible for the apoptotic signals sent to autoreactive T cells in the thymic cortex. ==== Thymic medulla ==== T cells also have the opportunity to undergo clonal deletion within the thymic medulla. Here the T lymphocytes undergo negative selection. At this point they encounter MHC I/II complexes presenting self antigens. If the T lymphocyte interacts with high affinity with a complex presenting self antigen, then that lymphocyte will undergo apoptosis or Treg differentiation. Similarly to B lymphocyte regulation, T lymphocytes have the potential to leave the thymus and still be autoreactive. However, the immune system has evolved to combat this through peripheral tolerance. Mechanisms of peripheral tolerance against auto-reactive T lymphocytes include clonal arrest, clonal anergy, and clonal editing. == Complete vs. incomplete clonal deletion == Complete clonal deletion results in apoptosis of all B and T lymphocytes expressing high affinity for self antigen. Incomplete clonal deletion results in apoptosis of most autoreactive B and T lymphocytes. Complete clonal deletion can create opportunities for molecular mimicry, which has adverse effects for the host. Therefore, incomplete clonal deletion allows for a balance between the host's ability to recognize foreign antigens and self antigens. == Methods of exploitation == === Molecular mimicry === Clonal deletion provides an incentive for microorganisms to develop epitopes similar to proteins found within the host. Because most autoreactive cells undergo clonal deletion, this allows microorganisms with epitopes similar to host antigen to escape recognition and detection by T and B lymphocytes. However, if detected, this can lead to an autoimmune response because of the similarity of the epitopes on the microorganism and host antigen. Examples of this are seen in Streptococcus pyogenes and Borrelia burgdorferi. It is possible, but uncommon, for molecular mimicry to lead to an autoimmune disease. === Superantigens === Superantigens
|
{
"page_id": 17301695,
"source": null,
"title": "Clonal deletion"
}
|
are composed of viral or bacterial proteins and can hijack the clonal deletion process when expressed in the thymus, because they resemble the T-cell receptor (TCR) interaction with self MHC/peptides. Thus, through this process, superantigens can effectively prevent maturation of cognate T cells. == References == == External links == Clonal deletion at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
|
{
"page_id": 17301695,
"source": null,
"title": "Clonal deletion"
}
|
5α-Reductases, also known as 3-oxo-5α-steroid 4-dehydrogenases, are enzymes involved in steroid metabolism. They participate in three metabolic pathways: bile acid biosynthesis, androgen metabolism, and estrogen metabolism. There are three isozymes of 5α-reductase encoded by the genes SRD5A1, SRD5A2, and SRD5A3. 5α-Reductases catalyze the following generalized chemical reaction: a 3-oxo-5α-steroid + acceptor ⇌ a 3-oxo-Δ4-steroid + reduced acceptor where a 3-oxo-5α-steroid and acceptor are substrates, and a corresponding 3-oxo-Δ4-steroid and the reduced acceptor are products. An instance of this generalized reaction that 5α-reductase type 2 catalyzes is: dihydrotestosterone + NADP+ ⇌ {\displaystyle \rightleftharpoons } testosterone + NADPH + H+ where dihydrotestosterone is the 3-oxo-5α-steroid, NADP+ is the acceptor, testosterone is the 3-oxo-Δ4-steroid, and NADPH is the reduced acceptor. == Production and activity == The enzyme is produced in many tissues in both males and females: in the reproductive tract, testes and ovaries, skin, seminal vesicles, prostate, and epididymis, and in many other organs, including the nervous system. There are three isoenzymes of 5α-reductase: steroid 5α-reductase 1, 2, and 3 (SRD5A1, SRD5A2 and SRD5A3). 5α-Reductases act on 3-oxo (3-keto), Δ4,5 C19/C21 steroids as their substrates; "3-keto" refers to the ketone group (a carbon–oxygen double bond) at carbon 3. Carbons 4 and 5 also have a double bond, represented by 'Δ4,5'. The reaction involves a stereospecific and permanent break of the Δ4,5 double bond with the help of NADPH as a cofactor. A hydride anion (H−) is also placed on the α face at the fifth carbon, and a proton on the β face at carbon 4. == Distribution with age == 5α-R1 is expressed in fetal scalp and nongenital skin of the back, anywhere from 5 to 50 times less than in the adult. 5α-R2 is expressed in fetal prostates similarly to adults. 5α-R1 is expressed mainly in the epithelium and 5α-R2 in the stroma of the fetal prostate. Scientists
|
{
"page_id": 1245377,
"source": null,
"title": "5α-Reductase"
}
|