## PatchTSTConfig

This is the configuration class to store the configuration of a [`PatchTSTModel`]. It is used to instantiate a
PatchTST model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the PatchTST
[ibm/patchtst](https://huggingface.co/ibm/patchtst) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
num_input_channels (`int`, *optional*, defaults to 1):
The size of the target variable which by default is 1 for univariate targets. Would be > 1 in case of
multivariate targets.
context_length (`int`, *optional*, defaults to 32):
The context length of the input sequence.
distribution_output (`str`, *optional*, defaults to `"student_t"`):
The distribution emission head for the model when loss is "nll". Could be either "student_t", "normal" or
"negative_binomial".
loss (`str`, *optional*, defaults to `"mse"`):
The loss function for the model corresponding to the `distribution_output` head. For parametric
distributions it is the negative log likelihood ("nll") and for point estimates it is the mean squared
error "mse".
patch_length (`int`, *optional*, defaults to 1):
Define the patch length of the patchification process.
patch_stride (`int`, *optional*, defaults to 1):
Define the stride of the patchification process.
num_hidden_layers (`int`, *optional*, defaults to 3):
Number of hidden layers.
d_model (`int`, *optional*, defaults to 128):
Dimensionality of the transformer layers.
num_attention_heads (`int`, *optional*, defaults to 4):
Number of attention heads for each attention layer in the Transformer encoder.
share_embedding (`bool`, *optional*, defaults to `True`):
Sharing the input embedding across all channels.
channel_attention (`bool`, *optional*, defaults to `False`):
Activate channel attention block in the Transformer to allow channels to attend to each other.
ffn_dim (`int`, *optional*, defaults to 512):
Dimension of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
norm_type (`str` , *optional*, defaults to `"batchnorm"`):
Normalization at each Transformer layer. Can be `"batchnorm"` or `"layernorm"`.
norm_eps (`float`, *optional*, defaults to 1e-05):
A value added to the denominator for numerical stability of normalization.
attention_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for the attention probabilities.
positional_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability in the positional embedding layer.
path_dropout (`float`, *optional*, defaults to 0.0):
The dropout path in the residual block.
ff_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability used between the two layers of the feed-forward networks.
bias (`bool`, *optional*, defaults to `True`):
Whether to add bias in the feed-forward networks.
activation_function (`str`, *optional*, defaults to `"gelu"`):
The non-linear activation function (string) in the Transformer. `"gelu"` and `"relu"` are supported.
pre_norm (`bool`, *optional*, defaults to `True`):
Normalization is applied before self-attention if `pre_norm` is set to `True`. Otherwise, normalization is
applied after the residual block.
positional_encoding_type (`str`, *optional*, defaults to `"sincos"`):
Positional encodings. Options `"random"` and `"sincos"` are supported.
use_cls_token (`bool`, *optional*, defaults to `False`):
Whether cls token is used.
init_std (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated normal weight initialization distribution.
share_projection (`bool`, *optional*, defaults to `True`):
Sharing the projection layer across different channels in the forecast head.
scaling (`str` or `bool`, *optional*, defaults to `"std"`):
Whether to scale the input targets via "mean" scaler, "std" scaler or no scaler if `None`. If `True`, the
scaler is set to "mean".
do_mask_input (`bool`, *optional*):
Apply masking during the pretraining.
mask_type (`str`, *optional*, defaults to `"random"`):
Masking type. Only `"random"` and `"forecast"` are currently supported.
random_mask_ratio (`float`, *optional*, defaults to 0.5):
Masking ratio applied to mask the input data during random pretraining.
num_forecast_mask_patches (`int` or `list`, *optional*, defaults to `[2]`):
Number of patches to be masked at the end of each batch sample. If it is an integer,
all the samples in the batch will have the same number of masked patches. If it is a list,
samples in the batch will be randomly masked by numbers defined in the list. This argument is only used
for forecast pretraining.
channel_consistent_masking (`bool`, *optional*, defaults to `False`):
If channel consistent masking is True, all the channels will have the same masking pattern.
unmasked_channel_indices (`list`, *optional*):
Indices of channels that are not masked during pretraining. Values in the list are numbers between 1 and
`num_input_channels`.
mask_value (`int`, *optional*, defaults to 0):
Values in the masked patches will be filled by `mask_value`.
pooling_type (`str`, *optional*, defaults to `"mean"`):
Pooling of the embedding. `"mean"`, `"max"` and `None` are supported.
head_dropout (`float`, *optional*, defaults to 0.0):
The dropout probability for head.
prediction_length (`int`, *optional*, defaults to 24):
The prediction horizon that the model will output.
num_targets (`int`, *optional*, defaults to 1):
Number of targets for regression and classification tasks. For classification, it is the number of
classes.
output_range (`list`, *optional*):
Output range for regression task. The range of output values can be set to enforce the model to produce
values within a range.
num_parallel_samples (`int`, *optional*, defaults to 100):
The number of samples generated in parallel for probabilistic prediction.
```python
>>> from transformers import PatchTSTConfig, PatchTSTModel
>>> # Initializing a PatchTST configuration with 12 time steps for prediction
>>> configuration = PatchTSTConfig(prediction_length=12)
>>> # Randomly initializing a model (with random weights) from the configuration
>>> model = PatchTSTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```
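The `context_length`, `patch_length`, and `patch_stride` arguments above control how the input series is cut into patches. The following pure-Python sketch of that patchification step is illustrative only (the helper and the values are not part of the library, and are non-default):

```python
def patchify(sequence, patch_length, patch_stride):
    """Split a 1-D sequence into (possibly overlapping) patches,
    mirroring PatchTST's patchification of the context window."""
    num_patches = (len(sequence) - patch_length) // patch_stride + 1
    return [
        sequence[i * patch_stride : i * patch_stride + patch_length]
        for i in range(num_patches)
    ]

series = list(range(32))  # context_length = 32
patches = patchify(series, patch_length=8, patch_stride=8)  # non-overlapping
print(len(patches), len(patches[0]))  # 4 8
```

With `patch_stride` smaller than `patch_length` the patches overlap, which increases their number accordingly.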
## PatchTSTModel

The bare PatchTST Model outputting raw hidden-states without any specific head.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## PatchTSTForPrediction

The PatchTST model for prediction.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## PatchTSTForClassification

The PatchTST model for classification.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## PatchTSTForPretraining

The PatchTST model for pretraining.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
## PatchTSTForRegression

The PatchTST model for regression.
This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
etc.)
This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
and behavior.
Parameters:
config ([`PatchTSTConfig`]):
Model configuration class with all the parameters of the model. Initializing with a config file does not
load the weights associated with the model, only the configuration. Check out the
[`~PreTrainedModel.from_pretrained`] method to load the model weights.
Methods: forward
<!--Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# Longformer

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=longformer">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-longformer-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/longformer-base-4096-finetuned-squadv1">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>
## Overview

The Longformer model was presented in [Longformer: The Long-Document Transformer](https://arxiv.org/pdf/2004.05150.pdf) by Iz Beltagy, Matthew E. Peters, Arman Cohan.
The abstract from the paper is the following:
*Transformer-based models are unable to process long sequences due to their self-attention operation, which scales
quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention
mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or
longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local
windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we
evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In
contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our
pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on
WikiHop and TriviaQA.*
This model was contributed by [beltagy](https://huggingface.co/beltagy). The authors' code can be found [here](https://github.com/allenai/longformer).
## Usage tips

- Since the Longformer is based on RoBERTa, it doesn't have `token_type_ids`. You don't need to indicate which
token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or
`</s>`).
- Longformer is a transformer model replacing the attention matrices by sparse matrices to go faster. Often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token. Some preselected input tokens are still given global attention, but the attention matrix has far fewer parameters, resulting in a speed-up. See the local attention section for more information.
## Longformer Self Attention

Longformer self attention employs self attention on both a "local" context and a "global" context. Most tokens only
attend "locally" to each other meaning that each token attends to its \\(\frac{1}{2} w\\) previous tokens and
\\(\frac{1}{2} w\\) succeeding tokens with \\(w\\) being the window length as defined in
`config.attention_window`. Note that `config.attention_window` can be of type `List` to define a
different \\(w\\) for each layer. A selected few tokens attend "globally" to all other tokens, as it is
conventionally done for all tokens in `BertSelfAttention`.
Note that "locally" and "globally" attending tokens are projected by different query, key and value matrices. Also note
that every "locally" attending token not only attends to tokens within its window \\(w\\), but also to all "globally"
attending tokens so that global attention is *symmetric*.
The user can define which tokens attend "locally" and which tokens attend "globally" by setting the tensor
`global_attention_mask` at run-time appropriately. All Longformer models employ the following logic for
`global_attention_mask`:
- 0: the token attends "locally",
- 1: the token attends "globally".
For more information, please also refer to the [`~LongformerModel.forward`] method.
Using Longformer self attention, the memory and time complexity of the query-key matmul operation, which usually
represents the memory and time bottleneck, can be reduced from \\(\mathcal{O}(n_s \times n_s)\\) to
\\(\mathcal{O}(n_s \times w)\\), with \\(n_s\\) being the sequence length and \\(w\\) being the average window
size. It is assumed that the number of "globally" attending tokens is insignificant as compared to the number of
"locally" attending tokens.
For more information, please refer to the official [paper](https://arxiv.org/pdf/2004.05150.pdf).
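To make the 0/1 convention concrete, here is a small helper (a sketch, not part of the library) that builds such a mask as nested Python lists; in practice you would create a `torch.Tensor` of the same shape as `input_ids` and pass it as `global_attention_mask` to the model's forward method:

```python
def build_global_attention_mask(batch_input_ids, global_positions):
    """Return a 0/1 mask: 1 = token attends "globally", 0 = "locally"."""
    mask = []
    for sequence in batch_input_ids:
        row = [0] * len(sequence)  # default: local (windowed) attention only
        for position in global_positions:
            row[position] = 1  # e.g. the <s>/CLS token for classification
        mask.append(row)
    return mask

batch = [[0, 31414, 232, 2]]  # one tokenized sequence (ids are illustrative)
mask = build_global_attention_mask(batch, global_positions=[0])
print(mask)  # [[1, 0, 0, 0]]
```

Which positions should attend globally is task-dependent: typically the CLS token for classification, or all question tokens for question answering.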
## Training

[`LongformerForMaskedLM`] is trained the exact same way [`RobertaForMaskedLM`] is
trained and should be used as follows:
```python
from transformers import LongformerForMaskedLM, LongformerTokenizer

model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
# Longformer uses RoBERTa's <mask> token; `labels` are the unmasked token ids
input_ids = tokenizer.encode("This is a sentence from <mask> training data", return_tensors="pt")
mlm_labels = tokenizer.encode("This is a sentence from the training data", return_tensors="pt")
loss = model(input_ids, labels=mlm_labels).loss
```
## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)
## LongformerConfig

This is the configuration class to store the configuration of a [`LongformerModel`] or a [`TFLongformerModel`]. It
is used to instantiate a Longformer model according to the specified arguments, defining the model architecture.
Instantiating a
configuration with the defaults will yield a similar configuration to that of the LongFormer
[allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) architecture with a sequence
length 4,096.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
vocab_size (`int`, *optional*, defaults to 30522):
Vocabulary size of the Longformer model. Defines the number of different tokens that can be represented by
the `inputs_ids` passed when calling [`LongformerModel`] or [`TFLongformerModel`].
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_hidden_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
`"relu"`, `"silu"` and `"gelu_new"` are supported.
hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
The dropout ratio for the attention probabilities.
max_position_embeddings (`int`, *optional*, defaults to 512):
The maximum sequence length that this model might ever be used with. Typically set this to something large
just in case (e.g., 512 or 1024 or 2048).
type_vocab_size (`int`, *optional*, defaults to 2):
The vocabulary size of the `token_type_ids` passed when calling [`LongformerModel`] or
[`TFLongformerModel`].
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers. | 420_7_6 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longformer.md | https://huggingface.co/docs/transformers/en/model_doc/longformer/#longformerconfig | .md | layer_norm_eps (`float`, *optional*, defaults to 1e-12):
The epsilon used by the layer normalization layers.
attention_window (`int` or `List[int]`, *optional*, defaults to 512):
Size of an attention window around each token. If an `int`, use the same size for all layers. To specify a
different window size for each layer, use a `List[int]` where `len(attention_window) == num_hidden_layers`.
Example:
```python
>>> from transformers import LongformerConfig, LongformerModel | 420_7_7 |
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/longformer.md | https://huggingface.co/docs/transformers/en/model_doc/longformer/#longformerconfig | .md | >>> # Initializing a Longformer configuration
>>> configuration = LongformerConfig()
>>> # Initializing a model from the configuration
>>> model = LongformerModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
``` | 420_7_8 |
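The `attention_window` argument described above accepts either a single `int` or one value per layer. A minimal sketch of both forms, assuming the `transformers` library is installed (the specific window sizes below are arbitrary illustrative choices):

```python
from transformers import LongformerConfig

# A single int applies the same local attention window to every layer...
uniform = LongformerConfig(attention_window=512)

# ...while a list gives one window size per layer; its length must equal
# num_hidden_layers (12 by default for LongformerConfig).
per_layer = LongformerConfig(
    attention_window=[32, 32, 64, 64, 128, 128, 256, 256, 512, 512, 512, 512]
)
assert len(per_layer.attention_window) == per_layer.num_hidden_layers
```

Smaller windows in lower layers and larger ones higher up is the kind of schedule the per-layer form makes possible.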
## LongformerTokenizer

Constructs a Longformer tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import LongformerTokenizer

>>> tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]

>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one).
</Tip>
This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows to treat the leading word just as any
other word. (The Longformer tokenizer detects the beginning of words by the preceding space.)
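The `cls_token`/`sep_token` framing described above can be sketched in plain Python. This is a hypothetical stand-in for the tokenizer's special-token logic (not the library implementation), using the ids visible in the example outputs earlier: `<s>` = 0, `</s>` = 2:

```python
BOS_ID, EOS_ID = 0, 2  # <s> and </s>, as seen in the example input_ids above

def build_inputs_with_special_tokens(token_ids, token_ids_pair=None):
    # Single sequence: <s> A </s>
    if token_ids_pair is None:
        return [BOS_ID] + token_ids + [EOS_ID]
    # Pair of sequences, RoBERTa-style (which Longformer inherits): <s> A </s></s> B </s>
    return [BOS_ID] + token_ids + [EOS_ID, EOS_ID] + token_ids_pair + [EOS_ID]

assert build_inputs_with_special_tokens([31414, 232]) == [0, 31414, 232, 2]
```

This matches the `[0, 31414, 232, 2]` output shown for `tokenizer("Hello world")`.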
## LongformerTokenizerFast

Construct a "fast" Longformer tokenizer (backed by HuggingFace's *tokenizers* library), derived from the GPT-2
tokenizer, using byte-level Byte-Pair-Encoding.
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will
be encoded differently whether it is at the beginning of the sentence (without space) or not:
```python
>>> from transformers import LongformerTokenizerFast

>>> tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
>>> tokenizer("Hello world")["input_ids"]
[0, 31414, 232, 2]

>>> tokenizer(" Hello world")["input_ids"]
[0, 20920, 232, 2]
```
You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you
call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance.
<Tip>
When used with `is_split_into_words=True`, this tokenizer needs to be instantiated with `add_prefix_space=True`.
</Tip>
This tokenizer inherits from [`PreTrainedTokenizerFast`] which contains most of the main methods. Users should
refer to this superclass for more information regarding those methods.
Args:
vocab_file (`str`):
Path to the vocabulary file.
merges_file (`str`):
Path to the merges file.
errors (`str`, *optional*, defaults to `"replace"`):
Paradigm to follow when decoding bytes to UTF-8. See
[bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information.
bos_token (`str`, *optional*, defaults to `"<s>"`):
The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the beginning of
sequence. The token used is the `cls_token`.
</Tip>
eos_token (`str`, *optional*, defaults to `"</s>"`):
The end of sequence token.
<Tip>
When building a sequence using special tokens, this is not the token that is used for the end of sequence.
The token used is the `sep_token`.
</Tip>
sep_token (`str`, *optional*, defaults to `"</s>"`):
The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
sequence classification or for a text and a question for question answering. It is also used as the last
token of a sequence built with special tokens.
cls_token (`str`, *optional*, defaults to `"<s>"`):
The classifier token which is used when doing sequence classification (classification of the whole sequence
instead of per-token classification). It is the first token of the sequence when built with special tokens.
unk_token (`str`, *optional*, defaults to `"<unk>"`):
The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
token instead.
pad_token (`str`, *optional*, defaults to `"<pad>"`):
The token used for padding, for example when batching sequences of different lengths.
mask_token (`str`, *optional*, defaults to `"<mask>"`):
The token used for masking values. This is the token used when training this model with masked language
modeling. This is the token which the model will try to predict.
add_prefix_space (`bool`, *optional*, defaults to `False`):
Whether or not to add an initial space to the input. This allows to treat the leading word just as any
other word. (The Longformer tokenizer detects the beginning of words by the preceding space.)
trim_offsets (`bool`, *optional*, defaults to `True`):
Whether the post processing step should trim offsets to avoid including whitespaces.
## Longformer specific outputs

models.longformer.modeling_longformer.LongformerBaseModelOutput
Base class for Longformer's outputs, with potential hidden states, local and global attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0; the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
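The layout of the local attention dimension described above (`x` global slots, then the window centered on the token itself) can be made concrete with simple index arithmetic. This is a hypothetical helper for illustration, not part of `transformers`:

```python
# Where a token's attention weights live inside the (x + attention_window + 1)-long
# last dimension of `attentions`, for x global tokens and window size attention_window.
def local_attention_layout(x: int, attention_window: int) -> dict:
    half = attention_window // 2
    return {
        "global_slice": (0, x),          # first x values: weights to globally attended tokens
        "preceding": (x, x + half),      # weights to the attention_window/2 preceding tokens
        "self_index": x + half,          # the token's attention weight to itself
        "succeeding": (x + half + 1, x + attention_window + 1),  # succeeding tokens
        "row_length": x + attention_window + 1,
    }

# With one global token (e.g. the CLS token) and the default window of 512:
layout = local_attention_layout(x=1, attention_window=512)
assert layout["self_index"] == 257       # 1 + 512 / 2
assert layout["row_length"] == 514       # 1 + 512 + 1
```

The asserts simply restate the docstring's formula `x + attention_window / 2` for the self-attention index.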
models.longformer.modeling_longformer.LongformerBaseModelOutputWithPooling
Base class for Longformer's outputs, with potential hidden states, local and global attentions.
Args:
last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
Sequence of hidden-states at the output of the last layer of the model.
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0; the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
models.longformer.modeling_longformer.LongformerMaskedLMOutput
Base class for masked language model outputs.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Masked language modeling (MLM) loss.
logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`):
Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
If the attention window contains a token with global attention, the attention weight at the corresponding
index is set to 0; the value should be accessed from the first `x` attention weights. If a token has global
attention, the attention weights to all other tokens in `attentions` are set to 0; the values should be
accessed from `global_attentions`.
global_attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x)`,
where `x` is the number of tokens with global attention mask.
Global attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token with global attention to every token
in the sequence.
models.longformer.modeling_longformer.LongformerQuestionAnsweringModelOutput
Base class for outputs of question answering Longformer models.
Args:
loss (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided):
Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.
start_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-start scores (before SoftMax).
end_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Span-end scores (before SoftMax).
hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of
shape `(batch_size, sequence_length, hidden_size)`.
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`):
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, x +
attention_window + 1)`, where `x` is the number of tokens with global attention mask.
Local attention weights after the attention softmax, used to compute the weighted average in the
self-attention heads. Those are the attention weights from every token in the sequence to every token with
global attention (first `x` values) and to every token in the attention window (remaining `attention_window
+ 1` values). Note that the first `x` values refer to tokens with fixed positions in the text, but the
remaining `attention_window + 1` values refer to tokens with relative positions: the attention weight of a
token to itself is located at index `x + attention_window / 2` and the `attention_window / 2` preceding
(succeeding) values are the attention weights to the `attention_window / 2` preceding (succeeding) tokens.
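The output tensors described in this section can be exercised offline with a tiny, randomly initialized model. A sketch assuming `torch` and `transformers` are installed; all configuration values below are arbitrary illustrative choices, not a recommended setup:

```python
import torch
from transformers import LongformerConfig, LongformerModel

# Tiny random-weight model: exercises the forward API without downloading a checkpoint.
config = LongformerConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    max_position_embeddings=64,
    attention_window=[8, 8],  # one even window size per layer
)
model = LongformerModel(config)
model.eval()

input_ids = torch.randint(0, 100, (1, 16))
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[:, 0] = 1  # give the first token global attention

with torch.no_grad():
    outputs = model(input_ids, global_attention_mask=global_attention_mask)

# last_hidden_state has shape (batch_size, sequence_length, hidden_size)
hidden = outputs.last_hidden_state
assert hidden.shape == (1, 16, 32)
```

Passing `output_attentions=True` in the same call would additionally populate the `attentions` and `global_attentions` tuples documented above.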