XLNetModel
[[autodoc]] XLNetModel
- forward
XLNetLMHeadModel
[[autodoc]] XLNetLMHeadModel
- forward
XLNetForSequenceClassification
[[autodoc]] XLNetForSequenceClassification
- forward
XLNetForMultipleChoice
[[autodoc]] XLNetForMultipleChoice
- forward
XLNetForTokenClassification
[[autodoc]] XLNetForTo... |
TFXLNetModel
[[autodoc]] TFXLNetModel
- call
TFXLNetLMHeadModel
[[autodoc]] TFXLNetLMHeadModel
- call
TFXLNetForSequenceClassification
[[autodoc]] TFXLNetForSequenceClassification
- call
TFXLNetForMultipleChoice
[[autodoc]] TFXLNetForMultipleChoice
- call
TFXLNetForTokenClassification
[[autodoc]] TFXLNet... |
ESM
Overview
This page provides code and pre-trained weights for Transformer protein language models from Meta AI's Fundamental
AI Research Team, including the state-of-the-art ESMFold and ESM-2, as well as the previously released ESM-1b and ESM-1v.
Transformer protein language models were introduced in the paper Biological... |
ESM models are trained with a masked language modeling (MLM) objective.
The HuggingFace port of ESMFold uses portions of the openfold library. The openfold library is licensed under the Apache License 2.0.
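Because the checkpoints are trained with MLM, a quick sanity check is to predict a masked residue. Below is a minimal sketch, assuming the facebook/esm2_t6_8M_UR50D checkpoint and a made-up example sequence (any ESM-2 checkpoint should behave the same way):

```python
import torch
from transformers import AutoTokenizer, EsmForMaskedLM

# Assumed checkpoint, chosen here only for illustration
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

# Mask a single residue in a short (made-up) protein sequence
sequence = "MKTAYIAKQR" + tokenizer.mask_token + "DFGYDVVIE"
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the masked position
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```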
Resources
Text classification task guide
Token classification task guide
Masked language modeling task guide |
EsmConfig
[[autodoc]] EsmConfig
- all
EsmTokenizer
[[autodoc]] EsmTokenizer
- build_inputs_with_special_tokens
- get_special_tokens_mask
- create_token_type_ids_from_sequences
- save_vocabu... |
EsmModel
[[autodoc]] EsmModel
- forward
EsmForMaskedLM
[[autodoc]] EsmForMaskedLM
- forward
EsmForSequenceClassification
[[autodoc]] EsmForSequenceClassification
- forward
EsmForTokenClassification
[[autodoc]] EsmForTokenClassification
- forward
EsmForProteinFolding
[[autodoc]] EsmForProteinFolding
... |
TFEsmModel
[[autodoc]] TFEsmModel
- call
TFEsmForMaskedLM
[[autodoc]] TFEsmForMaskedLM
- call
TFEsmForSequenceClassification
[[autodoc]] TFEsmForSequenceClassification
- call
TFEsmForTokenClassification
[[autodoc]] TFEsmForTokenClassification
- call |
Pyramid Vision Transformer (PVT)
Overview
The PVT model was proposed in
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
by Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. The PVT is a type of
vision transformer that utili... |
PVTv1 on ImageNet-1K
| Model variant | Size | Acc@1 | Params (M) |
|---------------|:----:|:-----:|:----------:|
| PVT-Tiny      | 224  | 75.1  | 13.2       |
| PVT-Small     | 224  | 79.8  | 24.5       |
| PVT-Medium    | 224  | 81.2  | 44.2       |
| PVT-Large     | 224  | 81.7  | ...
Reformer |
Overview
The Reformer model was proposed in the paper Reformer: The Efficient Transformer by Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya.
The abstract from the paper is the following:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can
be prohibitiv... |
Reformer does not work with torch.nn.DataParallel due to a bug in PyTorch; see issue #36035.
Use Axial position encoding (see below for more details). It's a mechanism that avoids a huge positional encoding matrix (when the sequence length is very long) by factorizing it into smaller matrices.
Replace traditional ... |
Axial Positional Encodings
Axial Positional Encodings were first implemented in Google's trax library
and developed by the authors of this model's paper. In models that process very long input sequences, the
conventional position id encodings store an embedding vector of size \(d\) (the config.hidden_size) ... |
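As a rough sketch of how this is exposed in the configuration (the axial_pos_* fields below are ReformerConfig options; the concrete values are illustrative assumptions):

```python
from transformers import ReformerConfig, ReformerModel

# Factorize the positional matrix: 64 * 256 = 16384 positions instead of one
# d-dimensional vector per position, and 64 + 192 = 256 = hidden_size.
config = ReformerConfig(
    hidden_size=256,
    axial_pos_embds=True,
    axial_pos_shape=[64, 256],      # product must match the (padded) sequence length
    axial_pos_embds_dim=[64, 192],  # must sum to hidden_size
)
model = ReformerModel(config)
```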
Text classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide |
ReformerConfig
[[autodoc]] ReformerConfig
ReformerTokenizer
[[autodoc]] ReformerTokenizer
- save_vocabulary
ReformerTokenizerFast
[[autodoc]] ReformerTokenizerFast
ReformerModel
[[autodoc]] ReformerModel
- forward
ReformerModelWithLMHead
[[autodoc]] ReformerModelWithLMHead
- forward
ReformerForMaskedLM
[[... |
CamemBERT
Overview
The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la
Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook's RoBERTa model released in 2019. It is a m... |
This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well
as information on the inputs and outputs.
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
CamembertConfig
[[autodoc]] CamembertConfig
CamembertTokenizer
[[autodoc]] CamembertTokenizer
- build_inputs_with_special_t... |
CamembertModel
[[autodoc]] CamembertModel
CamembertForCausalLM
[[autodoc]] CamembertForCausalLM
CamembertForMaskedLM
[[autodoc]] CamembertForMaskedLM
CamembertForSequenceClassification
[[autodoc]] CamembertForSequenceClassification
CamembertForMultipleChoice
[[autodoc]] CamembertForMultipleChoice
CamembertForTokenClass... |
TFCamembertModel
[[autodoc]] TFCamembertModel
TFCamembertForCausalLM
[[autodoc]] TFCamembertForCausalLM
TFCamembertForMaskedLM
[[autodoc]] TFCamembertForMaskedLM
TFCamembertForSequenceClassification
[[autodoc]] TFCamembertForSequenceClassification
TFCamembertForMultipleChoice
[[autodoc]] TFCamembertForMultipleChoice
TF... |
GPT-NeoX-Japanese
Overview
We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox.
Japanese is a unique language with its large vocabulary and a combination of hiragana, katakana, and kanji writing scripts.
To address this distin... |
from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer
model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
prompt = "人とAIが協調するためには、"
input_ids = tokenizer(prompt, return_tensors="... |
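The snippet above is cut off. For orientation, here is a self-contained sketch of the same flow; the sampling settings are illustrative assumptions, not the ones from the original example:

```python
import torch
from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer

model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")
tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")

prompt = "人とAIが協調するためには、"  # "For people and AI to cooperate, ..."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Illustrative decoding settings
with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=50, do_sample=True, temperature=0.8)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```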
Resources
Causal language modeling task guide
GPTNeoXJapaneseConfig
[[autodoc]] GPTNeoXJapaneseConfig
GPTNeoXJapaneseTokenizer
[[autodoc]] GPTNeoXJapaneseTokenizer
GPTNeoXJapaneseModel
[[autodoc]] GPTNeoXJapaneseModel
- forward
GPTNeoXJapaneseForCausalLM
[[autodoc]] GPTNeoXJapaneseForCausalLM
- forward |
MRA
Overview
The MRA model was proposed in Multi Resolution Analysis (MRA) for Approximate Self-Attention by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.
The abstract from the paper is the following:
Transformers have emerged as a preferred model for many tasks in natural language processin... |
Pop2Piano |
Overview
The Pop2Piano model was proposed in Pop2Piano : Pop Audio-based Piano Cover Generation by Jongho Choi and Kyogu Lee.
Piano covers of pop music are widely enjoyed, but generating them from music is not a trivial task. It requires great
expertise with playing piano as well as knowing different characteristics... |
To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules: |
pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy
Please note that you may need to restart your runtime after installation.
Pop2Piano is an Encoder-Decoder based model like T5.
Pop2Piano can be used to generate midi-audio files for a given audio sequence.
Choosing different composers in Pop2PianoFo... |
Examples
Example using HuggingFace Dataset:
from datasets import load_dataset
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("sweetcocoa/pop2piano")
ds = load_dataset("sweetcocoa/pop2piano_ci", split="... |
Example using your own audio file:
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
audio, sr = librosa.load("", sr=44100) # feel free to change the sr to a suitable value.
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = Pop2PianoProcessor.from_pretrained("swee... |
Example of processing multiple audio files in batch:
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoProcessor
# feel free to change the sr to a suitable value.
audio1, sr1 = librosa.load("", sr=44100)
audio2, sr2 = librosa.load("", sr=44100)
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcocoa/pop2piano")
processor = ... |
Example of processing multiple audio files in batch (Using Pop2PianoFeatureExtractor and Pop2PianoTokenizer):
import librosa
from transformers import Pop2PianoForConditionalGeneration, Pop2PianoFeatureExtractor, Pop2PianoTokenizer
# feel free to change the sr to a suitable value.
audio1, sr1 = librosa.load("", sr=44100)
audio2, sr2 = librosa.load("", sr=44100)
model = Pop2PianoForConditionalGeneration.from_pretrained("sweetcoc... |
Pop2PianoConfig
[[autodoc]] Pop2PianoConfig
Pop2PianoFeatureExtractor
[[autodoc]] Pop2PianoFeatureExtractor
- call
Pop2PianoForConditionalGeneration
[[autodoc]] Pop2PianoForConditionalGeneration
- forward
- generate
Pop2PianoTokenizer
[[autodoc]] Pop2PianoTokenizer
- call
Pop2PianoProcessor
[[autodoc]] ... |
ConvNeXt V2
Overview
The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Tra... |
ConvNeXt V2 architecture. Taken from the original paper.
This model was contributed by adirik. The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2.
[ConvNextV2ForImageClassification] is supported by this examp... |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
ConvNextV2Config
[[autodoc]] ConvNextV2Config
ConvNextV2Model
[[autodoc]] ConvNextV2Model
... |
Donut
Overview
The Donut model was proposed in OCR-free Document Understanding Transformer by
Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
Donut consists of an image Transformer encoder and an autoregressive text Transform... |
Donut high-level overview. Taken from the original paper.
This model was contributed by nielsr. The original code can be found
here.
Usage tips
The quickest way to get started with Donut is by checking the tutorial
notebooks, which show how to use the model
at inference time as well as fine-tuning on custom data.... |
Inference examples
Donut's [VisionEncoderDecoder] model accepts images as input and makes use of
[~generation.GenerationMixin.generate] to autoregressively generate text given the input image.
The [DonutImageProcessor] class is responsible for preprocessing the input image and
[XLMRobertaTokenizer/XLMRobertaTokenizer... |
Step-by-step Document Image Classification |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
device ... |
Step-by-step Document Parsing |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
device ... |
Step-by-step Document Visual Question Answering (DocVQA) |
import re
from transformers import DonutProcessor, VisionEncoderDecoderModel
from datasets import load_dataset
import torch
processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
device = ... |
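The step-by-step snippets above are truncated. For orientation, here is a self-contained sketch of the DocVQA flow; the task-prompt format and token2json post-processing follow Donut's conventions, while document.png and the question are placeholders:

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Placeholder document image and question
image = Image.open("document.png").convert("RGB")
task_prompt = "<s_docvqa><s_question>What is the total amount?</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    return_dict_in_generate=True,
)

# Strip special tokens and the task prompt, then convert the output to JSON
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```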
See the model hub to look for Donut checkpoints.
Training
We refer to the tutorial notebooks.
DonutSwinConfig
[[autodoc]] DonutSwinConfig
DonutImageProcessor
[[autodoc]] DonutImageProcessor
- preprocess
DonutFeatureExtractor
[[autodoc]] DonutFeatureExtractor
- call
DonutProcessor
[[autodoc]] DonutProcessor
... |
mLUKE
Overview
The mLUKE model was proposed in mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models by Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. It's a multilingual extension
of the LUKE model trained on the basis of XLM-RoBERTa.
It is based on XLM-RoBERTa and adds entity embedd... |
Note that mLUKE has its own tokenizer, [MLukeTokenizer]. You can initialize it as follows:
from transformers import MLukeTokenizer
tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
As mLUKE's architecture is equivalent to that of LUKE, one can refer to LUKE's documentation page for all
tips, c... |
QDQBERT
Overview
The QDQBERT model can be referenced in Integer Quantization for Deep Learning Inference: Principles and Empirical
Evaluation by Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev and Paulius
Micikevicius.
The abstract from the paper is the following:
Quantization techniques can reduce the size of Deep... |
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to (i) linear layer
inputs and weights, (ii) matmul inputs, and (iii) residual add inputs in the BERT model.
QDQBERT requires the Pytorch Quantization Toolkit. To install it: pip install pytorch-quantization --extra-ind... |
Set default quantizers
The QDQBERT model adds fake quantization operations (pairs of QuantizeLinear/DequantizeLinear ops) to BERT via
TensorQuantizer in the Pytorch Quantization Toolkit. TensorQuantizer is the module
for quantizing tensors, with QuantDescriptor defining how the tensor should be quantized. Refer to the Pytorch
Quanti... |
import pytorch_quantization.nn as quant_nn
from pytorch_quantization.tensor_quant import QuantDescriptor

# The default tensor quantizer is set to use Max calibration method
input_desc = QuantDescriptor(num_bits=8, calib_method="max")
# The default tensor quantizer is set to be per-channel quantization for weights
weight_de... |
Calibration
Calibration is the process of passing data samples to the quantizer and deciding the best scaling factors for
the tensors. After setting up the tensor quantizers, one can use the following example to calibrate the model:
# Find the TensorQuantizer and enable calibration
for name, module in model.named_m... |
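The loop above is cut off. As a sketch of what enabling calibration and feeding data typically looks like with pytorch-quantization's TensorQuantizer API (model is the QDQBERT model being calibrated, and calib_dataloader is a hypothetical dataloader of representative samples):

```python
import torch

# Enable calibration and disable quantization so full-precision statistics are collected
for name, module in model.named_modules():
    if name.endswith("_input_quantizer"):
        module.enable_calib()
        module.disable_quant()

# Feed a few representative batches through the model (calib_dataloader is hypothetical)
with torch.no_grad():
    for batch in calib_dataloader:
        model(**batch)
```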
# Finalize calibration
for name, module in model.named_modules():
    if name.endswith("_input_quantizer"):
        module.load_calib_amax()
        module.enable_quant()

# If running on GPU, call .cuda() again because new tensors will be created by the calibration process
model.cuda()
# Keep running the quantized... |
Export to ONNX
The goal of exporting to ONNX is to deploy inference with TensorRT. Fake
quantization will be broken into a pair of QuantizeLinear/DequantizeLinear ONNX ops. After setting the static member of
TensorQuantizer to use Pytorch's own fake quantization functions, the fake quantized model can be exported to ONNX; follow... |
from pytorch_quantization.nn import TensorQuantizer
TensorQuantizer.use_fb_fake_quant = True

# Load the calibrated model
# ONNX export
torch.onnx.export()
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling ta... |
QDQBertConfig
[[autodoc]] QDQBertConfig
QDQBertModel
[[autodoc]] QDQBertModel
- forward
QDQBertLMHeadModel
[[autodoc]] QDQBertLMHeadModel
- forward
QDQBertForMaskedLM
[[autodoc]] QDQBertForMaskedLM
- forward
QDQBertForSequenceClassification
[[autodoc]] QDQBertForSequenceClassification
- forward
QDQBer... |
BertGeneration
Overview
The BertGeneration model is a BERT model that can be leveraged for sequence-to-sequence tasks using
[EncoderDecoderModel] as proposed in Leveraging Pre-trained Checkpoints for Sequence Generation
Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
The abstract from the paper is the followi... |
from transformers import BertGenerationDecoder, BertGenerationEncoder

# leverage checkpoints for Bert2Bert model
# use BERT's cls token as BOS token and sep token as EOS token
encoder = BertGenerationEncoder.from_pretrained("google-bert/bert-large-uncased", bos_token_id=101, eos_token_id=102)
# add cross attention layers and use BERT's cls token as BOS token and sep token as EOS token
decode... |
Pretrained [EncoderDecoderModel] checkpoints are also directly available in the model hub, e.g.:
from transformers import AutoTokenizer, EncoderDecoderModel

# instantiate sentence fusion model
sentence_fuser = EncoderDecoderModel.from_pretrained("google/roberta2roberta_L-24_discofuse")
tokenizer = AutoTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
input_ids = tokenizer(
    "This is the first sentence. This is the second sentence.", add_special_tokens=Fa... |
Tips:
[BertGenerationEncoder] and [BertGenerationDecoder] should be used in
combination with [EncoderDecoder].
For summarization, sentence splitting, sentence fusion and translation, no special tokens are required for the input.
Therefore, no EOS token should be added to the end of the input. |
BertGenerationConfig
[[autodoc]] BertGenerationConfig
BertGenerationTokenizer
[[autodoc]] BertGenerationTokenizer
- save_vocabulary
BertGenerationEncoder
[[autodoc]] BertGenerationEncoder
- forward
BertGenerationDecoder
[[autodoc]] BertGenerationDecoder
- forward |
Transformer XL |
This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to pickle.load.
We recommend switching to more recent models for improved security.
In case you would still like to use TransfoXL in your experiments, we recommend using th... |
If you run into any issues running this model, please reinstall the last version that supported this model: v4.35.0.
You can do so by running the following command: pip install -U transformers==4.35.0. |
Overview
The Transformer-XL model was proposed in Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan
Salakhutdinov. It's a causal (uni-directional) transformer with relative positioning (sinusoidal) embeddings which can
... |
Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The
original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
Transformer-XL is one of the few models that has no sequence length limit.
Same as a r... |
TransformerXL does not work with torch.nn.DataParallel due to a bug in PyTorch, see issue #36035
Resources
Text classification task guide
Causal language modeling task guide |
TransfoXLConfig
[[autodoc]] TransfoXLConfig
TransfoXLTokenizer
[[autodoc]] TransfoXLTokenizer
- save_vocabulary
TransfoXL specific outputs
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput
[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput
[[autod... |
TransfoXLModel
[[autodoc]] TransfoXLModel
- forward
TransfoXLLMHeadModel
[[autodoc]] TransfoXLLMHeadModel
- forward
TransfoXLForSequenceClassification
[[autodoc]] TransfoXLForSequenceClassification
- forward
TFTransfoXLModel
[[autodoc]] TFTransfoXLModel
- call
TFTransfoXLLMHeadModel
[[autodoc]] TFTrans... |
Internal Layers
[[autodoc]] AdaptiveEmbedding
[[autodoc]] TFAdaptiveEmbedding |
DETA
Overview
The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl.
DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss
with one-to-many label assignments used i... |
DETA overview. Taken from the original paper.
This model was contributed by nielsr.
The original code can be found here.
Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA.
Demo notebooks for DETA can be found here.
See also: Object detection task gui... |
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
DetaConfig
[[autodoc]] DetaConfig
DetaImageProcessor
[[autodoc]] DetaImageProcessor
- pre... |
Starcoder2
Overview
StarCoder2 is a family of open LLMs for code and comes in 3 different sizes with 3B, 7B and 15B parameters. The flagship StarCoder2-15B model is trained on over 4 trillion tokens and 600+ programming languages from The Stack v2. All models use Grouped Query Attention, a context window of 16,384 tok... |
The BigCode project, an open-scientific collaboration focused on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder2. In partnership with Software Heritage (SWH), we build The Stack v2 on top of the digital commons of their source code archive. Alongside the SWH repositori... |
License
The models are licensed under the BigCode OpenRAIL-M v1 license agreement.
Usage tips
The StarCoder2 models can be found in the HuggingFace hub. You can find some examples for inference and fine-tuning in StarCoder2's GitHub repo.
These ready-to-use checkpoints can be downloaded and used via the HuggingFace Hub... |
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("bigcode/starcoder2-7b", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoder2-7b")
prompt = "def print_hello_world():"
model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda"... |
Starcoder2Config
[[autodoc]] Starcoder2Config
Starcoder2Model
[[autodoc]] Starcoder2Model
- forward
Starcoder2ForCausalLM
[[autodoc]] Starcoder2ForCausalLM
- forward
Starcoder2ForSequenceClassification
[[autodoc]] Starcoder2ForSequenceClassification
- forward |
ELECTRA |
Overview
The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than
Generators. ELECTRA is a new pretraining approach which trains two
transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and
is therefore t... |
ELECTRA is a pretraining approach, so there are nearly no changes to the underlying model: BERT. The
only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller,
while the hidden size is larger. An additional projection layer (linear) is used to pr... |
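A minimal sketch of that separation, using illustrative sizes (when embedding_size differs from hidden_size, the model projects embeddings up to the hidden size before the encoder):

```python
from transformers import ElectraConfig, ElectraModel

# Illustrative sizes: small embeddings, larger hidden states
config = ElectraConfig(embedding_size=128, hidden_size=256, num_attention_heads=4, intermediate_size=1024)
model = ElectraModel(config)
print(config.embedding_size, config.hidden_size)  # 128 256
```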
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide |
ElectraConfig
[[autodoc]] ElectraConfig
ElectraTokenizer
[[autodoc]] ElectraTokenizer
ElectraTokenizerFast
[[autodoc]] ElectraT... |
ElectraModel
[[autodoc]] ElectraModel
- forward
ElectraForPreTraining
[[autodoc]] ElectraForPreTraining
- forward
ElectraForCausalLM
[[autodoc]] ElectraForCausalLM
- forward
ElectraForMaskedLM
[[autodoc]] ElectraForMaskedLM
- forward
ElectraForSequenceClassification
[[autodoc]] ElectraForSequenceClass... |
TFElectraModel
[[autodoc]] TFElectraModel
- call
TFElectraForPreTraining
[[autodoc]] TFElectraForPreTraining
- call
TFElectraForMaskedLM
[[autodoc]] TFElectraForMaskedLM
- call
TFElectraForSequenceClassification
[[autodoc]] TFElectraForSequenceClassification
- call
TFElectraForMultipleChoice
[[autodoc... |
FlaxElectraModel
[[autodoc]] FlaxElectraModel
- call
FlaxElectraForPreTraining
[[autodoc]] FlaxElectraForPreTraining
- call
FlaxElectraForCausalLM
[[autodoc]] FlaxElectraForCausalLM
- call
FlaxElectraForMaskedLM
[[autodoc]] FlaxElectraForMaskedLM
- call
FlaxElectraForSequenceClassification
[[autodoc]]... |
RoBERTa-PreLayerNorm
Overview
The RoBERTa-PreLayerNorm model was proposed in fairseq: A Fast, Extensible Toolkit for Sequence Modeling by Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, Michael Auli.
It is identical to using the --encoder-normalize-before flag in fairseq.
The... |
The implementation is the same as RoBERTa, except that instead of using Add and Norm it does Norm and Add. Add and Norm refers to the addition and layer normalization described in Attention Is All You Need.
This is identical to using the --encoder-normalize-before flag in fairseq.
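As an illustrative sketch of the ordering difference (generic PyTorch, not taken from the model code itself):

```python
import torch
from torch import nn

def post_ln_block(x, sublayer, norm):   # RoBERTa: "Add and Norm"
    return norm(x + sublayer(x))

def pre_ln_block(x, sublayer, norm):    # RoBERTa-PreLayerNorm: "Norm and Add"
    return x + sublayer(norm(x))

x = torch.randn(1, 4, 16)
sublayer, norm = nn.Linear(16, 16), nn.LayerNorm(16)
print(post_ln_block(x, sublayer, norm).shape, pre_ln_block(x, sublayer, norm).shape)
```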
Resources
Text classification task guide
Token classification task guide
Question answering task guide
Causal language modeling task guide
Masked language modeling task guide
Multiple choice task guide
RobertaPreLayerNormConfig
[[autodoc]] RobertaPreLayerNormConfig |
RobertaPreLayerNormModel
[[autodoc]] RobertaPreLayerNormModel
- forward
RobertaPreLayerNormForCausalLM
[[autodoc]] RobertaPreLayerNormForCausalLM
- forward
RobertaPreLayerNormForMaskedLM
[[autodoc]] RobertaPreLayerNormForMaskedLM
- forward
RobertaPreLayerNormForSequenceClassification
[[autodoc]] RobertaPr... |
TFRobertaPreLayerNormModel
[[autodoc]] TFRobertaPreLayerNormModel
- call
TFRobertaPreLayerNormForCausalLM
[[autodoc]] TFRobertaPreLayerNormForCausalLM
- call
TFRobertaPreLayerNormForMaskedLM
[[autodoc]] TFRobertaPreLayerNormForMaskedLM
- call
TFRobertaPreLayerNormForSequenceClassification
[[autodoc]] TFRo... |
FlaxRobertaPreLayerNormModel
[[autodoc]] FlaxRobertaPreLayerNormModel
- call
FlaxRobertaPreLayerNormForCausalLM
[[autodoc]] FlaxRobertaPreLayerNormForCausalLM
- call
FlaxRobertaPreLayerNormForMaskedLM
[[autodoc]] FlaxRobertaPreLayerNormForMaskedLM
- call
FlaxRobertaPreLayerNormForSequenceClassification
[[... |
MobileViTV2
Overview
The MobileViTV2 model was proposed in Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
MobileViTV2 is the second version of MobileViT, constructed by replacing the multi-headed self-attention in MobileViT with separable self-attention.
The abstract fr... |