# Data2Vec

## Overview
The Data2Vec model was proposed in [data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language](https://huggingface.co/papers/2202.03555) by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.

Data2Vec proposes a unified framework for self-supervised learning across different data modalities: text, audio and images. Importantly, the predicted targets for pretraining are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets.
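The training objective is easiest to see in code. Below is a deliberately minimal PyTorch sketch of the self-distillation setup, not the authors' implementation: `student` and `teacher` are assumed to be identical Transformer encoders that accept a `(batch, time, dim)` feature tensor and return Hugging Face-style outputs, masking is simplified to zeroing, and `decay`/`top_k` are placeholder hyperparameters.

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # The teacher is an exponential moving average of the student weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)


def data2vec_loss(student, teacher, inputs, mask, top_k=8):
    # The teacher encodes the *full* input; the regression target is the
    # average of its top-k layer outputs (instance-normalized over time).
    with torch.no_grad():
        hidden = teacher(inputs, output_hidden_states=True).hidden_states[-top_k:]
        target = torch.stack(
            [F.instance_norm(h.transpose(1, 2)).transpose(1, 2) for h in hidden]
        ).mean(dim=0)
    # The student sees a masked view of the same input and regresses the
    # contextualized targets at the masked positions only.
    pred = student(inputs * (~mask).unsqueeze(-1).float()).last_hidden_state
    return F.smooth_l1_loss(pred[mask], target[mask])
```

After each optimizer step on the student, `ema_update(teacher, student)` keeps the teacher trailing the student, which is what makes the targets contextualized rather than fixed.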
The abstract from the paper is the following:
*While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a self-distillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec.*
This model was contributed by [edugp](https://huggingface.co/edugp) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
[sayakpaul](https://github.com/sayakpaul) and [Rocketknight1](https://github.com/Rocketknight1) contributed Data2Vec for vision in TensorFlow.

The original code (for NLP and Speech) can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/data2vec).
The original code for vision can be found [here](https://github.com/facebookresearch/data2vec_vision/tree/main/beit).
## Usage tips
- Data2VecAudio, Data2VecText, and Data2VecVision have all been trained with the same self-supervised learning method; the sketch after this list shows the matching preprocessing classes.
- For Data2VecAudio, preprocessing is identical to `Wav2Vec2Model`, including feature extraction.
- For Data2VecText, preprocessing is identical to `RobertaModel`, including tokenization.
- For Data2VecVision, preprocessing is identical to [BeitModel](/docs/transformers/main/ja/model_doc/beit#transformers.BeitModel), including feature extraction.
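Because the preprocessing pipelines are carried over from those models, the standard auto classes resolve to the familiar preprocessors. A minimal sketch (the checkpoint names are the public ones referenced on this page; the auto-class behavior shown is standard library behavior, nothing Data2Vec-specific):

```python
from transformers import AutoTokenizer, AutoProcessor, AutoImageProcessor

# Data2VecText reuses RoBERTa-style tokenization.
tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")

# Data2VecAudio reuses Wav2Vec2-style feature extraction on raw 16 kHz waveforms.
processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")

# Data2VecVision reuses BEiT-style image preprocessing.
image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")
```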
## Resources
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec.

- [Data2VecVisionForImageClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionForImageClassification) is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- To fine-tune `TFData2VecVisionForImageClassification` on a custom dataset, see [this notebook](https://colab.research.google.com/github/sayakpaul/TF-2.0-Hacks/blob/master/data2vec_vision_image_classification.ipynb).
**Data2VecText documentation resources**

- [Text classification task guide](../../en/tasks/sequence_classification)
- [Token classification task guide](../tasks/token_classification)
- [Question answering task guide](../tasks/question_answering)
- [Causal language modeling task guide](../tasks/language_modeling)
- [Masked language modeling task guide](../tasks/masked_language_modeling)
- [Multiple choice task guide](../tasks/multiple_choice)

**Data2VecAudio documentation resources**

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

**Data2VecVision documentation resources**

- [Image classification](../tasks/image_classification)
- [Semantic segmentation](../tasks/semantic_segmentation)
If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
| ## Data2VecTextConfig[[transformers.Data2VecTextConfig]] | |
| #### transformers.Data2VecTextConfig[[transformers.Data2VecTextConfig]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/configuration_data2vec_text.py#L24) | |
This is the configuration class to store the configuration of a `Data2VecTextModel`. It is used to instantiate a Data2Vec Text
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) architecture.
| Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the | |
| documentation from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) for more information. | |
| Examples: | |
| ```python | |
| >>> from transformers import Data2VecTextConfig, Data2VecTextModel | |
| >>> # Initializing a Data2VecText facebook/data2vec-text-base style configuration | |
| >>> configuration = Data2VecTextConfig() | |
| >>> # Initializing a model (with random weights) from the facebook/data2vec-text-base style configuration | |
| >>> model = Data2VecTextModel(configuration) | |
| >>> # Accessing the model configuration | |
| >>> configuration = model.config | |
| ``` | |
| **Parameters:** | |
| vocab_size (`int`, *optional*, defaults to `30522`) : Vocabulary size of the model. Defines the number of different tokens that can be represented by the `input_ids`. | |
| hidden_size (`int`, *optional*, defaults to `768`) : Dimension of the hidden representations. | |
num_hidden_layers (`int`, *optional*, defaults to `12`) : Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to `12`) : Number of attention heads for each attention layer in the Transformer encoder.
| intermediate_size (`int`, *optional*, defaults to `3072`) : Dimension of the MLP representations. | |
hidden_act (`str`, *optional*, defaults to `gelu`) : The non-linear activation function (function or string) in the encoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.
| hidden_dropout_prob (`Union[float, int]`, *optional*, defaults to `0.1`) : The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | |
| attention_probs_dropout_prob (`Union[float, int]`, *optional*, defaults to `0.1`) : The dropout ratio for the attention probabilities. | |
| max_position_embeddings (`int`, *optional*, defaults to `512`) : The maximum sequence length that this model might ever be used with. | |
| type_vocab_size (`int`, *optional*, defaults to `2`) : The vocabulary size of the `token_type_ids`. | |
| initializer_range (`float`, *optional*, defaults to `0.02`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | |
| layer_norm_eps (`float`, *optional*, defaults to `1e-12`) : The epsilon used by the layer normalization layers. | |
| pad_token_id (`int`, *optional*, defaults to `1`) : Token id used for padding in the vocabulary. | |
| bos_token_id (`int`, *optional*, defaults to `0`) : Token id used for beginning-of-stream in the vocabulary. | |
| eos_token_id (`Union[int, list[int]]`, *optional*, defaults to `2`) : Token id used for end-of-stream in the vocabulary. | |
| use_cache (`bool`, *optional*, defaults to `True`) : Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if `config.is_decoder=True` or when the model is a decoder-only generative model. | |
| classifier_dropout (`Union[float, int]`, *optional*) : The dropout ratio for classifier. | |
| is_decoder (`bool`, *optional*, defaults to `False`) : Whether the model is used as a decoder or not. If `False`, the model is used as an encoder. | |
| add_cross_attention (`bool`, *optional*, defaults to `False`) : Whether cross-attention layers should be added to the model. | |
| tie_word_embeddings (`bool`, *optional*, defaults to `True`) : Whether to tie weight embeddings according to model's `tied_weights_keys` mapping. | |
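Any of the arguments above can be overridden when constructing the configuration. A small illustrative sketch (the values are arbitrary, chosen only to show the mechanics; the resulting model is randomly initialized):

```python
from transformers import Data2VecTextConfig, Data2VecTextModel

# A deliberately small variant for quick experiments.
small_config = Data2VecTextConfig(
    hidden_size=256,
    num_hidden_layers=4,
    num_attention_heads=4,
    intermediate_size=1024,
)
model = Data2VecTextModel(small_config)
print(model.config.num_hidden_layers)  # 4
```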
| ## Data2VecAudioConfig[[transformers.Data2VecAudioConfig]] | |
| #### transformers.Data2VecAudioConfig[[transformers.Data2VecAudioConfig]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/configuration_data2vec_audio.py#L26) | |
This is the configuration class to store the configuration of a `Data2VecAudioModel`. It is used to instantiate a Data2Vec Audio
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [facebook/data2vec-audio-base-960h](https://huggingface.co/facebook/data2vec-audio-base-960h) architecture.
| Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the | |
| documentation from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) for more information. | |
| Example: | |
| ```python | |
| >>> from transformers import Data2VecAudioConfig, Data2VecAudioModel | |
| >>> # Initializing a Data2VecAudio facebook/data2vec-audio-base-960h style configuration | |
| >>> configuration = Data2VecAudioConfig() | |
| >>> # Initializing a model (with random weights) from the facebook/data2vec-audio-base-960h style configuration | |
| >>> model = Data2VecAudioModel(configuration) | |
| >>> # Accessing the model configuration | |
| >>> configuration = model.config | |
| ``` | |
| **Parameters:** | |
| vocab_size (`int`, *optional*, defaults to `32`) : Vocabulary size of the model. Defines the number of different tokens that can be represented by the `input_ids`. | |
| hidden_size (`int`, *optional*, defaults to `768`) : Dimension of the hidden representations. | |
num_hidden_layers (`int`, *optional*, defaults to `12`) : Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to `12`) : Number of attention heads for each attention layer in the Transformer encoder.
| intermediate_size (`int`, *optional*, defaults to `3072`) : Dimension of the MLP representations. | |
hidden_act (`str`, *optional*, defaults to `gelu`) : The non-linear activation function (function or string) in the encoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.
| hidden_dropout (`Union[float, int]`, *optional*, defaults to `0.1`) : The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | |
| activation_dropout (`Union[float, int]`, *optional*, defaults to `0.1`) : The dropout ratio for activations inside the fully connected layer. | |
| attention_dropout (`Union[float, int]`, *optional*, defaults to `0.1`) : The dropout ratio for the attention probabilities. | |
| feat_proj_dropout (`float`, *optional*, defaults to 0.0) : The dropout probability for output of the feature encoder. | |
| final_dropout (`float`, *optional*, defaults to 0.1) : The dropout probability for the final projection layer of [Data2VecAudioForCTC](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForCTC). | |
layerdrop (`Union[float, int]`, *optional*, defaults to `0.1`) : The LayerDrop probability. See the [LayerDrop paper](https://huggingface.co/papers/1909.11556) for more details.
| initializer_range (`float`, *optional*, defaults to `0.02`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | |
| layer_norm_eps (`float`, *optional*, defaults to `1e-05`) : The epsilon used by the layer normalization layers. | |
feat_extract_activation (`str`, *optional*, defaults to `"gelu"`) : The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
| conv_dim (`tuple[int]` or `list[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`) : A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers. | |
| conv_stride (`tuple[int]` or `list[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`) : A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. | |
| conv_kernel (`Union[list[int], tuple[int, ...]]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`) : The size of the convolutional kernel. | |
| conv_bias (`bool`, *optional*, defaults to `False`) : Whether the 1D convolutional layers have a bias. | |
| num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16) : Number of groups of 1D convolutional positional embeddings layer. | |
| conv_pos_kernel_size (`int`, *optional*, defaults to `19`) : Kernel size of positional conv module. | |
| num_conv_pos_embeddings (`int`, *optional*, defaults to 128) : Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. | |
mask_time_prob (`float`, *optional*, defaults to 0.05) : Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`.
| mask_time_length (`int`, *optional*, defaults to 10) : Length of vector span along the time axis. | |
mask_time_min_masks (`int`, *optional*, defaults to 2) : The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespectively of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks`.
| mask_feature_prob (`Union[float, int]`, *optional*, defaults to `0.0`) : Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob*len(feature_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment` is `True`. | |
| mask_feature_length (`int`, *optional*, defaults to 10) : Length of vector span along the feature axis. | |
mask_feature_min_masks (`int`, *optional*, defaults to 0) : The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespectively of `mask_feature_prob`. Only relevant if `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`.
| ctc_loss_reduction (`str`, *optional*, defaults to `"sum"`) : Specifies the reduction to apply to the output of `torch.nn.CTCLoss`. Only relevant when training an instance of [Data2VecAudioForCTC](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForCTC). | |
| ctc_zero_infinity (`bool`, *optional*, defaults to `False`) : Whether to zero infinite losses and the associated gradients of `torch.nn.CTCLoss`. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of [Data2VecAudioForCTC](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForCTC). | |
| use_weighted_layer_sum (`bool`, *optional*, defaults to `False`) : Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of [Data2VecAudioForSequenceClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForSequenceClassification). | |
| classifier_proj_size (`int`, *optional*, defaults to 256) : Dimensionality of the projection before token mean-pooling for classification. | |
| tdnn_dim (`tuple[int]` or `list[int]`, *optional*, defaults to `(512, 512, 512, 512, 1500)`) : A tuple of integers defining the number of output channels of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_dim* defines the number of *TDNN* layers. | |
| tdnn_kernel (`tuple[int]` or `list[int]`, *optional*, defaults to `(5, 3, 3, 1, 1)`) : A tuple of integers defining the kernel size of each 1D convolutional layer in the *TDNN* module of the *XVector* model. The length of *tdnn_kernel* has to match the length of *tdnn_dim*. | |
| tdnn_dilation (`tuple[int]` or `list[int]`, *optional*, defaults to `(1, 2, 3, 1, 1)`) : A tuple of integers defining the dilation factor of each 1D convolutional layer in *TDNN* module of the *XVector* model. The length of *tdnn_dilation* has to match the length of *tdnn_dim*. | |
| xvector_output_dim (`int`, *optional*, defaults to 512) : Dimensionality of the *XVector* embedding vectors. | |
| pad_token_id (`int`, *optional*, defaults to `0`) : Token id used for padding in the vocabulary. | |
| bos_token_id (`int`, *optional*, defaults to `1`) : Token id used for beginning-of-stream in the vocabulary. | |
| eos_token_id (`Union[int, list[int]]`, *optional*, defaults to `2`) : Token id used for end-of-stream in the vocabulary. | |
| add_adapter (`bool`, *optional*, defaults to `False`) : Whether a convolutional network should be stacked on top of the Data2VecAudio Encoder. Can be very useful for warm-starting Data2VecAudio for SpeechEncoderDecoder models. | |
| adapter_kernel_size (`int`, *optional*, defaults to 3) : Kernel size of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. | |
| adapter_stride (`int`, *optional*, defaults to 2) : Stride of the convolutional layers in the adapter network. Only relevant if `add_adapter is True`. | |
| num_adapter_layers (`int`, *optional*, defaults to 3) : Number of convolutional layers that should be used in the adapter network. Only relevant if `add_adapter is True`. | |
output_hidden_size (`int`, *optional*) : Dimensionality of the encoder output layer. If not defined, this defaults to `hidden_size`. Only relevant if `add_adapter is True`.
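The `conv_kernel` and `conv_stride` tuples fully determine how many feature frames the convolutional feature encoder produces for a given number of raw audio samples. A small sketch of that arithmetic, using the default values documented above (the helper function name is ours, not part of the library):

```python
def feature_extract_output_length(
    num_samples: int,
    kernels=(10, 3, 3, 3, 3, 2, 2),
    strides=(5, 2, 2, 2, 2, 2, 2),
) -> int:
    # Each 1D conv layer maps a length L to floor((L - kernel) / stride) + 1.
    length = num_samples
    for k, s in zip(kernels, strides):
        length = (length - k) // s + 1
    return length


# One second of 16 kHz audio yields 49 frames, i.e. roughly one frame per 20 ms.
print(feature_extract_output_length(16000))  # 49
```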
| ## Data2VecVisionConfig[[transformers.Data2VecVisionConfig]] | |
| #### transformers.Data2VecVisionConfig[[transformers.Data2VecVisionConfig]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/configuration_data2vec_vision.py#L24) | |
This is the configuration class to store the configuration of a `Data2VecVisionModel`. It is used to instantiate a Data2Vec Vision
model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
defaults will yield a similar configuration to that of the [facebook/data2vec-vision-base](https://huggingface.co/facebook/data2vec-vision-base) architecture.
| Configuration objects inherit from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) and can be used to control the model outputs. Read the | |
| documentation from [PreTrainedConfig](/docs/transformers/main/ja/main_classes/configuration#transformers.PreTrainedConfig) for more information. | |
| Example: | |
| ```python | |
| >>> from transformers import Data2VecVisionConfig, Data2VecVisionModel | |
| >>> # Initializing a Data2VecVision data2vec_vision-base-patch16-224-in22k style configuration | |
| >>> configuration = Data2VecVisionConfig() | |
| >>> # Initializing a model (with random weights) from the data2vec_vision-base-patch16-224-in22k style configuration | |
| >>> model = Data2VecVisionModel(configuration) | |
| >>> # Accessing the model configuration | |
| >>> configuration = model.config | |
| ``` | |
| **Parameters:** | |
| hidden_size (`int`, *optional*, defaults to `768`) : Dimension of the hidden representations. | |
num_hidden_layers (`int`, *optional*, defaults to `12`) : Number of hidden layers in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to `12`) : Number of attention heads for each attention layer in the Transformer encoder.
| intermediate_size (`int`, *optional*, defaults to `3072`) : Dimension of the MLP representations. | |
hidden_act (`str`, *optional*, defaults to `gelu`) : The non-linear activation function (function or string) in the encoder. For example, `"gelu"`, `"relu"`, `"silu"`, etc.
| hidden_dropout_prob (`Union[float, int]`, *optional*, defaults to `0.0`) : The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. | |
| attention_probs_dropout_prob (`Union[float, int]`, *optional*, defaults to `0.0`) : The dropout ratio for the attention probabilities. | |
| initializer_range (`float`, *optional*, defaults to `0.02`) : The standard deviation of the truncated_normal_initializer for initializing all weight matrices. | |
| layer_norm_eps (`float`, *optional*, defaults to `1e-12`) : The epsilon used by the layer normalization layers. | |
| image_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `224`) : The size (resolution) of each image. | |
| patch_size (`Union[int, list[int], tuple[int, int]]`, *optional*, defaults to `16`) : The size (resolution) of each patch. | |
| num_channels (`int`, *optional*, defaults to `3`) : The number of input channels. | |
| use_mask_token (`bool`, *optional*, defaults to `False`) : Whether to use a mask token for masked image modeling. | |
| use_absolute_position_embeddings (`bool`, *optional*, defaults to `False`) : Whether to use absolute position embeddings. | |
| use_relative_position_bias (`bool`, *optional*, defaults to `False`) : Whether to use relative position bias in the self-attention layers. | |
| use_shared_relative_position_bias (`bool`, *optional*, defaults to `False`) : Whether to use the same relative position embeddings across all self-attention layers of the Transformer. | |
| layer_scale_init_value (`float`, *optional*, defaults to `0.1`) : Scale to use in the self-attention layers. 0.1 for base, 1e-6 for large. Set 0 to disable layer scale. | |
| drop_path_rate (`Union[float, int]`, *optional*, defaults to `0.1`) : Drop path rate for the patch fusion. | |
| use_mean_pooling (`bool`, *optional*, defaults to `True`) : Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the CLS token, before applying the classification head. | |
| out_indices (`Union[list[int], tuple[int, ...]]`, *optional*, defaults to `(3, 5, 7, 11)`) : Indices of the intermediate hidden states (feature maps) to return from the backbone. Each index corresponds to one stage of the model. | |
| pool_scales (`tuple[int]`, *optional*, defaults to `[1, 2, 3, 6]`) : Pooling scales used in Pooling Pyramid Module applied on the last feature map. | |
| use_auxiliary_head (`bool`, *optional*, defaults to `True`) : Whether to use an auxiliary head during training. | |
| auxiliary_loss_weight (`float`, *optional*, defaults to 0.4) : Weight of the cross-entropy loss of the auxiliary head. | |
| auxiliary_channels (`int`, *optional*, defaults to 256) : Number of channels to use in the auxiliary head. | |
| auxiliary_num_convs (`int`, *optional*, defaults to 1) : Number of convolutional layers to use in the auxiliary head. | |
| auxiliary_concat_input (`bool`, *optional*, defaults to `False`) : Whether to concatenate the output of the auxiliary head with the input before the classification layer. | |
| semantic_loss_ignore_index (`int`, *optional*, defaults to `255`) : The index that is ignored by the loss function of the semantic segmentation model. | |
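For segmentation use cases, `out_indices`, `pool_scales`, and the auxiliary-head arguments above are the relevant knobs. A brief illustrative sketch (the values shown are simply the documented defaults):

```python
from transformers import Data2VecVisionConfig

config = Data2VecVisionConfig(
    image_size=224,
    patch_size=16,
    out_indices=(3, 5, 7, 11),
    use_auxiliary_head=True,
    auxiliary_loss_weight=0.4,
)
# 224 / 16 = 14 patches per side -> 196 patch tokens (plus the CLS token).
print((config.image_size // config.patch_size) ** 2)  # 196
```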
| ## Data2VecAudioModel[[transformers.Data2VecAudioModel]] | |
| #### transformers.Data2VecAudioModel[[transformers.Data2VecAudioModel]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L671) | |
| The bare Data2Vec Audio Model outputting raw hidden-states without any specific head on top. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
| etc.) | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
| and behavior. | |
##### forward[[transformers.Data2VecAudioModel.forward]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L742)

```python
forward(input_values: torch.Tensor | None, attention_mask: torch.Tensor | None = None, mask_time_indices: torch.FloatTensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, **kwargs)
```

- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) --
| Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file | |
| into an array of type `list[float]`, a `numpy.ndarray` or a `torch.Tensor`, *e.g.* via the torchcodec library | |
| (`pip install torchcodec`) or the soundfile library (`pip install soundfile`). | |
| To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/main/ja/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion | |
| into a tensor of type `torch.FloatTensor`. See `processor_class.__call__` for details. | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **mask_time_indices** (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict | |
| masked extracted features in *config.proj_codevector_dim* space. | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **return_dict** (`bool`, *optional*) -- | |
Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

The [Data2VecAudioModel](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioModel) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Returns a [Wav2Vec2BaseModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising the following elements, depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs:
| - **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) -- Sequence of hidden-states at the output of the last layer of the model. | |
| - **extract_features** (`torch.FloatTensor` of shape `(batch_size, sequence_length, conv_dim[-1])`) -- Sequence of extracted feature vectors of the last convolutional layer of the model. | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of | |
| shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| **Parameters:** | |
| config ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
[Wav2Vec2BaseModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or `tuple(torch.FloatTensor)`
| A [Wav2Vec2BaseModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.Wav2Vec2BaseModelOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
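No inline example is given for the bare model above, so here is a minimal usage sketch in the same style as the CTC example further down (same public checkpoint and demo dataset; output values are elided):

```python
>>> import torch
>>> from transformers import AutoProcessor, Data2VecAudioModel
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base-960h")

>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> outputs.last_hidden_state.shape  # (batch_size, num_frames, hidden_size)
...
```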
| ## Data2VecAudioForAudioFrameClassification[[transformers.Data2VecAudioForAudioFrameClassification]] | |
| #### transformers.Data2VecAudioForAudioFrameClassification[[transformers.Data2VecAudioForAudioFrameClassification]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L1029) | |
| The Data2Vec Audio Model with a frame classification head on top for tasks like Speaker Diarization. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
| etc.) | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
| and behavior. | |
##### forward[[transformers.Data2VecAudioForAudioFrameClassification.forward]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L1061)

```python
forward(input_values: torch.Tensor | None, attention_mask: torch.Tensor | None = None, labels: torch.Tensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, **kwargs)
```

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) --
| Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file | |
| into an array of type `list[float]`, a `numpy.ndarray` or a `torch.Tensor`, *e.g.* via the torchcodec library | |
| (`pip install torchcodec`) or the soundfile library (`pip install soundfile`). | |
| To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/main/ja/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion | |
| into a tensor of type `torch.FloatTensor`. See `Data2VecAudioProcessor.__call__` for details. | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If | |
| `config.num_labels > 1` a classification loss is computed (Cross-Entropy). | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **return_dict** (`bool`, *optional*) -- | |
Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple.

The [Data2VecAudioForAudioFrameClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForAudioFrameClassification) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Returns a [TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising the following elements, depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs:
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`) -- Classification scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoFeatureExtractor, Data2VecAudioForAudioFrameClassification | |
| >>> from datasets import load_dataset | |
| >>> import torch | |
| >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") | |
| >>> dataset = dataset.sort("id") | |
| >>> sampling_rate = dataset.features["audio"].sampling_rate | |
| >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> model = Data2VecAudioForAudioFrameClassification.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> # audio file is decoded on the fly | |
| >>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate) | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> probabilities = torch.sigmoid(logits[0]) | |
| >>> # labels is a one-hot array of shape (num_frames, num_speakers) | |
| >>> labels = (probabilities > 0.5).long() | |
| >>> labels[0].tolist() | |
| ... | |
| ``` | |
| **Parameters:** | |
config ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
| **Returns:** | |
[TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)`
| A [TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
| ## Data2VecAudioForCTC[[transformers.Data2VecAudioForCTC]] | |
| #### transformers.Data2VecAudioForCTC[[transformers.Data2VecAudioForCTC]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L810) | |
| Data2VecAudio Model with a `language modeling` head on top for Connectionist Temporal Classification (CTC). | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
| etc.) | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
| and behavior. | |
##### forward[[transformers.Data2VecAudioForCTC.forward]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L845)

```python
forward(input_values: torch.Tensor | None, attention_mask: torch.Tensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, labels: torch.Tensor | None = None, **kwargs)
```

- **input_values** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) --
| Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file | |
| into an array of type `list[float]`, a `numpy.ndarray` or a `torch.Tensor`, *e.g.* via the torchcodec library | |
| (`pip install torchcodec`) or the soundfile library (`pip install soundfile`). | |
| To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/main/ja/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion | |
| into a tensor of type `torch.FloatTensor`. See `processor_class.__call__` for details. | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*) -- | |
| Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to | |
| the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. | |
| All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., | |
config.vocab_size - 1]`.

The [Data2VecAudioForCTC](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForCTC) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Returns a [CausalLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising the following elements, depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs:
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Language modeling loss (for next-token prediction). | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) -- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoProcessor, Data2VecAudioForCTC | |
| >>> from datasets import load_dataset | |
| >>> import torch | |
| >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") | |
| >>> dataset = dataset.sort("id") | |
| >>> sampling_rate = dataset.features["audio"].sampling_rate | |
| >>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> # audio file is decoded on the fly | |
| >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> predicted_ids = torch.argmax(logits, dim=-1) | |
| >>> # transcribe speech | |
| >>> transcription = processor.batch_decode(predicted_ids) | |
| >>> transcription[0] | |
| ... | |
| >>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids | |
| >>> # compute loss | |
| >>> loss = model(**inputs).loss | |
| >>> round(loss.item(), 2) | |
| ... | |
| ``` | |
| **Parameters:** | |
config ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
| **Returns:** | |
[CausalLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or `tuple(torch.FloatTensor)`
| A [CausalLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
| ## Data2VecAudioForSequenceClassification[[transformers.Data2VecAudioForSequenceClassification]] | |
| #### transformers.Data2VecAudioForSequenceClassification[[transformers.Data2VecAudioForSequenceClassification]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L924) | |
| Data2VecAudio Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like | |
| SUPERB Keyword Spotting. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads,
| etc.) | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
| and behavior. | |
##### forward[[transformers.Data2VecAudioForSequenceClassification.forward]]

[Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L957)

```python
forward(input_values: torch.Tensor | None, attention_mask: torch.Tensor | None = None, output_attentions: bool | None = None, output_hidden_states: bool | None = None, return_dict: bool | None = None, labels: torch.Tensor | None = None, **kwargs)
```

- **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) --
| Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file | |
| into an array of type `list[float]`, a `numpy.ndarray` or a `torch.Tensor`, *e.g.* via the torchcodec library | |
| (`pip install torchcodec`) or the soundfile library (`pip install soundfile`). | |
| To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/main/ja/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion | |
| into a tensor of type `torch.FloatTensor`. See `Data2VecAudioProcessor.__call__` for details. | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If | |
`config.num_labels > 1` a classification loss is computed (Cross-Entropy).

The [Data2VecAudioForSequenceClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForSequenceClassification) forward method overrides the `__call__` special method. Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Returns a [SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising the following elements, depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs:
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example of single-label classification: | |
```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioForSequenceClassification
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioForSequenceClassification.from_pretrained("facebook/data2vec-audio-base-960h")

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()
>>> model.config.id2label[predicted_class_id]
...

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = Data2VecAudioForSequenceClassification.from_pretrained("facebook/data2vec-audio-base-960h", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
...
```
| Example of multi-label classification: | |
```python
>>> import torch
>>> from transformers import AutoFeatureExtractor, Data2VecAudioForSequenceClassification
>>> from datasets import load_dataset

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioForSequenceClassification.from_pretrained("facebook/data2vec-audio-base-960h", problem_type="multi_label_classification")

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = Data2VecAudioForSequenceClassification.from_pretrained(
...     "facebook/data2vec-audio-base-960h", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)

>>> loss = model(**inputs, labels=labels).loss
```
| **Parameters:** | |
config ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights.
| **Returns:** | |
[SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)`
| A [SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
| ## Data2VecAudioForXVector[[transformers.Data2VecAudioForXVector]] | |
| #### transformers.Data2VecAudioForXVector[[transformers.Data2VecAudioForXVector]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L1183) | |
| Data2VecAudio Model with an XVector feature extraction head on top for tasks like Speaker Verification. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecAudioForXVector.forward[[transformers.Data2VecAudioForXVector.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_audio.py#L1233) | |
| `forward(input_values, attention_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, **kwargs)` | |
| - **input_values** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) -- | |
| Float values of the input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file | |
| into an array of type `list[float]`, a `numpy.ndarray` or a `torch.Tensor`, *e.g.* via the torchcodec library | |
| (`pip install torchcodec`) or the soundfile library (`pip install soundfile`); see the loading sketch after the example below. | |
| To prepare the array into `input_values`, the [AutoProcessor](/docs/transformers/main/ja/model_doc/auto#transformers.AutoProcessor) should be used for padding and conversion | |
| into a tensor of type `torch.FloatTensor`. See `Data2VecAudioProcessor.__call__` for details. | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attention tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If | |
| `config.num_labels > 1` a classification loss is computed (Cross-Entropy). | |
| Returns a [XVectorOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.XVectorOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
| The [Data2VecAudioForXVector](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioForXVector) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, config.xvector_output_dim)`) -- Classification hidden states before AMSoftmax. | |
| - **embeddings** (`torch.FloatTensor` of shape `(batch_size, config.xvector_output_dim)`) -- Utterance embeddings used for vector similarity-based retrieval. | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of | |
| shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoFeatureExtractor, Data2VecAudioForXVector | |
| >>> from datasets import load_dataset | |
| >>> import torch | |
| >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") | |
| >>> dataset = dataset.sort("id") | |
| >>> sampling_rate = dataset.features["audio"].sampling_rate | |
| >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> model = Data2VecAudioForXVector.from_pretrained("facebook/data2vec-audio-base-960h") | |
| >>> # audio file is decoded on the fly | |
| >>> inputs = feature_extractor( | |
| ... [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True | |
| ... ) | |
| >>> with torch.no_grad(): | |
| ... embeddings = model(**inputs).embeddings | |
| >>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu() | |
| >>> # the resulting embeddings can be used for cosine similarity-based retrieval | |
| >>> cosine_sim = torch.nn.CosineSimilarity(dim=-1) | |
| >>> similarity = cosine_sim(embeddings[0], embeddings[1]) | |
| >>> threshold = 0.7 # the optimal threshold is dataset-dependent | |
| >>> if similarity < threshold: | |
| ...     print("Speakers are not the same!") | |
| >>> round(similarity.item(), 2) | |
| ... | |
| ``` | |
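| As noted in the `input_values` description above, the raw waveform can also be loaded directly from disk rather than from a dataset. Below is a minimal, hypothetical sketch using the soundfile library; the file name and the 16 kHz mono assumption are illustrative, not part of the original example: | |
| ```python | |
| import soundfile as sf | |
| from transformers import AutoFeatureExtractor, Data2VecAudioForXVector | |
| # "speech.wav" is a hypothetical 16 kHz mono recording | |
| waveform, sampling_rate = sf.read("speech.wav") | |
| feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") | |
| # pads the waveform and returns a `torch.FloatTensor` under `input_values` | |
| inputs = feature_extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt") | |
| model = Data2VecAudioForXVector.from_pretrained("facebook/data2vec-audio-base-960h") | |
| embeddings = model(**inputs).embeddings  # shape: (batch_size, config.xvector_output_dim) | |
| ``` | |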
| **Parameters:** | |
| config ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [XVectorOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.XVectorOutput) or `tuple(torch.FloatTensor)` | |
| A [XVectorOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.XVectorOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecAudioConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecAudioConfig)) and inputs. | |
| ## Data2VecTextModel[[transformers.Data2VecTextModel]] | |
| #### transformers.Data2VecTextModel[[transformers.Data2VecTextModel]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L540) | |
| The bare Data2Vec Text Model outputting raw hidden-states without any specific head on top. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecTextModel.forward[[transformers.Data2VecTextModel.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L566) | |
| `forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, **kwargs)` | |
| - **input_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **encoder_hidden_states** (`torch.Tensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention | |
| if the model is configured as a decoder. | |
| - **encoder_attention_mask** (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in | |
| the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| - **past_key_values** (`~cache_utils.Cache`, *optional*) -- | |
| Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention | |
| blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values` | |
| returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. | |
| Only a `Cache` instance is allowed as input, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). | |
| If no `past_key_values` are passed, `DynamicCache` will be initialized by default. | |
| The model will output the same cache format that is fed as input. | |
| If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't | |
| have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` | |
| of shape `(batch_size, sequence_length)`. | |
| - **use_cache** (`bool`, *optional*) -- | |
| If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see | |
| `past_key_values`). | |
| Returns a [BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextModel](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextModel) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) -- Sequence of hidden-states at the output of the last layer of the model. | |
| - **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) -- Last layer hidden-state of the first token of the sequence (classification token) after further processing | |
| through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns | |
| the classification token after processing through a linear layer and a tanh activation function. The linear | |
| layer weights are trained from the next sentence prediction (classification) objective during pretraining. | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| - **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` and `config.add_cross_attention=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the | |
| weighted average in the cross-attention heads. | |
| - **past_key_values** (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) -- It is a `Cache` instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). | |
| Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if | |
| `config.is_encoder_decoder=True` in the cross-attention blocks) that can be used (see `past_key_values` | |
| input) to speed up sequential decoding. | |
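| Since this section carries no usage example, here is a minimal sketch of extracting hidden states with the bare text model; it only relies on the outputs documented above, and the input sentence is arbitrary: | |
| ```python | |
| import torch | |
| from transformers import AutoTokenizer, Data2VecTextModel | |
| tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base") | |
| inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") | |
| with torch.no_grad(): | |
|     outputs = model(**inputs) | |
| # one hidden vector per token, plus the pooled first-token representation | |
| last_hidden_state = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size) | |
| pooler_output = outputs.pooler_output  # (batch_size, hidden_size) | |
| ``` | |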
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| add_pooling_layer (`bool`, *optional*, defaults to `True`) : Whether to add a pooling layer. | |
| **Returns:** | |
| [BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or `tuple(torch.FloatTensor)` | |
| A [BaseModelOutputWithPoolingAndCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForCausalLM[[transformers.Data2VecTextForCausalLM]] | |
| #### transformers.Data2VecTextForCausalLM[[transformers.Data2VecTextForCausalLM]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L716) | |
| Data2VecText Model with a `language modeling` head on top for CLM fine-tuning. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecTextForCausalLM.forward[[transformers.Data2VecTextForCausalLM.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L740) | |
| `forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, past_key_values=None, use_cache=None, logits_to_keep=0, **kwargs)` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention | |
| if the model is configured as a decoder. | |
| - **encoder_attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in | |
| the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in | |
| `[-100, 0, ..., config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are | |
| ignored (masked); the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. | |
| - **past_key_values** (`tuple[tuple[torch.FloatTensor]]`, *optional*) -- | |
| Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention | |
| blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values` | |
| returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`. | |
| Only a `Cache` instance is allowed as input, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). | |
| If no `past_key_values` are passed, `DynamicCache` will be initialized by default. | |
| The model will output the same cache format that is fed as input. | |
| If `past_key_values` are used, the user is expected to input only unprocessed `input_ids` (those that don't | |
| have their past key value states given to this model) of shape `(batch_size, unprocessed_length)` instead of all `input_ids` | |
| of shape `(batch_size, sequence_length)`. | |
| - **use_cache** (`bool`, *optional*) -- | |
| If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see | |
| `past_key_values`). | |
| - **logits_to_keep** (`Union[int, torch.Tensor]`, *optional*, defaults to `0`) -- | |
| If an `int`, compute logits for the last `logits_to_keep` tokens. If `0`, calculate logits for all | |
| `input_ids` (special case). Only last token logits are needed for generation, and calculating them only for that | |
| token can save memory, which becomes pretty significant for long sequences or large vocabulary size. | |
| If a `torch.Tensor`, must be 1D corresponding to the indices to keep in the sequence length dimension. | |
| This is useful when using packed tensor format (single dimension for batch and sequence length). | |
| Returns a [CausalLMOutputWithCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForCausalLM](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForCausalLM) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Language modeling loss (for next-token prediction). | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) -- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| - **cross_attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Cross attentions weights after the attention softmax, used to compute the weighted average in the | |
| cross-attention heads. | |
| - **past_key_values** (`Cache`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`) -- It is a `Cache` instance. For more details, see our [kv cache guide](https://huggingface.co/docs/transformers/en/kv_cache). | |
| Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see | |
| `past_key_values` input) to speed up sequential decoding. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoTokenizer, Data2VecTextForCausalLM, Data2VecTextConfig | |
| >>> import torch | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base") | |
| >>> config.is_decoder = True | |
| >>> model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base", config=config) | |
| >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") | |
| >>> outputs = model(**inputs) | |
| >>> prediction_logits = outputs.logits | |
| ``` | |
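| To illustrate the `logits_to_keep` argument documented above, a short sketch that reuses `model` and `inputs` from the example and computes logits only for the final position (the memory saving grows with sequence length and vocabulary size): | |
| ```python | |
| # keep logits for the last token only instead of the full sequence | |
| outputs = model(**inputs, logits_to_keep=1) | |
| print(outputs.logits.shape)  # (batch_size, 1, config.vocab_size) | |
| ``` | |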
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [CausalLMOutputWithCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or `tuple(torch.FloatTensor)` | |
| A [CausalLMOutputWithCrossAttentions](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.CausalLMOutputWithCrossAttentions) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForMaskedLM[[transformers.Data2VecTextForMaskedLM]] | |
| #### transformers.Data2VecTextForMaskedLM[[transformers.Data2VecTextForMaskedLM]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L816) | |
| The Data2Vec Text Model with a `language modeling` head on top. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecTextForMaskedLM.forward[[transformers.Data2VecTextForMaskedLM.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L843) | |
| `forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, **kwargs)` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention | |
| if the model is configured as a decoder. | |
| - **encoder_attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in | |
| the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., | |
| config.vocab_size]` (see the `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked); the | |
| loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. | |
| Returns a [MaskedLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForMaskedLM](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForMaskedLM) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Masked language modeling (MLM) loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) -- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoTokenizer, Data2VecTextForMaskedLM | |
| >>> import torch | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForMaskedLM.from_pretrained("facebook/data2vec-text-base") | |
| >>> inputs = tokenizer("The capital of France is .", return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> # retrieve index of <mask> | |
| >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] | |
| >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) | |
| >>> tokenizer.decode(predicted_token_id) | |
| ... | |
| >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] | |
| >>> # mask labels of non-<mask> tokens | |
| >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) | |
| >>> outputs = model(**inputs, labels=labels) | |
| >>> round(outputs.loss.item(), 2) | |
| ... | |
| ``` | |
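| The same fill-mask workflow can be run through the high-level `pipeline` API, which handles mask lookup and decoding automatically; a minimal sketch: | |
| ```python | |
| from transformers import pipeline | |
| fill_mask = pipeline("fill-mask", model="facebook/data2vec-text-base") | |
| # returns the top candidate tokens for the <mask> position with their scores | |
| predictions = fill_mask("The capital of France is <mask>.") | |
| print(predictions[0]["token_str"], predictions[0]["score"]) | |
| ``` | |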
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [MaskedLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or `tuple(torch.FloatTensor)` | |
| A [MaskedLMOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MaskedLMOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForSequenceClassification[[transformers.Data2VecTextForSequenceClassification]] | |
| #### transformers.Data2VecTextForSequenceClassification[[transformers.Data2VecTextForSequenceClassification]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L898) | |
| Data2VecText Model transformer with a sequence classification/regression head on top (a linear layer on top of the | |
| pooled output) e.g. for GLUE tasks. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecTextForSequenceClassification.forward[[transformers.Data2VecTextForSequenceClassification.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L910) | |
| `forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, inputs_embeds=None, labels=None, **kwargs)` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If | |
| `config.num_labels > 1` a classification loss is computed (Cross-Entropy). | |
| Returns a [SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForSequenceClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForSequenceClassification) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example of single-label classification: | |
| ```python | |
| >>> import torch | |
| >>> from transformers import AutoTokenizer, Data2VecTextForSequenceClassification | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base") | |
| >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> predicted_class_id = logits.argmax().item() | |
| >>> model.config.id2label[predicted_class_id] | |
| ... | |
| >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` | |
| >>> num_labels = len(model.config.id2label) | |
| >>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", num_labels=num_labels) | |
| >>> labels = torch.tensor([1]) | |
| >>> loss = model(**inputs, labels=labels).loss | |
| >>> round(loss.item(), 2) | |
| ... | |
| ``` | |
| Example of multi-label classification: | |
| ```python | |
| >>> import torch | |
| >>> from transformers import AutoTokenizer, Data2VecTextForSequenceClassification | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", problem_type="multi_label_classification") | |
| >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] | |
| >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` | |
| >>> num_labels = len(model.config.id2label) | |
| >>> model = Data2VecTextForSequenceClassification.from_pretrained( | |
| ... "facebook/data2vec-text-base", num_labels=num_labels, problem_type="multi_label_classification" | |
| ... ) | |
| >>> labels = torch.sum( | |
| ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 | |
| ... ).to(torch.float) | |
| >>> loss = model(**inputs, labels=labels).loss | |
| ``` | |
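| The `labels` documentation above also covers the regression case (`config.num_labels == 1`, mean-square loss); a minimal sketch, assuming a single float target per sequence: | |
| ```python | |
| import torch | |
| from transformers import AutoTokenizer, Data2VecTextForSequenceClassification | |
| tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| model = Data2VecTextForSequenceClassification.from_pretrained( | |
|     "facebook/data2vec-text-base", num_labels=1, problem_type="regression" | |
| ) | |
| inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") | |
| labels = torch.tensor([1.0])  # one float target per sequence | |
| loss = model(**inputs, labels=labels).loss  # mean-square error | |
| ``` | |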
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or `tuple(torch.FloatTensor)` | |
| A [SequenceClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SequenceClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForMultipleChoice[[transformers.Data2VecTextForMultipleChoice]] | |
| #### transformers.Data2VecTextForMultipleChoice[[transformers.Data2VecTextForMultipleChoice]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L974) | |
| The Data2Vec Text Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a | |
| softmax) e.g. for RocStories/SWAG tasks. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### transformers.Data2VecTextForMultipleChoice.forward[[transformers.Data2VecTextForMultipleChoice.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L985) | |
| `forward(input_ids=None, token_type_ids=None, attention_mask=None, labels=None, position_ids=None, inputs_embeds=None, **kwargs)` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`) -- | |
| Indices of input sequence tokens in the vocabulary. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, | |
| 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., | |
| num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See | |
| `input_ids` above) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, num_choices, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, | |
| config.max_position_embeddings - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, num_choices, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| Returns a [MultipleChoiceModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForMultipleChoice](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForMultipleChoice) forward method, overrides the `__call__` special method. | |
| Although the recipe for forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre and post processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, num_choices)`) -- *num_choices* is the second dimension of the input tensors. (see *input_ids* above). | |
| Classification scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attentions weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoTokenizer, Data2VecTextForMultipleChoice | |
| >>> import torch | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForMultipleChoice.from_pretrained("facebook/data2vec-text-base") | |
| >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." | |
| >>> choice0 = "It is eaten with a fork and a knife." | |
| >>> choice1 = "It is eaten while held in the hand." | |
| >>> labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1 | |
| >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) | |
| >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) # batch size is 1 | |
| >>> # the linear classifier still needs to be trained | |
| >>> loss = outputs.loss | |
| >>> logits = outputs.logits | |
| ``` | |
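| Every input tensor passed to the multiple-choice head must have shape `(batch_size, num_choices, sequence_length)`; a quick check, continuing the example above, makes the reshaping explicit: | |
| ```python | |
| >>> # After unsqueezing, each tensor is (batch_size, num_choices, sequence_length); | |
| >>> # the prompt was paired with two choices above, so num_choices is 2. | |
| >>> batch = {k: v.unsqueeze(0) for k, v in encoding.items()} | |
| >>> tuple(batch["input_ids"].shape[:2]) | |
| (1, 2) | |
| ``` | |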
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [MultipleChoiceModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or `tuple(torch.FloatTensor)` | |
| A [MultipleChoiceModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.MultipleChoiceModelOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForTokenClassification[[transformers.Data2VecTextForTokenClassification]] | |
| #### transformers.Data2VecTextForTokenClassification[[transformers.Data2VecTextForTokenClassification]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L1070) | |
| The Data2Vec Text transformer with a token classification head on top (a linear layer on top of the hidden-states | |
| output) e.g. for Named-Entity-Recognition (NER) tasks. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### forward[[transformers.Data2VecTextForTokenClassification.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L1085) | |
| `forward(input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, labels: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs])` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. | |
| **Returns:** A [TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForTokenClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForTokenClassification) forward method overrides the `__call__` special method. | |
| Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.num_labels)`) -- Classification scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attention weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoTokenizer, Data2VecTextForTokenClassification | |
| >>> import torch | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForTokenClassification.from_pretrained("facebook/data2vec-text-base") | |
| >>> inputs = tokenizer( | |
| ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" | |
| ... ) | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> predicted_token_class_ids = logits.argmax(-1) | |
| >>> # Note that tokens are classified rather than input words, which means that | |
| >>> # there might be more predicted token classes than words. | |
| >>> # Multiple token classes might account for the same word | |
| >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] | |
| >>> predicted_tokens_classes | |
| ... | |
| >>> labels = predicted_token_class_ids | |
| >>> loss = model(**inputs, labels=labels).loss | |
| >>> round(loss.item(), 2) | |
| ... | |
| ``` | |
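| Because predictions are per token, a word that was split into several tokens gets several predictions. A minimal regrouping sketch, continuing the example above and assuming a fast tokenizer (so that `word_ids()` is available): | |
| ```python | |
| >>> # word_ids() maps each token position back to the index of its source word | |
| >>> # (special tokens map to None; they were disabled in the example above) | |
| >>> word_ids = inputs.word_ids() | |
| >>> for word_id, token_class in zip(word_ids, predicted_tokens_classes): | |
| ...     print(word_id, token_class) | |
| ``` | |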
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or `tuple(torch.FloatTensor)` | |
| A [TokenClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.TokenClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecTextForQuestionAnswering[[transformers.Data2VecTextForQuestionAnswering]] | |
| #### transformers.Data2VecTextForQuestionAnswering[[transformers.Data2VecTextForQuestionAnswering]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L1132) | |
| The Data2Vec Text transformer with a span classification head on top for extractive question-answering tasks like | |
| SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### forward[[transformers.Data2VecTextForQuestionAnswering.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_text.py#L1143) | |
| `forward(input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, **kwargs: Unpack[TransformersKwargs])` | |
| - **input_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of input sequence tokens in the vocabulary. Padding will be ignored by default. | |
| Indices can be obtained using [AutoTokenizer](/docs/transformers/main/ja/model_doc/auto#transformers.AutoTokenizer). See [PreTrainedTokenizer.encode()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.encode) and | |
| [PreTrainedTokenizer.__call__()](/docs/transformers/main/ja/internal/tokenization_utils#transformers.PreTrainedTokenizerBase.__call__) for details. | |
| [What are input IDs?](../glossary#input-ids) | |
| - **attention_mask** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: | |
| - 1 for tokens that are **not masked**, | |
| - 0 for tokens that are **masked**. | |
| [What are attention masks?](../glossary#attention-mask) | |
| - **token_type_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, 1]`: | |
| - 0 corresponds to a *sentence A* token, | |
| - 1 corresponds to a *sentence B* token. | |
| [What are token type IDs?](../glossary#token-type-ids) | |
| - **position_ids** (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*) -- | |
| Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, config.n_positions - 1]`. | |
| [What are position IDs?](../glossary#position-ids) | |
| - **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- | |
| Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This | |
| is useful if you want more control over how to convert `input_ids` indices into associated vectors than the | |
| model's internal embedding lookup matrix. | |
| - **start_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for position (index) of the start of the labelled span for computing the token classification loss. | |
| Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence | |
| are not taken into account for computing the loss. | |
| - **end_positions** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for position (index) of the end of the labelled span for computing the token classification loss. | |
| Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence | |
| are not taken into account for computing the loss. | |
| **Returns:** A [QuestionAnsweringModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| The [Data2VecTextForQuestionAnswering](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextForQuestionAnswering) forward method overrides the `__call__` special method. | |
| Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. | |
| - **start_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) -- Span-start scores (before SoftMax). | |
| - **end_logits** (`torch.FloatTensor` of shape `(batch_size, sequence_length)`) -- Span-end scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attention weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoTokenizer, Data2VecTextForQuestionAnswering | |
| >>> import torch | |
| >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") | |
| >>> model = Data2VecTextForQuestionAnswering.from_pretrained("facebook/data2vec-text-base") | |
| >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" | |
| >>> inputs = tokenizer(question, text, return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... outputs = model(**inputs) | |
| >>> answer_start_index = outputs.start_logits.argmax() | |
| >>> answer_end_index = outputs.end_logits.argmax() | |
| >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] | |
| >>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True) | |
| ... | |
| >>> # target is "nice puppet" | |
| >>> target_start_index = torch.tensor([14]) | |
| >>> target_end_index = torch.tensor([15]) | |
| >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) | |
| >>> loss = outputs.loss | |
| >>> round(loss.item(), 2) | |
| ... | |
| ``` | |
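| The start and end logits can also be turned into a rough confidence estimate for the extracted span. A small sketch continuing the example above; the product score is a common heuristic, not something the model itself defines: | |
| ```python | |
| >>> # Softmax over the sequence dimension gives per-position probabilities; | |
| >>> # the span score is the product of the chosen start and end probabilities | |
| >>> start_probs = outputs.start_logits.softmax(-1) | |
| >>> end_probs = outputs.end_logits.softmax(-1) | |
| >>> span_score = (start_probs[0, answer_start_index] * end_probs[0, answer_end_index]).item() | |
| ``` | |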
| **Parameters:** | |
| config ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [QuestionAnsweringModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or `tuple(torch.FloatTensor)` | |
| A [QuestionAnsweringModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.QuestionAnsweringModelOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecTextConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecTextConfig)) and inputs. | |
| ## Data2VecVisionModel[[transformers.Data2VecVisionModel]] | |
| #### transformers.Data2VecVisionModel[[transformers.Data2VecVisionModel]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L684) | |
| The bare Data2Vec Vision Model outputting raw hidden-states without any specific head on top. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### forward[[transformers.Data2VecVisionModel.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L707) | |
| `forward(pixel_values: torch.Tensor, bool_masked_pos: Optional[torch.BoolTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, return_dict: Optional[bool] = None, **kwargs)` | |
| - **pixel_values** (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`) -- | |
| The tensors corresponding to the input images. Pixel values can be obtained using | |
| [BeitImageProcessor](/docs/transformers/main/ja/model_doc/beit#transformers.BeitImageProcessor), the image processor class used for this model. See | |
| `BeitImageProcessor.__call__()` for details. | |
| - **bool_masked_pos** (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*) -- | |
| Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **interpolate_pos_encoding** (`bool`, *optional*, defaults to `False`) -- | |
| Whether to interpolate the pre-trained position encodings. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| **Returns:** A `Data2VecVisionModelOutputWithPooling` or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |
| The [Data2VecVisionModel](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionModel) forward method overrides the `__call__` special method. | |
| Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while | |
| the latter silently ignores them. | |
| - **last_hidden_state** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) -- Sequence of hidden-states at the output of the last layer of the model. | |
| - **pooler_output** (`torch.FloatTensor` of shape `(batch_size, hidden_size)`) -- Average of the last layer hidden states of the patch tokens (excluding the *[CLS]* token) if | |
| *config.use_mean_pooling* is set to True. If set to False, then the final hidden state of the *[CLS]* token | |
| will be returned. | |
| - **hidden_states** (`tuple[torch.FloatTensor, ...]`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, sequence_length, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple[torch.FloatTensor, ...]`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, | |
| sequence_length)`. | |
| Attention weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
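| A minimal usage sketch, following the pattern of the other examples on this page (the checkpoint name matches the one used in the image-classification example below): | |
| ```python | |
| >>> from transformers import AutoImageProcessor, Data2VecVisionModel | |
| >>> import torch | |
| >>> from datasets import load_dataset | |
| >>> dataset = load_dataset("huggingface/cats-image") | |
| >>> image = dataset["test"]["image"][0] | |
| >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") | |
| >>> model = Data2VecVisionModel.from_pretrained("facebook/data2vec-vision-base") | |
| >>> inputs = image_processor(image, return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ...     outputs = model(**inputs) | |
| >>> last_hidden_states = outputs.last_hidden_state  # (batch_size, sequence_length, hidden_size) | |
| ``` | |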
| **Parameters:** | |
| config ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| add_pooling_layer (`bool`, *optional*, defaults to `False`) : Whether to add a pooling layer. | |
| **Returns:** | |
| `Data2VecVisionModelOutputWithPooling` or `tuple(torch.FloatTensor)` | |
| A `Data2VecVisionModelOutputWithPooling` or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |
| ## Data2VecVisionForImageClassification[[transformers.Data2VecVisionForImageClassification]] | |
| #### transformers.Data2VecVisionForImageClassification[[transformers.Data2VecVisionForImageClassification]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L775) | |
| Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of | |
| the final hidden states of the patch tokens) e.g. for ImageNet. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### forward[[transformers.Data2VecVisionForImageClassification.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L788) | |
| `forward(pixel_values: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, return_dict: Optional[bool] = None, **kwargs)` | |
| - **pixel_values** (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) -- | |
| The tensors corresponding to the input images. Pixel values can be obtained using | |
| [BeitImageProcessor](/docs/transformers/main/ja/model_doc/beit#transformers.BeitImageProcessor), the image processor class used for this model. See | |
| `BeitImageProcessor.__call__()` for details. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size,)`, *optional*) -- | |
| Labels for computing the image classification/regression loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If | |
| `config.num_labels > 1` a classification loss is computed (Cross-Entropy). | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **interpolate_pos_encoding** (`bool`, *optional*, defaults to `False`) -- | |
| Whether to interpolate the pre-trained position encodings. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| **Returns:** An [ImageClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |
| The [Data2VecVisionForImageClassification](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionForImageClassification) forward method overrides the `__call__` special method. | |
| Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels)`) -- Classification (or regression if config.num_labels==1) scores (before SoftMax). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states | |
| (also called feature maps) of the model at the output of each stage. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, | |
| sequence_length)`. | |
| Attention weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Example: | |
| ```python | |
| >>> from transformers import AutoImageProcessor, Data2VecVisionForImageClassification | |
| >>> import torch | |
| >>> from datasets import load_dataset | |
| >>> dataset = load_dataset("huggingface/cats-image") | |
| >>> image = dataset["test"]["image"][0] | |
| >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") | |
| >>> model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base") | |
| >>> inputs = image_processor(image, return_tensors="pt") | |
| >>> with torch.no_grad(): | |
| ... logits = model(**inputs).logits | |
| >>> # model predicts one of the 1000 ImageNet classes | |
| >>> predicted_label = logits.argmax(-1).item() | |
| >>> print(model.config.id2label[predicted_label]) | |
| ... | |
| ``` | |
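| To look beyond the single best class, a short follow-up sketch (continuing the example above) converts the logits to probabilities and prints the five highest-scoring labels: | |
| ```python | |
| >>> # Softmax over the class dimension, then take the top-5 entries | |
| >>> probs = logits.softmax(-1) | |
| >>> top5 = probs.topk(5) | |
| >>> for score, idx in zip(top5.values[0], top5.indices[0]): | |
| ...     print(f"{model.config.id2label[idx.item()]}: {score.item():.3f}") | |
| ``` | |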
| **Parameters:** | |
| config ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [ImageClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or `tuple(torch.FloatTensor)` | |
| An [ImageClassifierOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.ImageClassifierOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |
| ## Data2VecVisionForSemanticSegmentation[[transformers.Data2VecVisionForSemanticSegmentation]] | |
| #### transformers.Data2VecVisionForSemanticSegmentation[[transformers.Data2VecVisionForSemanticSegmentation]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L1064) | |
| The Data2Vec Vision Model with a semantic segmentation head on top e.g. for ADE20K, CityScapes. | |
| This model inherits from [PreTrainedModel](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel). Check the superclass documentation for the generic methods the | |
| library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, | |
| etc.). | |
| This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. | |
| Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage | |
| and behavior. | |
| #### forward[[transformers.Data2VecVisionForSemanticSegmentation.forward]] | |
| [Source](https://github.com/huggingface/transformers/blob/main/src/transformers/models/data2vec/modeling_data2vec_vision.py#L1116) | |
| `forward(pixel_values: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, interpolate_pos_encoding: bool = False, return_dict: Optional[bool] = None, **kwargs)` | |
| - **pixel_values** (`torch.Tensor` of shape `(batch_size, num_channels, image_size, image_size)`, *optional*) -- | |
| The tensors corresponding to the input images. Pixel values can be obtained using | |
| [BeitImageProcessor](/docs/transformers/main/ja/model_doc/beit#transformers.BeitImageProcessor), the image processor class used for this model. See | |
| `BeitImageProcessor.__call__()` for details. | |
| - **labels** (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*) -- | |
| Ground truth semantic segmentation maps for computing the loss. Indices should be in `[0, ..., | |
| config.num_labels - 1]`. If `config.num_labels > 1`, a classification loss is computed (Cross-Entropy). | |
| - **output_attentions** (`bool`, *optional*) -- | |
| Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned | |
| tensors for more detail. | |
| - **output_hidden_states** (`bool`, *optional*) -- | |
| Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for | |
| more detail. | |
| - **interpolate_pos_encoding** (`bool`, *optional*, defaults to `False`) -- | |
| Whether to interpolate the pre-trained position encodings. | |
| - **return_dict** (`bool`, *optional*) -- | |
| Whether or not to return a [ModelOutput](/docs/transformers/main/ja/main_classes/output#transformers.utils.ModelOutput) instead of a plain tuple. | |
| **Returns:** A [SemanticSegmenterOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |
| The [Data2VecVisionForSemanticSegmentation](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionForSemanticSegmentation) forward method overrides the `__call__` special method. | |
| Although the recipe for the forward pass needs to be defined within this function, one should call the `Module` | |
| instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while | |
| the latter silently ignores them. | |
| - **loss** (`torch.FloatTensor` of shape `(1,)`, *optional*, returned when `labels` is provided) -- Classification (or regression if config.num_labels==1) loss. | |
| - **logits** (`torch.FloatTensor` of shape `(batch_size, config.num_labels, logits_height, logits_width)`) -- Classification scores for each pixel. | |
| The logits returned do not necessarily have the same size as the `pixel_values` passed as inputs. This is | |
| to avoid doing two interpolations and losing quality when a user needs to resize the logits to the | |
| original image size as post-processing. You should always check your logits shape and resize as needed (see the resizing sketch after the example below). | |
| - **hidden_states** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`) -- Tuple of `torch.FloatTensor` (one for the output of the embeddings, if the model has an embedding layer, + | |
| one for the output of each layer) of shape `(batch_size, patch_size, hidden_size)`. | |
| Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. | |
| - **attentions** (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`) -- Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, patch_size, | |
| sequence_length)`. | |
| Attention weights after the attention softmax, used to compute the weighted average in the self-attention | |
| heads. | |
| Examples: | |
| ```python | |
| >>> from transformers import AutoImageProcessor, Data2VecVisionForSemanticSegmentation | |
| >>> from PIL import Image | |
| >>> import httpx | |
| >>> from io import BytesIO | |
| >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" | |
| >>> with httpx.stream("GET", url) as response: | |
| ... image = Image.open(BytesIO(response.read())) | |
| >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") | |
| >>> model = Data2VecVisionForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base") | |
| >>> inputs = image_processor(images=image, return_tensors="pt") | |
| >>> outputs = model(**inputs) | |
| >>> # logits are of shape (batch_size, num_labels, height, width) | |
| >>> logits = outputs.logits | |
| ``` | |
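| As noted above, these logits can be smaller than the input image. A minimal post-processing sketch, continuing the example above (bilinear interpolation is one reasonable choice, not prescribed by the model), resizes them before taking the per-pixel argmax: | |
| ```python | |
| >>> import torch | |
| >>> # PIL's image.size is (width, height); interpolate expects (height, width) | |
| >>> upsampled_logits = torch.nn.functional.interpolate( | |
| ...     logits, size=image.size[::-1], mode="bilinear", align_corners=False | |
| ... ) | |
| >>> segmentation_map = upsampled_logits.argmax(dim=1)[0]  # (height, width) class indices | |
| ``` | |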
| **Parameters:** | |
| config ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) : Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [from_pretrained()](/docs/transformers/main/ja/main_classes/model#transformers.PreTrainedModel.from_pretrained) method to load the model weights. | |
| **Returns:** | |
| [SemanticSegmenterOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput) or `tuple(torch.FloatTensor)` | |
| A [SemanticSegmenterOutput](/docs/transformers/main/ja/main_classes/output#transformers.modeling_outputs.SemanticSegmenterOutput) or a tuple of | |
| `torch.FloatTensor` (if `return_dict=False` is passed or when `config.return_dict=False`) comprising various | |
| elements depending on the configuration ([Data2VecVisionConfig](/docs/transformers/main/ja/model_doc/data2vec#transformers.Data2VecVisionConfig)) and inputs. | |