/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights from DETR's decoder after the attention softmax, used to compute the weighted average in the self-attention heads. models.maskformer.modeling_maskformer.MaskFormerForInstanceSegmentationOutput Class for outputs of [`MaskFormerForInstanceSegmentation`]. This output can be directly passed to [`~MaskFormerImageProcessor.post_process_semantic_segmentation`] or
156_5_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
This output can be directly passed to [`~MaskFormerImageProcessor.post_process_semantic_segmentation`], [`~MaskFormerImageProcessor.post_process_instance_segmentation`] or [`~MaskFormerImageProcessor.post_process_panoptic_segmentation`] depending on the task. Please see [`MaskFormerImageProcessor`] for details regarding usage. Args: loss (`torch.Tensor`, *optional*): The computed loss, returned when labels are present. class_queries_logits (`torch.FloatTensor`):
156_5_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
The computed loss, returned when labels are present. class_queries_logits (`torch.FloatTensor`): A tensor of shape `(batch_size, num_queries, num_labels + 1)` representing the proposed classes for each query. Note the `+ 1` is needed because we incorporate the null class. masks_queries_logits (`torch.FloatTensor`): A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each query.
156_5_8
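The two logit tensors described above can be combined into a per-pixel semantic map. The sketch below is a hedged NumPy stand-in (random values, not the library's implementation): it drops the null class, applies a sigmoid to the mask logits, and marginalizes over the queries.

```python
import numpy as np

batch_size, num_queries, num_labels = 1, 100, 150
height, width = 32, 32

rng = np.random.default_rng(0)
# Stand-ins for the model outputs; shapes follow the docstring above.
class_queries_logits = rng.normal(size=(batch_size, num_queries, num_labels + 1))  # +1 for the null class
masks_queries_logits = rng.normal(size=(batch_size, num_queries, height, width))

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Drop the null class, sigmoid the masks, then marginalize over queries.
class_probs = softmax(class_queries_logits, axis=-1)[..., :-1]       # (b, q, num_labels)
mask_probs = 1.0 / (1.0 + np.exp(-masks_queries_logits))             # (b, q, h, w)
segmentation = np.einsum("bqc,bqhw->bchw", class_probs, mask_probs)  # (b, num_labels, h, w)
semantic_map = segmentation.argmax(axis=1)                           # (b, h, w)
```

In practice the equivalent combination is done for you by `post_process_semantic_segmentation`.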
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
A tensor of shape `(batch_size, num_queries, height, width)` representing the proposed masks for each query. encoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Last hidden states (final feature map) of the last stage of the encoder model (backbone). pixel_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN).
156_5_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Last hidden states (final feature map) of the last stage of the pixel decoder model (FPN). transformer_decoder_last_hidden_state (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`): Last hidden states (final feature map) of the last stage of the transformer decoder model. encoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
156_5_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the encoder model at the output of each stage. pixel_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
156_5_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, num_channels, height, width)`. Hidden-states (also called feature maps) of the pixel decoder model at the output of each stage. transformer_decoder_hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`):
156_5_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each stage) of shape `(batch_size, sequence_length, hidden_size)`. Hidden-states of the transformer decoder at the output of each stage. hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `torch.FloatTensor` containing `encoder_hidden_states`, `pixel_decoder_hidden_states` and `decoder_hidden_states`.
156_5_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
Tuple of `torch.FloatTensor` containing `encoder_hidden_states`, `pixel_decoder_hidden_states` and `decoder_hidden_states`. attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, sequence_length)`. Attention weights from DETR's decoder after the attention softmax, used to compute the
156_5_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformer-specific-outputs
.md
sequence_length)`. Attention weights from DETR's decoder after the attention softmax, used to compute the weighted average in the self-attention heads.
156_5_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
This is the configuration class to store the configuration of a [`MaskFormerModel`]. It is used to instantiate a MaskFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the MaskFormer [facebook/maskformer-swin-base-ade](https://huggingface.co/facebook/maskformer-swin-base-ade) architecture trained on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150).
156_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
on [ADE20k-150](https://huggingface.co/datasets/scene_parse_150). Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Currently, MaskFormer only supports the [Swin Transformer](swin) as backbone. Args: mask_feature_size (`int`, *optional*, defaults to 256): The masks' feature size; this value will also be used to specify the Feature Pyramid Network features' size.
156_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
The masks' feature size; this value will also be used to specify the Feature Pyramid Network features' size. no_object_weight (`float`, *optional*, defaults to 0.1): Weight to apply to the null (no object) class. use_auxiliary_loss (`bool`, *optional*, defaults to `False`): If `True`, [`MaskFormerForInstanceSegmentationOutput`] will contain the auxiliary losses computed using the logits from each decoder's stage. backbone_config (`Dict`, *optional*):
156_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
logits from each decoder's stage. backbone_config (`Dict`, *optional*): The configuration passed to the backbone. If unset, the configuration corresponding to `swin-base-patch4-window12-384` will be used. backbone (`str`, *optional*): Name of backbone to use when `backbone_config` is `None`. If `use_pretrained_backbone` is `True`, this will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone`
156_6_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
will load the corresponding pretrained weights from the timm or transformers library. If `use_pretrained_backbone` is `False`, this loads the backbone's config and uses that to initialize the backbone with random weights. use_pretrained_backbone (`bool`, *optional*, defaults to `False`): Whether to use pretrained weights for the backbone. use_timm_backbone (`bool`, *optional*, defaults to `False`): Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library.
156_6_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
Whether to load `backbone` from the timm library. If `False`, the backbone is loaded from the transformers library. backbone_kwargs (`dict`, *optional*): Keyword arguments to be passed to AutoBackbone when loading from a checkpoint, e.g. `{'out_indices': (0, 1, 2, 3)}`. Cannot be specified if `backbone_config` is set. decoder_config (`Dict`, *optional*): The configuration passed to the transformer decoder model. If unset, the base config for `detr-resnet-50` will be used.
156_6_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
The configuration passed to the transformer decoder model. If unset, the base config for `detr-resnet-50` will be used. init_std (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. init_xavier_std (`float`, *optional*, defaults to 1): The scaling factor used for the Xavier initialization gain in the HM Attention map module. dice_weight (`float`, *optional*, defaults to 1.0): The weight for the dice loss.
156_6_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
dice_weight (`float`, *optional*, defaults to 1.0): The weight for the dice loss. cross_entropy_weight (`float`, *optional*, defaults to 1.0): The weight for the cross entropy loss. mask_weight (`float`, *optional*, defaults to 20.0): The weight for the mask loss. output_auxiliary_logits (`bool`, *optional*): Whether the model should output its `auxiliary_logits`. Raises: `ValueError`: Raised if the backbone model type selected is not in `["swin"]` or the decoder model type selected is not in `["detr"]`
156_6_7
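As a rough illustration of how the three loss weights enter the training objective, the sketch below forms the weighted sum with the default weights from the docstring. This is a hedged sketch with hypothetical loss values and a hypothetical helper name, not the library's actual loss code.

```python
# Default weights from the MaskFormerConfig docstring
dice_weight = 1.0
cross_entropy_weight = 1.0
mask_weight = 20.0

def combine_losses(loss_dice, loss_cross_entropy, loss_mask):
    """Hypothetical helper: weighted sum of the three loss terms."""
    return (dice_weight * loss_dice
            + cross_entropy_weight * loss_cross_entropy
            + mask_weight * loss_mask)

# Example with made-up per-term loss values:
total = combine_losses(0.5, 0.3, 0.1)  # 0.5 + 0.3 + 20.0 * 0.1 = 2.8
```

Note how the default `mask_weight` of 20.0 makes the mask term dominate unless its raw value is small.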
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
Raised if the backbone model type selected is not in `["swin"]` or the decoder model type selected is not in `["detr"]` Examples:

```python
>>> from transformers import MaskFormerConfig, MaskFormerModel
156_6_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerconfig
.md
>>> # Initializing a MaskFormer facebook/maskformer-swin-base-ade configuration
>>> configuration = MaskFormerConfig()

>>> # Initializing a model (with random weights) from the facebook/maskformer-swin-base-ade style configuration
>>> model = MaskFormerModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
156_6_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
Constructs a MaskFormer image processor. The image processor can be used to prepare image(s) and optional targets for the model. This image processor inherits from [`BaseImageProcessor`] which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Args: do_resize (`bool`, *optional*, defaults to `True`): Whether to resize the input to a certain `size`. size (`int`, *optional*, defaults to 800):
156_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
Whether to resize the input to a certain `size`. size (`int`, *optional*, defaults to 800): Resize the input to the given size. Only has an effect if `do_resize` is set to `True`. If size is a sequence like `(width, height)`, the output size will be matched to it. If size is an int, the smaller edge of the image will be matched to this number, i.e. if `height > width`, the image will be rescaled to `(size * height / width, size)`. size_divisor (`int`, *optional*, defaults to 32):
156_7_1
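The smaller-edge rule described above can be sketched as a small helper. The function name is hypothetical and the integer truncation is an assumption; it only illustrates the arithmetic in the docstring.

```python
def get_resize_output_size(height, width, size):
    """Match the smaller edge of the image to `size`, preserving aspect ratio.

    Returns (new_height, new_width). Hypothetical helper, not the library API.
    """
    if height > width:
        # width is the smaller edge: scale it up/down to `size`
        return int(size * height / width), size
    # height is the smaller (or equal) edge
    return size, int(size * width / height)

# A portrait 600x400 image with size=800 becomes 1200x800:
print(get_resize_output_size(600, 400, 800))  # (1200, 800)
```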
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
height / width, size)`. size_divisor (`int`, *optional*, defaults to 32): Some backbones require image dimensions that are divisible by a certain number. If not passed, it defaults to the value used in Swin Transformer. resample (`int`, *optional*, defaults to `Resampling.BILINEAR`): An optional resampling filter. This can be one of `PIL.Image.Resampling.NEAREST`, `PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`,
156_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
`PIL.Image.Resampling.BOX`, `PIL.Image.Resampling.BILINEAR`, `PIL.Image.Resampling.HAMMING`, `PIL.Image.Resampling.BICUBIC` or `PIL.Image.Resampling.LANCZOS`. Only has an effect if `do_resize` is set to `True`. do_rescale (`bool`, *optional*, defaults to `True`): Whether to rescale the input to a certain `scale`. rescale_factor (`float`, *optional*, defaults to `1 / 255`): Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`.
156_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
Rescale the input by the given factor. Only has an effect if `do_rescale` is set to `True`. do_normalize (`bool`, *optional*, defaults to `True`): Whether or not to normalize the input with mean and standard deviation. image_mean (`List[float]`, *optional*, defaults to `[0.485, 0.456, 0.406]`): The sequence of means for each channel, to be used when normalizing images. Defaults to the ImageNet mean. image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`):
156_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
image_std (`List[float]`, *optional*, defaults to `[0.229, 0.224, 0.225]`): The sequence of standard deviations for each channel, to be used when normalizing images. Defaults to the ImageNet std. ignore_index (`int`, *optional*): Label to be assigned to background pixels in segmentation maps. If provided, segmentation map pixels denoted with 0 (background) will be replaced with `ignore_index`. do_reduce_labels (`bool`, *optional*, defaults to `False`):
156_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
denoted with 0 (background) will be replaced with `ignore_index`. do_reduce_labels (`bool`, *optional*, defaults to `False`): Whether or not to decrement all label values of segmentation maps by 1. Usually used for datasets where 0 denotes the background, and the background itself is not included among the classes of the dataset (e.g. ADE20k). The background label will be replaced by `ignore_index`. num_labels (`int`, *optional*): The number of labels in the segmentation map. Methods: preprocess - encode_inputs
156_7_6
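A minimal NumPy sketch of the label reduction described above, on a tiny hand-made map (the values and the choice of `255` as `ignore_index` are illustrative assumptions):

```python
import numpy as np

ignore_index = 255                        # hypothetical ignore value
seg_map = np.array([[0, 1, 2],
                    [3, 0, 1]])           # 0 marks background pixels

reduced = seg_map.astype(np.int64) - 1    # decrement every label by 1
reduced[seg_map == 0] = ignore_index      # background becomes ignore_index
# reduced is now [[255, 0, 1], [2, 255, 0]]
```

After reduction, class 1 of the dataset becomes label 0, and background pixels no longer collide with a real class.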
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerimageprocessor
.md
num_labels (`int`, *optional*): The number of labels in the segmentation map. Methods: preprocess - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation
156_7_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerfeatureextractor
.md
No docstring available for MaskFormerFeatureExtractor Methods: __call__ - encode_inputs - post_process_semantic_segmentation - post_process_instance_segmentation - post_process_panoptic_segmentation
156_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformermodel
.md
The bare MaskFormer Model outputting raw hidden-states without any specific head on top. This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`MaskFormerConfig`]): Model configuration class with all the parameters of the model.
156_9_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformermodel
.md
behavior. Parameters: config ([`MaskFormerConfig`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
156_9_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/maskformer.md
https://huggingface.co/docs/transformers/en/model_doc/maskformer/#maskformerforinstancesegmentation
.md
No docstring available for MaskFormerForInstanceSegmentation Methods: forward
156_10_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/
.md
<!--Copyright 2023 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
157_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
157_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
The KOSMOS-2 model was proposed in [Kosmos-2: Grounding Multimodal Large Language Models to the World](https://arxiv.org/abs/2306.14824) by Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei. KOSMOS-2 is a Transformer-based causal language model and is trained using the next-word prediction task on a web-scale dataset of grounded image-text pairs [GRIT](https://huggingface.co/datasets/zzliang/GRIT). The spatial coordinates of
157_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
dataset of grounded image-text pairs [GRIT](https://huggingface.co/datasets/zzliang/GRIT). The spatial coordinates of the bounding boxes in the dataset are converted to a sequence of location tokens, which are appended to their respective entity text spans (for example, `a snowman` followed by `<patch_index_0044><patch_index_0863>`). The data format is similar to β€œhyperlinks” that connect the object regions in an image to their text span in the corresponding caption.
157_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
similar to β€œhyperlinks” that connect the object regions in an image to their text span in the corresponding caption. The abstract from the paper is the following:
157_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
*We introduce Kosmos-2, a Multimodal Large Language Model (MLLM), enabling new capabilities of perceiving object descriptions (e.g., bounding boxes) and grounding text to the visual world. Specifically, we represent refer expressions as links in Markdown, i.e., ``[text span](bounding boxes)'', where object descriptions are sequences of location tokens. Together with multimodal corpora, we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing
157_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
we construct large-scale data of grounded image-text pairs (called GrIT) to train the model. In addition to the existing capabilities of MLLMs (e.g., perceiving general modalities, following instructions, and performing in-context learning), Kosmos-2 integrates the grounding capability into downstream applications. We evaluate Kosmos-2 on a wide range of tasks, including (i) multimodal grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring
157_1_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
grounding, such as referring expression comprehension, and phrase grounding, (ii) multimodal referring, such as referring expression generation, (iii) perception-language tasks, and (iv) language understanding and generation. This work lays out the foundation for the development of Embodiment AI and sheds light on the big convergence of language, multimodal perception, action, and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at
157_1_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
and world modeling, which is a key step toward artificial general intelligence. Code and pretrained models are available at https://aka.ms/kosmos-2.*
157_1_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#overview
.md
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/kosmos_2_overview.jpg" alt="drawing" width="600"/> <small> Overview of tasks that KOSMOS-2 can handle. Taken from the <a href="https://arxiv.org/abs/2306.14824">original paper</a>. </small>
157_1_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#example
.md
```python
>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

>>> model = Kosmos2ForConditionalGeneration.from_pretrained("microsoft/kosmos-2-patch14-224")
>>> processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224")

>>> url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> prompt = "<grounding> An image of"
157_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#example
.md
>>> prompt = "<grounding> An image of"
>>> inputs = processor(text=prompt, images=image, return_tensors="pt")
157_2_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#example
.md
>>> generated_ids = model.generate(
...     pixel_values=inputs["pixel_values"],
...     input_ids=inputs["input_ids"],
...     attention_mask=inputs["attention_mask"],
...     image_embeds=None,
...     image_embeds_position_mask=inputs["image_embeds_position_mask"],
...     use_cache=True,
...     max_new_tokens=64,
... )
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
157_2_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#example
.md
...     max_new_tokens=64,
... )
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

>>> processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False)
>>> processed_text
'<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>.'
157_2_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#example
.md
>>> caption, entities = processor.post_process_generation(generated_text)
>>> caption
'An image of a snowman warming himself by a fire.'
>>> entities
[('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]
```

This model was contributed by [Yih-Dar SHIEH](https://huggingface.co/ydshieh). The original code can be found [here](https://github.com/microsoft/unilm/tree/master/kosmos-2).
157_2_4
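The normalized boxes in `entities` can be recovered from the `<patch_index_NNNN>` tokens by hand. The sketch below is a hedged illustration: it assumes a 32×32 location grid (1024 tokens, matching the default `num_patch_index_tokens`) with each token mapped to the center of its grid cell.

```python
def patch_index_to_point(index, grid_size=32):
    """Map a <patch_index_NNNN> value to a normalized (x, y) cell center.

    Assumes row-major ordering over a grid_size x grid_size location grid.
    Hypothetical helper, not part of the transformers API.
    """
    row, col = divmod(index, grid_size)
    return (col + 0.5) / grid_size, (row + 0.5) / grid_size

# The snowman span <patch_index_0044><patch_index_0863> decodes to the
# top-left and bottom-right corners reported in `entities` above:
x0, y0 = patch_index_to_point(44)   # (0.390625, 0.046875)
x1, y1 = patch_index_to_point(863)  # (0.984375, 0.828125)
```

Multiplying these normalized coordinates by the image width and height gives pixel-space boxes for drawing.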
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2config
.md
This is the configuration class to store the configuration of a [`Kosmos2Model`]. It is used to instantiate a KOSMOS-2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the KOSMOS-2 [microsoft/kosmos-2-patch14-224](https://huggingface.co/microsoft/kosmos-2-patch14-224) architecture. Args: text_config (`dict`, *optional*):
157_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2config
.md
Args: text_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Kosmos2TextConfig`]. vision_config (`dict`, *optional*): Dictionary of configuration options used to initialize [`Kosmos2VisionConfig`]. latent_query_num (`int`, *optional*, defaults to 64): The number of latent query tokens that represent the image features used in the text decoder component. kwargs (*optional*): Dictionary of keyword arguments. Example: ```python
157_3_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2config
.md
kwargs (*optional*): Dictionary of keyword arguments. Example:

```python
>>> from transformers import Kosmos2Config, Kosmos2Model
157_3_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2config
.md
>>> # Initializing a Kosmos-2 kosmos-2-patch14-224 style configuration
>>> configuration = Kosmos2Config()

>>> # Initializing a model (with random weights) from the kosmos-2-patch14-224 style configuration
>>> model = Kosmos2Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```
157_3_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2processor
.md
Constructs a KOSMOS-2 processor which wraps a KOSMOS-2 image processor and a KOSMOS-2 tokenizer into a single processor. [`Kosmos2Processor`] offers all the functionalities of [`CLIPImageProcessor`] and some functionalities of [`XLMRobertaTokenizerFast`]. See the docstring of [`~Kosmos2Processor.__call__`] and [`~Kosmos2Processor.decode`] for more information. Args: image_processor (`CLIPImageProcessor`): An instance of [`CLIPImageProcessor`]. The image processor is a required input.
157_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2processor
.md
Args: image_processor (`CLIPImageProcessor`): An instance of [`CLIPImageProcessor`]. The image processor is a required input. tokenizer (`XLMRobertaTokenizerFast`): An instance of [`XLMRobertaTokenizerFast`]. The tokenizer is a required input. num_patch_index_tokens (`int`, *optional*, defaults to 1024): The number of tokens that represent patch indices. Methods: __call__
157_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2model
.md
KOSMOS-2 Model for generating text and image features. The model consists of a vision encoder and a language model. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
157_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2model
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Kosmos2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
157_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2model
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
157_5_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2forconditionalgeneration
.md
KOSMOS-2 Model for generating text and bounding boxes given an image. The model consists of a vision encoder and a language model. This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
157_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2forconditionalgeneration
.md
etc.) This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. Parameters: config ([`Kosmos2Config`]): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the
157_6_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/kosmos-2.md
https://huggingface.co/docs/transformers/en/model_doc/kosmos-2/#kosmos2forconditionalgeneration
.md
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. Methods: forward
157_6_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/
.md
<!--Copyright 2021 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
158_0_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/
.md
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer. -->
158_0_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#overview
.md
The Wav2Vec2 model was proposed in [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations](https://arxiv.org/abs/2006.11477) by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli. The abstract from the paper is the following: *We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks
158_1_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#overview
.md
transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state
158_1_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#overview
.md
clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.* This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
158_1_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#overview
.md
This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).
158_1_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#usage-tips
.md
- Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. - The Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be decoded using [`Wav2Vec2CTCTokenizer`].
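CTC decoding collapses repeated predictions and removes blank tokens; [`Wav2Vec2CTCTokenizer`] handles this internally. A minimal greedy sketch of the idea (pure Python, illustrative only, not the transformers implementation):

```python
def ctc_greedy_decode(token_ids, blank_id=0):
    """Collapse consecutive repeats, then drop CTC blank tokens."""
    decoded = []
    prev = None
    for token in token_ids:
        if token != prev:          # collapse consecutive repeats
            if token != blank_id:  # drop the CTC blank token
                decoded.append(token)
        prev = token
    return decoded
```

Note that a repeated token separated by a blank is kept twice (e.g. "ll" in "hello"), while an unbroken run collapses to one occurrence.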
158_2_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#using-flash-attention-2
.md
Flash Attention 2 is a faster, optimized implementation of the attention computation used by the model.
158_3_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#installation
.md
First, check whether your hardware is compatible with Flash Attention 2. The latest list of compatible hardware can be found in the [official documentation](https://github.com/Dao-AILab/flash-attention#installation-and-features). If your hardware is not compatible with Flash Attention 2, you can still benefit from attention kernel optimisations through Better Transformer support covered [above](https://huggingface.co/docs/transformers/main/en/model_doc/bark#using-better-transformer).
158_4_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#installation
.md
Next, [install](https://github.com/Dao-AILab/flash-attention#installation-and-features) the latest version of Flash Attention 2: ```bash pip install -U flash-attn --no-build-isolation ```
158_4_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#usage
.md
To load a model using Flash Attention 2, we can pass the argument `attn_implementation="flash_attention_2"` to [`.from_pretrained`](https://huggingface.co/docs/transformers/main/en/main_classes/model#transformers.PreTrainedModel.from_pretrained). We'll also load the model in half-precision (e.g. `torch.float16`), since it results in almost no degradation to audio quality but significantly lower memory usage and faster inference: ```python >>> import torch >>> from transformers import Wav2Vec2Model
158_5_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#usage
.md
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-960h-lv60-self", torch_dtype=torch.float16, attn_implementation="flash_attention_2").to(device) ... ```
158_5_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#expected-speedups
.md
Below is an expected speedup diagram comparing the pure inference time between the native implementation in transformers of the `facebook/wav2vec2-large-960h-lv60-self` model and the flash-attention-2 and SDPA (scaled dot-product attention) versions. We show the average speedup obtained on the `librispeech_asr` `clean` validation split: <div style="text-align: center"> <img src="https://huggingface.co/datasets/kamilakesbi/transformers_image_doc/resolve/main/data/Wav2Vec2_speedup.png"> </div>
158_6_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Wav2Vec2. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. <PipelineTag pipeline="audio-classification"/>
158_7_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
<PipelineTag pipeline="audio-classification"/> - A notebook on how to [leverage a pretrained Wav2Vec2 model for emotion classification](https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb). 🌎
158_7_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
- [`Wav2Vec2ForCTC`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb). - [Audio classification task guide](../tasks/audio_classification) <PipelineTag pipeline="automatic-speech-recognition"/>
158_7_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
- [Audio classification task guide](../tasks/audio_classification) <PipelineTag pipeline="automatic-speech-recognition"/> - A blog post on [boosting Wav2Vec2 with n-grams in πŸ€— Transformers](https://huggingface.co/blog/wav2vec2-with-ngram). - A blog post on how to [finetune Wav2Vec2 for English ASR with πŸ€— Transformers](https://huggingface.co/blog/fine-tune-wav2vec2-english).
158_7_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
- A blog post on [finetuning XLS-R for Multi-Lingual ASR with πŸ€— Transformers](https://huggingface.co/blog/fine-tune-xlsr-wav2vec2). - A notebook on how to [create YouTube captions from any video by transcribing audio with Wav2Vec2](https://colab.research.google.com/github/Muennighoff/ytclipcc/blob/main/wav2vec_youtube_captions.ipynb). 🌎
158_7_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
- [`Wav2Vec2ForCTC`] is supported by a notebook on [how to finetune a speech recognition model in English](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/speech_recognition.ipynb), and [how to finetune a speech recognition model in any language](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multi_lingual_speech_recognition.ipynb). - [Automatic speech recognition task guide](../tasks/asr) πŸš€ Deploy
158_7_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#resources
.md
- [Automatic speech recognition task guide](../tasks/asr) πŸš€ Deploy - A blog post on how to deploy Wav2Vec2 for [Automatic Speech Recognition with Hugging Face's Transformers & Amazon SageMaker](https://www.philschmid.de/automatic-speech-recognition-sagemaker).
158_7_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
This is the configuration class to store the configuration of a [`Wav2Vec2Model`]. It is used to instantiate a Wav2Vec2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Wav2Vec2 [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) architecture.
158_8_0
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
[facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) architecture. Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from [`PretrainedConfig`] for more information. Args: vocab_size (`int`, *optional*, defaults to 32): Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by
158_8_1
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling [`Wav2Vec2Model`] or [`TFWav2Vec2Model`]. hidden_size (`int`, *optional*, defaults to 768): Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12):
158_8_2
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (`int`, *optional*, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (`int`, *optional*, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (`int`, *optional*, defaults to 3072): Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
158_8_3
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. hidden_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
158_8_4
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for activations inside the fully connected layer. attention_dropout (`float`, *optional*, defaults to 0.1): The dropout ratio for the attention probabilities. final_dropout (`float`, *optional*, defaults to 0.1): The dropout probability for the final projection layer of [`Wav2Vec2ForCTC`].
158_8_5
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
The dropout probability for the final projection layer of [`Wav2Vec2ForCTC`]. layerdrop (`float`, *optional*, defaults to 0.1): The LayerDrop probability. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. initializer_range (`float`, *optional*, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers.
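LayerDrop simply skips whole encoder layers at random during training, acting as a structured regularizer. A minimal pure-Python sketch of the idea (the `layerdrop_forward` helper is hypothetical, not a transformers API):

```python
import random

def layerdrop_forward(x, layers, p, training=True, rng=None):
    """Apply `layers` in sequence, dropping each with probability `p` at train time."""
    rng = rng or random.Random()
    for layer in layers:
        if training and rng.random() < p:
            continue  # LayerDrop: skip this layer entirely
        x = layer(x)
    return x
```

At inference time (`training=False`) every layer runs, so outputs are deterministic; with `p=1.0` at train time, every layer is skipped.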
158_8_6
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
layer_norm_eps (`float`, *optional*, defaults to 1e-12): The epsilon used by the layer normalization layers. feat_extract_norm (`str`, *optional*, defaults to `"group"`): The norm to be applied to 1D convolutional layers in feature encoder. One of `"group"` for group normalization of only the first 1D convolutional layer or `"layer"` for layer normalization of all 1D convolutional layers. feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for output of the feature encoder.
158_8_7
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
feat_proj_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for output of the feature encoder. feat_extract_activation (`str`, *optional*, defaults to `"gelu"`): The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported. feat_quantizer_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for quantized feature encoder states.
158_8_8
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
feat_quantizer_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for quantized feature encoder states. conv_dim (`Tuple[int]` or `List[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`): A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers.
158_8_9
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
feature encoder. The length of *conv_dim* defines the number of 1D convolutional layers. conv_stride (`Tuple[int]` or `List[int]`, *optional*, defaults to `(5, 2, 2, 2, 2, 2, 2)`): A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of *conv_stride* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`):
158_8_10
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
conv_kernel (`Tuple[int]` or `List[int]`, *optional*, defaults to `(10, 3, 3, 3, 3, 2, 2)`): A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of *conv_kernel* defines the number of convolutional layers and has to match the length of *conv_dim*. conv_bias (`bool`, *optional*, defaults to `False`): Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128):
158_8_11
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (`int`, *optional*, defaults to 128): Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (`int`, *optional*, defaults to 16): Number of groups of 1D convolutional positional embeddings layer. do_stable_layer_norm (`bool`, *optional*, defaults to `False`):
158_8_12
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
do_stable_layer_norm (`bool`, *optional*, defaults to `False`): Whether to apply *stable* layer norm architecture of the Transformer encoder. `do_stable_layer_norm is True` corresponds to applying layer norm before the attention layer, whereas `do_stable_layer_norm is False` corresponds to applying layer norm after the attention layer. apply_spec_augment (`bool`, *optional*, defaults to `True`): Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see
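The difference between the two `do_stable_layer_norm` settings is the position of layer norm relative to each sublayer (pre-LN vs. post-LN). A toy sketch with plain functions (illustrative only; the real sublayers are `nn.Module`s):

```python
def encoder_block(x, attn, ffn, norm, do_stable_layer_norm=False):
    """One Transformer block with pre-LN (stable) or post-LN ordering."""
    if do_stable_layer_norm:
        # pre-LN: normalize *before* each sublayer, residual stays unnormalized
        x = x + attn(norm(x))
        x = x + ffn(norm(x))
    else:
        # post-LN: normalize *after* each residual addition
        x = norm(x + attn(x))
        x = norm(x + ffn(x))
    return x
```

With scalar stand-ins one can see that only the post-LN variant rescales the residual stream itself.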
158_8_13
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
Whether to apply *SpecAugment* data augmentation to the outputs of the feature encoder. For reference see [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779). mask_time_prob (`float`, *optional*, defaults to 0.05): Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If
158_8_14
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
procedure generates `mask_time_prob*len(time_axis)/mask_time_length` independent masks over the axis. If reasoning from the probability of each feature vector being chosen as the start of the vector span to be masked, *mask_time_prob* should be `prob_vector_start*mask_time_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis.
158_8_15
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
mask_time_length (`int`, *optional*, defaults to 10): Length of vector span along the time axis. mask_time_min_masks (`int`, *optional*, defaults to 2): The minimum number of masks of length `mask_time_length` generated along the time axis, each time step, irrespective of `mask_time_prob`. Only relevant if `mask_time_prob*len(time_axis)/mask_time_length < mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0):
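The number of time-axis mask spans follows directly from these parameters: roughly `mask_time_prob * sequence_length / mask_time_length`, floored, but never below `mask_time_min_masks`. A sketch of that arithmetic (the helper name is hypothetical, not a transformers API):

```python
def num_masked_spans(seq_len, mask_prob=0.05, mask_length=10, min_masks=2):
    """Approximate number of independent SpecAugment mask spans along an axis."""
    num_spans = int(mask_prob * seq_len / mask_length)
    return max(num_spans, min_masks)
```

For a 1000-frame sequence the defaults give 5 spans (about 50 masked frames); very short sequences are clamped to `min_masks`.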
158_8_16
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
mask_time_min_masks`. mask_feature_prob (`float`, *optional*, defaults to 0.0): Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates `mask_feature_prob*len(feature_axis)/mask_feature_length` independent masks over the axis. If reasoning from the probability of each feature vector being chosen as the start of the vector span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap
158_8_17
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
span to be masked, *mask_feature_prob* should be `prob_vector_start*mask_feature_length`. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if `apply_spec_augment is True`. mask_feature_length (`int`, *optional*, defaults to 10): Length of vector span along the feature axis. mask_feature_min_masks (`int`, *optional*, defaults to 0): The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time
158_8_18
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
The minimum number of masks of length `mask_feature_length` generated along the feature axis, each time step, irrespective of `mask_feature_prob`. Only relevant if `mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks`. num_codevectors_per_group (`int`, *optional*, defaults to 320): Number of entries in each quantization codebook (group). num_codevector_groups (`int`, *optional*, defaults to 2): Number of codevector groups for product codevector quantization.
158_8_19
/Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/model_doc/wav2vec2.md
https://huggingface.co/docs/transformers/en/model_doc/wav2vec2/#wav2vec2config
.md
num_codevector_groups (`int`, *optional*, defaults to 2): Number of codevector groups for product codevector quantization. contrastive_logits_temperature (`float`, *optional*, defaults to 0.1): The temperature *kappa* in the contrastive loss. feat_quantizer_dropout (`float`, *optional*, defaults to 0.0): The dropout probability for the output of the feature encoder that's used by the quantizer. num_negatives (`int`, *optional*, defaults to 100): Number of negative samples for the contrastive loss.
158_8_20