| code | docstring | func_name | language | repo | path | url | license |
|---|---|---|---|---|---|---|---|
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
noise: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate the pre-trained position encodings. This is mainly useful for running the model on
higher-resolution images.
noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
... | forward | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor) -> torch.Tensor:
"""
This method is a modified version of the decoder interpolation function of the ViT-MAE model; it interpolates
the pre-trained decoder position encodings so that the model can be used on higher
... |
This method is a modified version of the decoder interpolation function of the ViT-MAE model; it interpolates
the pre-trained decoder position encodings so that the model can be used on higher-resolution images.
Adapted from:
https://github.com/facebookresearch/di... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def patchify(self, pixel_values, interpolate_pos_encoding: bool = False):
"""
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Pixel values.
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
... |
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Pixel values.
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Interpolation flag passed during the forward pass.
Returns:
`... | patchify | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def unpatchify(self, patchified_pixel_values, original_image_size: Optional[Tuple[int, int]] = None):
"""
Args:
patchified_pixel_values (`torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Patchified pixel values.
original_image... |
Args:
patchified_pixel_values (`torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Patchified pixel values.
original_image_size (`Tuple[int, int]`, *optional*):
Original image size.
Returns:
`torch.... | unpatchify | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def forward_loss(self, pixel_values, pred, mask, interpolate_pos_encoding: bool = False):
"""
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Pixel values.
pred (`torch.FloatTensor` of shape `(batch_size, num_patches,... |
Args:
pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`):
Pixel values.
pred (`torch.FloatTensor` of shape `(batch_size, num_patches, patch_size**2 * num_channels)`):
Predicted pixel values.
mask (`torch.Flo... | forward_loss | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
noise: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate the pre-trained position encodings. This is mainly useful for running the model on
higher-resolution images.
noise (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*):
... | forward | python | huggingface/transformers | src/transformers/models/vit_mae/modeling_vit_mae.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_mae/modeling_vit_mae.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
... |
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vit_msn/modeling_vit_msn.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_msn/modeling_vit_msn.py | Apache-2.0 |
def __init__(self, config: ViTMSNConfig, use_mask_token: bool = False):
r"""
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a mask token for masked image modeling.
"""
super().__init__(config)
self.config = config
self.embeddings = V... |
use_mask_token (`bool`, *optional*, defaults to `False`):
Whether to use a mask token for masked image modeling.
| __init__ | python | huggingface/transformers | src/transformers/models/vit_msn/modeling_vit_msn.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_msn/modeling_vit_msn.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
bool_masked_pos: Optional[torch.BoolTensor] = None,
head_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_enc... |
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, num_patches)`, *optional*):
Boolean masked positions. Indicates which patches are masked (1) and which aren't (0).
Examples:
```python
>>> from transformers import AutoImageProcessor, ViTMSNModel
>>> import... | forward | python | huggingface/transformers | src/transformers/models/vit_msn/modeling_vit_msn.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_msn/modeling_vit_msn.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.Tensor] = None,
head_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: Option... |
Examples:
```python
>>> from transformers import AutoImageProcessor, ViTMSNForImageClassification
>>> import torch
>>> from PIL import Image
>>> import requests
>>> torch.manual_seed(2) # doctest: +IGNORE_RESULT
>>> url = "http://images.cocodataset.or... | forward | python | huggingface/transformers | src/transformers/models/vit_msn/modeling_vit_msn.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vit_msn/modeling_vit_msn.py | Apache-2.0 |
def resize(
self,
image: np.ndarray,
size: Dict[str, int],
resample: PILImageResampling = PILImageResampling.BILINEAR,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
) -> ... |
Resize an image.
Args:
image (`np.ndarray`):
Image to resize.
size (`Dict[str, int]`):
Size of the output image. If `size` is of the form `{"height": h, "width": w}`, the output image will
have the size `(h, w)`. If `size` is of t... | resize | python | huggingface/transformers | src/transformers/models/vivit/image_processing_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/image_processing_vivit.py | Apache-2.0 |
def rescale(
self,
image: np.ndarray,
scale: Union[int, float],
offset: bool = True,
data_format: Optional[Union[str, ChannelDimension]] = None,
input_data_format: Optional[Union[str, ChannelDimension]] = None,
**kwargs,
):
"""
Rescale an image... |
Rescale an image by a scale factor.
If `offset` is `True`, the image values are rescaled by `scale` and then offset by -1:
    image = image * scale - 1
With `scale = 1/127.5`, the image is rescaled to the range [-1, 1].
If `offset` is `False`, and `scale` is 1/255, the image is ... | rescale | python | huggingface/transformers | src/transformers/models/vivit/image_processing_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/image_processing_vivit.py | Apache-2.0 |
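The offset rule above can be illustrated with a small sketch; `rescale_with_offset` is a hypothetical helper mirroring the docstring, not the processor's actual method:

```python
def rescale_with_offset(image, scale, offset=True):
    # Hypothetical helper: multiply pixel values by `scale`, then shift by -1
    # when `offset` is enabled, as described in the docstring above.
    rescaled = [v * scale for v in image]
    if offset:
        rescaled = [v - 1 for v in rescaled]
    return rescaled

# [0, 255] inputs with scale 1/127.5 and the offset land in [-1, 1];
# with scale 1/255 and no offset they land in [0, 1].
print(rescale_with_offset([0, 127.5, 255], 1 / 127.5, offset=True))   # ≈ [-1.0, 0.0, 1.0]
print(rescale_with_offset([0, 127.5, 255], 1 / 255, offset=False))    # ≈ [0.0, 0.5, 1.0]
```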
def preprocess(
self,
videos: ImageInput,
do_resize: Optional[bool] = None,
size: Optional[Dict[str, int]] = None,
resample: PILImageResampling = None,
do_center_crop: Optional[bool] = None,
crop_size: Optional[Dict[str, int]] = None,
do_rescale: Optional[... |
Preprocess an image or batch of images.
Args:
videos (`ImageInput`):
Video frames to preprocess. Expects a single or batch of video frames with pixel values ranging from 0
to 255. If passing in frames with pixel values between 0 and 1, set `do_rescale=False`... | preprocess | python | huggingface/transformers | src/transformers/models/vivit/image_processing_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/image_processing_vivit.py | Apache-2.0 |
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
... |
This method interpolates the pre-trained position encodings so that the model can be used on higher-resolution
images. It is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362... | interpolate_pos_encoding | python | huggingface/transformers | src/transformers/models/vivit/modeling_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/modeling_vivit.py | Apache-2.0 |
def __init__(self, config, add_pooling_layer=True):
r"""
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether to add a pooling layer.
"""
super().__init__(config)
self.config = config
self.embeddings = VivitEmbeddings(config)
self.encoder ... |
add_pooling_layer (`bool`, *optional*, defaults to `True`):
Whether to add a pooling layer.
| __init__ | python | huggingface/transformers | src/transformers/models/vivit/modeling_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/modeling_vivit.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_encoding: bool = False,
return_dict: Optional... |
Examples:
```python
>>> import av
>>> import numpy as np
>>> from transformers import VivitImageProcessor, VivitModel
>>> from huggingface_hub import hf_hub_download
>>> np.random.seed(0)
>>> def read_video_pyav(container, indices):
... '... | forward | python | huggingface/transformers | src/transformers/models/vivit/modeling_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/modeling_vivit.py | Apache-2.0 |
def forward(
self,
pixel_values: Optional[torch.FloatTensor] = None,
head_mask: Optional[torch.FloatTensor] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
interpolate_pos_en... |
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
config.num_labels - 1]`. If `config.num_labels == 1`, a regression loss is computed (mean-squared loss). If
`config.n... | forward | python | huggingface/transformers | src/transformers/models/vivit/modeling_vivit.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/vivit/modeling_vivit.py | Apache-2.0 |
def convert_wav2vec2_checkpoint(
checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True, is_seq_class=False
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if config_path is not None:
config = Wav2Vec2Config.from_pretrained(config_p... |
Copy/paste/tweak model's weights to transformers design.
| convert_wav2vec2_checkpoint | python | huggingface/transformers | src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def convert_s3prl_checkpoint(base_model_name, config_path, checkpoint_path, model_dump_path):
"""
Copy/paste/tweak model's weights to transformers design.
"""
checkpoint = torch.load(checkpoint_path, map_location="cpu", weights_only=True)
downstream_dict = checkpoint["Downstream"]
hf_config = ... |
Copy/paste/tweak model's weights to transformers design.
| convert_s3prl_checkpoint | python | huggingface/transformers | src/transformers/models/wav2vec2/convert_wav2vec2_original_s3prl_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_s3prl_checkpoint_to_pytorch.py | Apache-2.0 |
def zero_mean_unit_var_norm(
input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
) -> List[np.ndarray]:
"""
Every array in the list is normalized to have zero mean and unit variance
"""
if attention_mask is not None:
attent... |
Every array in the list is normalized to have zero mean and unit variance
| zero_mean_unit_var_norm | python | huggingface/transformers | src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py | Apache-2.0 |
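The normalization described above can be sketched in plain Python; the `eps` guard against division by zero is an assumption mirroring common practice, not necessarily the library's exact constant:

```python
import math

def zero_mean_unit_var(values, eps=1e-7):
    # Normalize a sequence to zero mean and (approximately) unit variance.
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return [(v - mean) / math.sqrt(var + eps) for v in values]

normed = zero_mean_unit_var([1.0, 2.0, 3.0, 4.0])
print(round(sum(normed) / len(normed), 6))                 # 0.0
print(round(sum(v * v for v in normed) / len(normed), 3))  # 1.0
```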
def __call__(
self,
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
padding: Union[bool, str, PaddingStrategy] = False,
max_length: Optional[int] = None,
truncation: bool = False,
pad_to_multiple_of: Optional[int] = None,
return_at... |
Main method to featurize and prepare for the model one or several sequence(s).
Args:
raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float
... | __call__ | python | huggingface/transformers | src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[np.ndarray] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: t... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[jnp.ndarray, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
def... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self,
input_lengths: Union[jnp.ndarray, int],
add_adapter: Optional[bool] = None,
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_ada... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[jnp.ndarray, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
def... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py | Apache-2.0 |
def _sample_without_replacement(distribution, num_samples):
"""
Categorical sampling without replacement is currently not implemented; the Gumbel-max trick is used instead - see
https://github.com/tensorflow/tensorflow/issues/9260 for more info
"""
z = -tf.math.log(tf.random.uniform(shape_list(distr... |
Categorical sampling without replacement is currently not implemented; the Gumbel-max trick is used instead - see
https://github.com/tensorflow/tensorflow/issues/9260 for more info
| _sample_without_replacement | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
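The Gumbel-max trick referenced above can be sketched in plain Python: perturb each logit with independent Gumbel noise and keep the top-k indices, which is equivalent in distribution to categorical sampling without replacement. This is an illustrative stand-in, not the TensorFlow implementation:

```python
import math
import random

def sample_without_replacement(logits, num_samples, seed=None):
    # Add standard Gumbel noise -log(-log(U)), U ~ Uniform(0, 1), to each
    # logit and take the indices of the num_samples largest perturbed values.
    rng = random.Random(seed)
    perturbed = [g - math.log(-math.log(max(rng.random(), 1e-12))) for g in logits]
    order = sorted(range(len(logits)), key=lambda i: perturbed[i], reverse=True)
    return order[:num_samples]

logits = [math.log(p) for p in (0.1, 0.2, 0.3, 0.4)]
picked = sample_without_replacement(logits, num_samples=2, seed=0)
print(picked)  # two distinct indices drawn from {0, 1, 2, 3}
```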
def _scatter_values_on_batch_indices(values, batch_indices, output_shape):
"""
Scatter function as in PyTorch with indices in format (batch_dim, indices)
"""
indices_shape = shape_list(batch_indices)
# broadcast batch dim to indices_shape
broad_casted_batch_dims = tf.reshape(
tf.broadcas... |
Scatter function as in PyTorch with indices in format (batch_dim, indices)
| _scatter_values_on_batch_indices | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
min_masks: int = 0,
) -> tf.Tensor:
"""
Computes random mask spans for a given shape
Args:
shape: the shape for which to compute masks.
should be of size 2 where first element is batch... |
Computes random mask spans for a given shape
Args:
shape: the shape for which to compute masks; should be of size 2, where the first element is the
batch size and the second is the number of timesteps
attention_mask: optional padding mask of the same size as `shape`, which prevents masking padded elements
... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def _init_norm(self):
"""Set the norm of the weight vector."""
kernel_norm = tf.sqrt(tf.reduce_sum(tf.square(self.weight_v), axis=self.kernel_norm_axes))
self.weight_g.assign(kernel_norm[:, tf.newaxis, tf.newaxis]) | Set the norm of the weight vector. | _init_norm | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
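`_init_norm` initializes the weight-norm magnitude `g` to the current norm of the direction tensor `v`, so that the reparametrized kernel `g * v / ||v||` reproduces the original one. A NumPy sketch with illustrative shapes:

```python
import numpy as np

# Weight normalization splits a kernel into a direction tensor v and a scalar
# magnitude g per output channel, with kernel = g * v / ||v||. _init_norm sets
# g to the current norm of v so the reparametrized kernel equals the original.
rng = np.random.default_rng(0)
weight_v = rng.normal(size=(8, 3, 4))  # (out_channels, width, in_channels); shapes are illustrative
kernel_norm_axes = (1, 2)              # norm over everything except the output axis

weight_g = np.sqrt(np.square(weight_v).sum(axis=kernel_norm_axes))  # shape (8,)
norms = np.linalg.norm(weight_v.reshape(8, -1), axis=1)             # same values as weight_g
reconstructed = weight_g[:, None, None] * weight_v / norms[:, None, None]
print(np.allclose(reconstructed, weight_v))  # True
```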
def _get_feat_extract_output_lengths(self, input_lengths: tf.Tensor):
"""
Computes the output length of the convolutional layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
# from https://pytor... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
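The per-layer computation uses the standard 1-D convolution output-length formula (no padding, dilation 1), chained across the feature encoder. The kernel/stride values below are the wav2vec2 base defaults, used here only for illustration:

```python
def conv_out_length(input_length, kernel_size, stride):
    # floor((L - kernel_size) / stride) + 1: the 1-D convolution output
    # length with no padding and dilation 1.
    return (input_length - kernel_size) // stride + 1

# Default wav2vec2 feature-encoder kernels and strides (illustrative).
layers = [(10, 5), (3, 2), (3, 2), (3, 2), (3, 2), (2, 2), (2, 2)]
length = 16000  # one second of 16 kHz audio
for kernel, stride in layers:
    length = conv_out_length(length, kernel, stride)
print(length)  # 49 frames, i.e. roughly one frame every 20 ms
```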
def _mask_hidden_states(self, hidden_states: tf.Tensor, mask_time_indices: tf.Tensor | None = None):
"""
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
"""
batch_size, sequence_length, hidden_size =... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def _get_feat_extract_output_lengths(self, input_lengths, add_adapter=None):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
def _conv_out_length(input_length, kernel_size, stride):
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def call(
self,
input_values: tf.Tensor,
attention_mask: tf.Tensor | None = None,
token_type_ids: tf.Tensor | None = None,
position_ids: tf.Tensor | None = None,
head_mask: tf.Tensor | None = None,
inputs_embeds: tf.Tensor | None = None,
output_attentions:... |
Returns:
Example:
```python
>>> from transformers import AutoProcessor, TFWav2Vec2Model
>>> from datasets import load_dataset
>>> import soundfile as sf
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = TFWav2Vec... | call | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def call(
self,
input_values: tf.Tensor,
attention_mask: tf.Tensor | None = None,
token_type_ids: tf.Tensor | None = None,
position_ids: tf.Tensor | None = None,
head_mask: tf.Tensor | None = None,
inputs_embeds: tf.Tensor | None = None,
output_attentions:... |
labels (`tf.Tensor` or `np.ndarray` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
config.vocab_size]` (see `input_values` docstring). Tokens with indices set to `-100` are ignored (maske... | call | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for layer in self.wav2vec2.layers:
layer.train... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
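The span-count rule shown in `compute_num_masked_span` can be sketched as follows; the final cap is a simplification of the bounds checks in the real implementation:

```python
def num_masked_spans(input_length, mask_prob, mask_length, min_masks=0, epsilon=1e-6):
    # Number of masked spans so that roughly mask_prob of the timesteps
    # fall inside a span of mask_length.
    count = int(mask_prob * input_length / mask_length + epsilon)
    count = max(count, min_masks)
    # A span must fit entirely inside the sequence (simplified cap, assumption).
    return min(count, input_length // mask_length)

print(num_masked_spans(100, mask_prob=0.5, mask_length=10))                # 5
print(num_masked_spans(100, mask_prob=0.05, mask_length=10, min_masks=2))  # 2
```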
def __init__(self, config):
"""
Implements adapter modules directly with a 3D tensor weight as parameters, without using a ModuleList, to
improve training throughput.
"""
super().__init__()
self.input_dim = config.adapter_attn_dim
self.hidden_dim = config.hidden_si... |
Implements adapter modules directly with a 3D tensor weight as parameters, without using a ModuleList, to
improve training throughput.
| __init__ | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
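The motivation in the docstring above (one 3D weight tensor instead of a `ModuleList` of small layers) can be sketched with NumPy: a single batched matmul replaces a Python loop over per-layer projections. Shapes here are illustrative, not the model's actual dimensions:

```python
import numpy as np

num_layers, seq, hidden, bottleneck = 4, 3, 16, 8
rng = np.random.default_rng(0)
# All per-layer down-projection weights stacked into one 3D tensor.
stacked_w = rng.normal(size=(num_layers, hidden, bottleneck))
hidden_states = rng.normal(size=(num_layers, seq, hidden))

# One einsum applies every layer's projection at once...
batched = np.einsum("lsh,lhb->lsb", hidden_states, stacked_w)
# ...instead of looping over num_layers separate weight matrices.
looped = np.stack([hidden_states[l] @ stacked_w[l] for l in range(num_layers)])
print(batched.shape)                 # (4, 3, 8)
print(np.allclose(batched, looped))  # True
```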
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def init_adapter_layers(self):
"""
(Re-)initialize attention adapter layers and lm head for adapter-only fine-tuning
"""
# init attention adapters
for module in self.modules():
if isinstance(module, Wav2Vec2AttnAdapterLayer):
self._init_weights(module)... |
(Re-)initialize attention adapter layers and lm head for adapter-only fine-tuning
| init_adapter_layers | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def compute_contrastive_logits(
target_features: torch.FloatTensor,
negative_features: torch.FloatTensor,
predicted_features: torch.FloatTensor,
temperature: int = 0.1,
):
"""
    Compute logits for contrastive loss using cosine similarity as the distance measure be... |
Compute logits for contrastive loss using cosine similarity as the distance measure between
`[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
| compute_contrastive_logits | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
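`compute_contrastive_logits` above scores a predicted feature against the true quantized target and a set of negatives using cosine similarity, scaled by a temperature; the logit at index 0 belongs to the positive. A toy re-implementation on plain lists (function names are mine, not the library's API):

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def contrastive_logits(predicted, positive, negatives, temperature=0.1):
    # candidate 0 is the true target; the rest are distractors
    candidates = [positive] + list(negatives)
    return [cosine_sim(predicted, c) / temperature for c in candidates]
```

A cross-entropy loss with target class 0 over these logits gives the InfoNCE-style contrastive objective used in wav2vec 2.0 pre-training.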
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.BoolTensor] = None,
sampled_negative_indices: Optional[torch.BoolTensor] = None,
output_attentions: Optional[bool] = None,
out... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
sampled_negative_... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`Wav2Vec2ForCTC... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`Wav2Vec2ForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def tie_weights(self):
"""
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.... |
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.
| tie_weights | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2.parameters():
param... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):
Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2.parameters():
param... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2.parameters():
param... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
        Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2.parameters():
param... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the TDNN layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
# from https://pyt... |
Computes the output length of the TDNN layers
| _get_tdnn_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
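`_get_tdnn_output_lengths` (like the `_get_feat_extract_output_lengths` records elsewhere in this file) folds the standard 1D-convolution output-length formula over each layer. A standalone sketch; the kernel/stride stack in the usage note reflects the wav2vec2 feature-encoder defaults as I understand them:

```python
def conv_out_length(input_length, kernel_size, stride):
    # 1D convolution output length (no padding, dilation 1):
    # floor((input_length - kernel_size) / stride) + 1
    return (input_length - kernel_size) // stride + 1

def feat_extract_output_length(input_length, kernel_sizes, strides):
    # fold the formula over every conv layer in the stack
    for kernel_size, stride in zip(kernel_sizes, strides):
        input_length = conv_out_length(input_length, kernel_size, stride)
    return input_length
```

With the default stack (kernels 10,3,3,3,3,2,2 and strides 5,2,2,2,2,2,2), one second of 16 kHz audio (16000 samples) comes out to 49 frames, i.e. roughly one frame every 20 ms.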
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2/modeling_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/modeling_wav2vec2.py | Apache-2.0 |
def __call__(
self,
audio: AudioInput = None,
text: Optional[Union[str, List[str], TextInput, PreTokenizedInput]] = None,
images=None,
videos=None,
**kwargs: Unpack[Wav2Vec2ProcessorKwargs],
):
"""
This method forwards all its arguments to Wav2Vec2Feat... |
This method forwards all its arguments to Wav2Vec2FeatureExtractor's
[`~Wav2Vec2FeatureExtractor.__call__`] and returns its output.
| __call__ | python | huggingface/transformers | src/transformers/models/wav2vec2/processing_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/processing_wav2vec2.py | Apache-2.0 |
def pad(self, *args, **kwargs):
"""
This method forwards all its arguments to Wav2Vec2FeatureExtractor's
[`~Wav2Vec2FeatureExtractor.pad`] and returns its output.
"""
# For backward compatibility
if self._in_target_context_manager:
return self.current_processo... |
This method forwards all its arguments to Wav2Vec2FeatureExtractor's
[`~Wav2Vec2FeatureExtractor.pad`] and returns its output.
| pad | python | huggingface/transformers | src/transformers/models/wav2vec2/processing_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/processing_wav2vec2.py | Apache-2.0 |
def as_target_processor(self):
"""
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
Wav2Vec2.
"""
warnings.warn(
"`as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process you... |
Temporarily sets the tokenizer for processing the input. Useful for encoding the labels when fine-tuning
Wav2Vec2.
| as_target_processor | python | huggingface/transformers | src/transformers/models/wav2vec2/processing_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/processing_wav2vec2.py | Apache-2.0 |
def set_target_lang(self, target_lang: str):
"""
Set the target language of a nested multi-lingual dictionary
"""
if self.vocab == self.encoder:
raise ValueError(f"{self.vocab} is not a multi-lingual, nested tokenizer. Cannot set target language.")
if target_lang not... |
Set the target language of a nested multi-lingual dictionary
| set_target_lang | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def _tokenize(self, text, **kwargs):
"""
Converts a string into a sequence of tokens (string), using the tokenizer.
"""
if self.do_lower_case:
text = text.upper()
return list(text.replace(" ", self.word_delimiter_token)) |
Converts a string into a sequence of tokens (string), using the tokenizer.
| _tokenize | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
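Note that `_tokenize` above upper-cases the text even though the controlling flag is named `do_lower_case` (the CTC character vocabulary is upper-case) and encodes spaces as the word-delimiter token. A self-contained sketch (`ctc_tokenize` is an illustrative name, not the library API):

```python
def ctc_tokenize(text, word_delimiter_token="|", do_upper=True):
    # match the upper-case character vocabulary, then mark word boundaries
    if do_upper:
        text = text.upper()
    return list(text.replace(" ", word_delimiter_token))
```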
def convert_tokens_to_string(
self,
tokens: List[str],
group_tokens: bool = True,
spaces_between_special_tokens: bool = False,
output_char_offsets: bool = False,
output_word_offsets: bool = False,
) -> Dict[str, Union[str, float]]:
"""
Converts a conne... |
Converts connectionist temporal classification (CTC) output tokens into a single string.
| convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def _decode(
self,
token_ids: List[int],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
group_tokens: bool = True,
spaces_between_special_tokens: bool = False,
output_word_offsets: Optional[bool] = False,
output_cha... |
A special `_decode` function is needed for Wav2Vec2Tokenizer because added tokens should be treated exactly the
same as tokens of the base vocabulary; therefore, `convert_tokens_to_string` has to be called on
the whole token list and not individually on added tokens.
| _decode | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def batch_decode(
self,
sequences: Union[List[int], List[List[int]], "np.ndarray", "torch.Tensor", "tf.Tensor"],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
output_char_offsets: bool = False,
output_word_offsets: bool = False,
... |
Convert a list of lists of token ids into a list of strings by calling decode.
Args:
sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_toke... | batch_decode | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def word_delimiter_token(self) -> str:
"""
        `str`: Word delimiter token. Logs an error if used while not having been set.
"""
if self._word_delimiter_token is None and self.verbose:
logger.error("Using word_delimiter_token, but it is not set yet.")
return None
retu... |
`str`: Word delimiter token. Logs an error if used while not having been set.
| word_delimiter_token | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def __call__(
self,
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
padding: Union[bool, str, PaddingStrategy] = False,
max_length: Optional[int] = None,
pad_to_multiple_of: Optional[int] = None,
padding_side: Optional[str] = None,
... |
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of
sequences.
Args:
raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
The sequence or batch of sequences to be padded. Each sequen... | __call__ | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def convert_tokens_to_string(self, tokens: List[str]) -> str:
"""
        Converts connectionist temporal classification (CTC) output tokens into a single string.
"""
# group same tokens into non-repeating tokens in CTC style decoding
grouped_tokens = [token_group[0] for token_group in... |
Converts connectionist temporal classification (CTC) output tokens into a single string.
| convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
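`convert_tokens_to_string` decodes CTC output in three steps: collapse runs of repeated tokens, drop the pad (blank) token, and turn word delimiters back into spaces. A minimal sketch of that pipeline (helper name and default tokens are mine):

```python
from itertools import groupby

def ctc_collapse(tokens, pad_token="<pad>", word_delimiter_token="|"):
    deduped = [key for key, _ in groupby(tokens)]      # merge consecutive repeats
    filtered = [t for t in deduped if t != pad_token]  # drop CTC blanks
    return "".join(" " if t == word_delimiter_token else t for t in filtered).strip()
```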
def _decode(
self,
token_ids: List[int],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
**kwargs,
) -> str:
"""
special _decode function is needed for Wav2Vec2Tokenizer because added tokens should be treated exactly the... |
A special `_decode` function is needed for Wav2Vec2Tokenizer because added tokens should be treated exactly the
same as tokens of the base vocabulary; therefore, `convert_tokens_to_string` has to be called on
the whole token list and not individually on added tokens.
| _decode | python | huggingface/transformers | src/transformers/models/wav2vec2/tokenization_wav2vec2.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/tokenization_wav2vec2.py | Apache-2.0 |
def convert_wav2vec2_bert_checkpoint(
checkpoint_path,
pytorch_dump_folder_path,
config_path=None,
repo_id=None,
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if config_path is not None:
config = Wav2Vec2BertConfig.from_pretrained(config_path, hidden_act="sw... |
Copy/paste/tweak model's weights to transformers design.
| convert_wav2vec2_bert_checkpoint | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/convert_wav2vec2_seamless_checkpoint.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/convert_wav2vec2_seamless_checkpoint.py | Apache-2.0 |
def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor):
"""
        Computes an attention mask of the form `(batch, seq_len)` where attention for each element in the batch
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of s... |
Computes an attention mask of the form `(batch, seq_len)` where attention for each element in the batch
stops at the corresponding element in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`):
The sequences to mask, where `*` is any number of se... | _compute_new_attention_mask | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
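`_compute_new_attention_mask` builds a `(batch, seq_len)` mask that is 1 up to each sequence's true length and 0 afterwards; in torch this is typically a broadcasted comparison like `torch.arange(seq_len) < seq_lens.unsqueeze(1)`. A list-based sketch (function name is mine):

```python
def attention_mask_from_lengths(seq_lens, max_len=None):
    if max_len is None:
        max_len = max(seq_lens)
    # position i of row b is attended iff i < seq_lens[b]
    return [[1 if i < n else 0 for i in range(max_len)] for n in seq_lens]
```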
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
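`compute_num_masked_span` turns the masking probability into a span count: scale by sequence length, divide by span length, enforce the minimum, then cap so the spans fit in the sequence. The truncated line beginning `i...` hides the cap, so the version below reconstructs it from the preceding comment and should be read as an assumption, not the verbatim source:

```python
def compute_num_masked_span(input_length, mask_prob, mask_length, min_masks=0, epsilon=1e-7):
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # assumed cap: never request more masked frames than the sequence has
    if num_masked_span * mask_length > input_length:
        num_masked_span = input_length // mask_length
    return num_masked_span
```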
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Opt... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatFo... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
    adapter.<lang>.bin. Only relevant when using an instance of [`Wav2Vec2BertForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_bert.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_bert.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_bert.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the TDNN layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
# from https://pyt... |
Computes the output length of the TDNN layers
| _get_tdnn_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modeling_wav2vec2_bert.py | Apache-2.0 |
def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor):
"""
Computes an attention mask of the form `(batch, seq_len)` where, for each element in the batch, attention
stops at the corresponding length in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of s... |
Computes an attention mask of the form `(batch, seq_len)` where, for each element in the batch, attention
stops at the corresponding length in `seq_lens`.
Args:
hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`):
The sequences to mask, where `*` is any number of se... | _compute_new_attention_mask | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
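The mask described by `_compute_new_attention_mask` can be sketched in plain Python. The real implementation presumably broadcasts a `torch.arange` against `seq_lens`; this framework-free stand-in just shows the semantics:

```python
def compute_new_attention_mask(seq_len: int, seq_lens: list[int]) -> list[list[int]]:
    # For batch element i, position j attends (1) while j < seq_lens[i];
    # positions at or beyond seq_lens[i] are masked out (0).
    return [[1 if j < n else 0 for j in range(seq_len)] for n in seq_lens]
```

For example, `compute_new_attention_mask(4, [2, 4])` yields `[[1, 1, 0, 0], [1, 1, 1, 1]]`.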
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Opt... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_bert.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = No... |
input_features (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip ins... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/modular_wav2vec2_bert.py | Apache-2.0 |
def __call__(
self,
audio: AudioInput = None,
text: Optional[Union[str, List[str], TextInput, PreTokenizedInput]] = None,
images=None,
videos=None,
**kwargs: Unpack[Wav2Vec2BertProcessorKwargs],
):
"""
Main method to prepare for the model one or severa... |
Main method to prepare one or several sequence(s) and audio(s) for the model. This method forwards the `audio`
and `kwargs` arguments to SeamlessM4TFeatureExtractor's [`~SeamlessM4TFeatureExtractor.__call__`] if `audio` is not
`None` to pre-process the audio. To prepare the target sequence(s)... | __call__ | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/processing_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/processing_wav2vec2_bert.py | Apache-2.0 |
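The processor `__call__` described above is essentially a dispatcher: audio is forwarded to the feature extractor, text to the tokenizer, and the results are merged into one inputs dict. A hedged, framework-free sketch of that pattern (the class name and error message are illustrative, not the library's exact API):

```python
class SimpleAudioTextProcessor:
    # Hypothetical stand-in for the dispatch pattern: it only assumes the
    # feature extractor and tokenizer are callables that return dicts.
    def __init__(self, feature_extractor, tokenizer):
        self.feature_extractor = feature_extractor
        self.tokenizer = tokenizer

    def __call__(self, audio=None, text=None, **kwargs):
        if audio is None and text is None:
            raise ValueError("You need to specify either an `audio` or `text` input.")
        inputs = {}
        if audio is not None:
            # e.g. produces "input_features" in the real processor
            inputs.update(self.feature_extractor(audio, **kwargs))
        if text is not None:
            # e.g. produces "input_ids" / "attention_mask" in the real processor
            inputs.update(self.tokenizer(text, **kwargs))
        return inputs
```

Passing both `audio` and `text` returns the union of both components' outputs; passing neither raises immediately, mirroring the behavior the docstring implies.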