| code (string, 66–870k chars) | docstring (string, 19–26.7k chars) | func_name (string, 1–138 chars) | language (1 class) | repo (string, 7–68 chars) | path (string, 5–324 chars) | url (string, 46–389 chars) | license (7 classes) |
|---|---|---|---|---|---|---|---|
def pad(self, input_features=None, labels=None, **kwargs):
"""
If `input_features` is not `None`, this method forwards the `input_features` and `kwargs` arguments to SeamlessM4TFeatureExtractor's [`~SeamlessM4TFeatureExtractor.pad`] to pad the input features.
If `labels` is not `None`, this meth... |
If `input_features` is not `None`, this method forwards the `input_features` and `kwargs` arguments to SeamlessM4TFeatureExtractor's [`~SeamlessM4TFeatureExtractor.pad`] to pad the input features.
If `labels` is not `None`, this method forwards the `labels` and `kwargs` arguments to PreTrainedTokenizer... | pad | python | huggingface/transformers | src/transformers/models/wav2vec2_bert/processing_wav2vec2_bert.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_bert/processing_wav2vec2_bert.py | Apache-2.0 |
def convert_wav2vec2_conformer_checkpoint(
checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True
):
"""
Copy/paste/tweak model's weights to transformers design.
"""
if config_path is not None:
config = Wav2Vec2ConformerConfig.from_pretrained(config_pa... |
Copy/paste/tweak model's weights to transformers design.
| convert_wav2vec2_conformer_checkpoint | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/convert_wav2vec2_conformer_original_pytorch_checkpoint_to_pytorch.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
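The `_get_feat_extract_output_lengths` rows above fold the standard 1D-convolution output-length formula over each layer of the feature encoder. A minimal pure-Python sketch of that idea — the formula is the one from the PyTorch `Conv1d` documentation (no padding, dilation 1), and the default kernel/stride tuples are the usual wav2vec 2.0 feature-encoder values, used here only for illustration since the snippet itself is truncated:

```python
def conv_out_length(input_length, kernel_size, stride):
    # standard 1D convolution output-length formula (no padding, dilation 1),
    # as given in the PyTorch Conv1d documentation
    return (input_length - kernel_size) // stride + 1


def feat_extract_output_length(
    input_length,
    conv_kernel=(10, 3, 3, 3, 3, 2, 2),   # illustrative wav2vec 2.0 defaults
    conv_stride=(5, 2, 2, 2, 2, 2, 2),
):
    # fold the formula over every convolutional layer in turn
    for kernel_size, stride in zip(conv_kernel, conv_stride):
        input_length = conv_out_length(input_length, kernel_size, stride)
    return input_length
```

With these defaults, one second of 16 kHz audio (16000 samples) comes out to 49 frames, i.e. roughly one frame per 20 ms.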
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
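The truncated `compute_num_masked_span` helper above derives the number of SpecAugment mask spans from the masking probability. A deterministic sketch of the same arithmetic (the real helper adds a random `epsilon` for stochastic rounding; it is a fixed parameter here, and the final cap ensures every span of `mask_length` fits inside the sequence):

```python
def compute_num_masked_span(input_length, mask_prob, mask_length,
                            min_masks=0, epsilon=0.0):
    # expected number of masked spans, rounded down; epsilon is normally a
    # random draw in [0, 1) for stochastic rounding, fixed here for determinism
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # a span of mask_length tokens must fit inside the sequence
    num_masked_span = min(num_masked_span, input_length // mask_length)
    return num_masked_span
```

For example, `mask_prob=0.5`, `input_length=100`, `mask_length=10` yields 5 spans, while a very short sequence can return 0 even when `min_masks` is positive, because no full span fits.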
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def compute_contrastive_logits(
target_features: torch.FloatTensor,
negative_features: torch.FloatTensor,
predicted_features: torch.FloatTensor,
temperature: int = 0.1,
):
"""
    Compute logits for contrastive loss using cosine similarity as the distance measure be... |
Compute logits for contrastive loss using cosine similarity as the distance measure between
`[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
| compute_contrastive_logits | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
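The `compute_contrastive_logits` row above scores a predicted feature against the true (positive) feature and a set of negatives via cosine similarity, then divides by a temperature. An illustrative NumPy sketch of that scoring — not the Transformers implementation (which operates on batched torch tensors), just the single-vector case:

```python
import numpy as np


def compute_contrastive_logits(target_features, negative_features,
                               predicted_features, temperature=0.1):
    # stack the positive on top of the negatives: row 0 is the true target
    candidates = np.concatenate([target_features, negative_features], axis=0)
    # cosine similarity between each candidate and the predicted feature
    dots = candidates @ predicted_features.ravel()
    norms = np.linalg.norm(candidates, axis=1) * np.linalg.norm(predicted_features)
    cosine_sim = dots / norms
    # a low temperature sharpens the resulting distribution over candidates
    return cosine_sim / temperature
```

If the prediction matches the positive exactly and the negatives are orthogonal or opposite, the logits separate cleanly: [10, 0, -10] at temperature 0.1.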
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.BoolTensor] = None,
sampled_negative_indices: Optional[torch.BoolTensor] = None,
output_attentions: Optional[bool] = None,
out... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
sampled_negative_... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatFo... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):
Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
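The CTC `labels` contract described above has two parts: indices of `-100` are ignored (padding), and the number of real targets must not exceed the output sequence length. A small hypothetical validator, just to make the contract concrete:

```python
def valid_ctc_targets(labels, logit_length, vocab_size):
    # entries set to -100 are padding and are ignored by the loss
    targets = [t for t in labels if t != -100]
    # CTC requires target_length <= output (logit) sequence length,
    # and every real label must index into the vocabulary
    return len(targets) <= logit_length and all(0 <= t < vocab_size for t in targets)
```

For instance, four labels with two `-100` pads against ten output frames are valid, while twelve real labels against ten frames are not.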
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_conformer.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_conformer.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wav2vec2_conformer.parameters():
... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the TDNN layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
# from https://pyt... |
Computes the output length of the TDNN layers
| _get_tdnn_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wav2vec2_conformer/modular_wav2vec2_conformer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_conformer/modular_wav2vec2_conformer.py | Apache-2.0 |
def init_backend(self, phonemizer_lang: str):
"""
Initializes the backend.
Args:
phonemizer_lang (`str`): The language to be used.
"""
requires_backends(self, "phonemizer")
from phonemizer.backend import BACKENDS
self.backend = BACKENDS[self.phonemiz... |
Initializes the backend.
Args:
phonemizer_lang (`str`): The language to be used.
| init_backend | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def prepare_for_tokenization(
self,
text: str,
is_split_into_words: bool = False,
phonemizer_lang: Optional[str] = None,
do_phonemize: Optional[bool] = None,
) -> Tuple[str, Dict[str, Any]]:
"""
Performs any necessary transformations before tokenization.
... |
Performs any necessary transformations before tokenization.
This method should pop the arguments from kwargs and return the remaining `kwargs` as well. We test the
`kwargs` at the end of the encoding process to be sure all the arguments have been used.
Args:
text (`str`):
... | prepare_for_tokenization | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def _tokenize(self, text, **kwargs):
"""
Converts a string into a sequence of tokens (string), using the tokenizer.
"""
# make sure whitespace is stripped to prevent <unk>
text = text.strip()
# phonemize
if self.do_phonemize:
text = text.lower()
... |
Converts a string into a sequence of tokens (string), using the tokenizer.
| _tokenize | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def convert_tokens_to_string(
self,
tokens: List[str],
group_tokens: bool = True,
spaces_between_special_tokens: bool = False,
filter_word_delimiter_token: bool = True,
output_char_offsets: bool = False,
) -> str:
"""
Converts a connectionist-temporal-... |
Converts connectionist-temporal-classification (CTC) output tokens into a single string.
| convert_tokens_to_string | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
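The `convert_tokens_to_string` row above implements the standard CTC decoding rule: collapse consecutive repeats, drop the blank/pad token, then join. A compact illustrative sketch (the token names `<pad>` and `|` are assumptions standing in for the tokenizer's configured pad and word-delimiter tokens):

```python
from itertools import groupby


def ctc_collapse(tokens, pad_token="<pad>", word_delimiter_token="|"):
    # 1. collapse consecutive repeats ("hh" -> "h"), the core CTC rule
    grouped = [token for token, _ in groupby(tokens)]
    # 2. drop the blank/pad token
    filtered = [t for t in grouped if t != pad_token]
    # 3. render the word delimiter as a space
    return "".join(" " if t == word_delimiter_token else t for t in filtered)
```

Note that the pad token between two identical tokens preserves a genuine repeat: `["l", "<pad>", "l"]` decodes to `"ll"`, not `"l"`.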
def _decode(
self,
token_ids: List[int],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
group_tokens: bool = True,
filter_word_delimiter_token: bool = True,
spaces_between_special_tokens: bool = False,
output_char_o... |
A special _decode function is needed for Wav2Vec2PhonemeTokenizer because added tokens should be treated exactly
the same as tokens of the base vocabulary and therefore the function `convert_tokens_to_string` has to be
called on the whole token list and not individually on added tokens
| _decode | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def decode(
self,
token_ids: Union[int, List[int], "np.ndarray", "torch.Tensor", "tf.Tensor"],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
output_char_offsets: bool = False,
**kwargs,
) -> str:
"""
Converts a... |
Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special
tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
Args:
token_ids (`Union[int, List[int... | decode | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def batch_decode(
self,
sequences: Union[List[int], List[List[int]], "np.ndarray", "torch.Tensor", "tf.Tensor"],
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
output_char_offsets: bool = False,
**kwargs,
) -> List[str]:
... |
Convert a list of lists of token ids into a list of strings by calling decode.
Args:
sequences (`Union[List[int], List[List[int]], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
skip_special_toke... | batch_decode | python | huggingface/transformers | src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py | Apache-2.0 |
def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
r"""
Instantiate a [`Wav2Vec2ProcessorWithLM`] from a pretrained Wav2Vec2 processor.
<Tip>
This class method is simply calling the feature extractor's
[`~feature_extraction_utils.FeatureExtractionMixin.from_pret... |
Instantiate a [`Wav2Vec2ProcessorWithLM`] from a pretrained Wav2Vec2 processor.
<Tip>
This class method is simply calling the feature extractor's
[`~feature_extraction_utils.FeatureExtractionMixin.from_pretrained`], Wav2Vec2CTCTokenizer's
[`~tokenization_utils_base.PreTrainedT... | from_pretrained | python | huggingface/transformers | src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""
When used in normal mode, this method forwards all its arguments to the feature extractor's
[`~FeatureExtractionMixin.__call__`] and returns its output. If used in the context
[`~Wav2Vec2ProcessorWithLM.as_target_processor`] this method forwards a... |
When used in normal mode, this method forwards all its arguments to the feature extractor's
[`~FeatureExtractionMixin.__call__`] and returns its output. If used in the context
[`~Wav2Vec2ProcessorWithLM.as_target_processor`] this method forwards all its arguments to
Wav2Vec2CTCTokenizer... | __call__ | python | huggingface/transformers | src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | Apache-2.0 |
def pad(self, *args, **kwargs):
"""
When used in normal mode, this method forwards all its arguments to the feature extractor's
[`~FeatureExtractionMixin.pad`] and returns its output. If used in the context
[`~Wav2Vec2ProcessorWithLM.as_target_processor`] this method forwards all its arg... |
When used in normal mode, this method forwards all its arguments to the feature extractor's
[`~FeatureExtractionMixin.pad`] and returns its output. If used in the context
[`~Wav2Vec2ProcessorWithLM.as_target_processor`] this method forwards all its arguments to
Wav2Vec2CTCTokenizer's [`... | pad | python | huggingface/transformers | src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | Apache-2.0 |
def batch_decode(
self,
logits: np.ndarray,
pool: Optional[Pool] = None,
num_processes: Optional[int] = None,
beam_width: Optional[int] = None,
beam_prune_logp: Optional[float] = None,
token_min_logp: Optional[float] = None,
hotwords: Optional[Iterable[str... |
Batch decode output logits to audio transcription with language model support.
<Tip>
This function makes use of Python's multiprocessing. Currently, multiprocessing is available only on Unix
systems (see this [issue](https://github.com/kensho-technologies/pyctcdecode/issues/65)).
... | batch_decode | python | huggingface/transformers | src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | Apache-2.0 |
def as_target_processor(self):
"""
Temporarily sets the processor for processing the target. Useful for encoding the labels when fine-tuning
Wav2Vec2.
"""
warnings.warn(
"`as_target_processor` is deprecated and will be removed in v5 of Transformers. You can process yo... |
Temporarily sets the processor for processing the target. Useful for encoding the labels when fine-tuning
Wav2Vec2.
| as_target_processor | python | huggingface/transformers | src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py | Apache-2.0 |
def convert_s3prl_checkpoint(base_model_name, config_path, checkpoint_path, model_dump_path):
"""
Copy/paste/tweak model's weights to transformers design.
"""
checkpoint = torch.load(checkpoint_path, map_location="cpu", weights_only=True)
downstream_dict = checkpoint["Downstream"]
hf_config = ... |
Copy/paste/tweak model's weights to transformers design.
| convert_s3prl_checkpoint | python | huggingface/transformers | src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/convert_wavlm_original_s3prl_checkpoint_to_pytorch.py | Apache-2.0 |
def _get_feat_extract_output_lengths(
self, input_lengths: Union[torch.LongTensor, int], add_adapter: Optional[bool] = None
):
"""
Computes the output length of the convolutional layers
"""
add_adapter = self.config.add_adapter if add_adapter is None else add_adapter
... |
Computes the output length of the convolutional layers
| _get_feat_extract_output_lengths | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def _mask_hidden_states(
self,
hidden_states: torch.FloatTensor,
mask_time_indices: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[S... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_hidden_states | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
mask_time_indices: Optional[torch.FloatTensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optio... |
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*):
Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict
masked extracted features in *config.proj_codevector_dim* space.
| forward | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def __init__(self, config, target_lang: Optional[str] = None):
r"""
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`WavLMForCTC`] ... |
target_lang (`str`, *optional*):
Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or
adapter.<lang>.bin. Only relevant when using an instance of [`WavLMForCTC`] with adapters. Uses 'eng' by
default.
| __init__ | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def tie_weights(self):
"""
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.... |
This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when
passing `target_lang=...` to `from_pretrained(...)`.
This method is **not** supposed to be called by the user and is prone to be changed in the future.
| tie_weights | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wavlm.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*):
Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to
the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -... | forward | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be ... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wavlm.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wavlm.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
labels: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_feature_extractor(self):
"""
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
"""
warnings.warn(
"The method `freeze_feature_extractor` is deprecated and will be r... |
Calling this function will disable the gradient computation for the feature encoder so that its parameters will
not be updated during training.
| freeze_feature_extractor | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def freeze_base_model(self):
"""
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
"""
for param in self.wavlm.parameters():
param.re... |
Calling this function will disable the gradient computation for the base model so that its parameters will not
be updated during training. Only the classification head will be updated.
| freeze_base_model | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]):
"""
Computes the output length of the TDNN layers
"""
def _conv_out_length(input_length, kernel_size, stride):
# 1D convolutional layer output length formula taken
# from https://pyt... |
Computes the output length of the TDNN layers
| _get_tdnn_output_lengths | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
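The `_get_tdnn_output_lengths` helper above chains the standard 1-D convolution output-length formula over the TDNN layers. A minimal sketch (the layer kernel/stride specs below are illustrative, not the actual TDNN configuration):

```python
def conv_out_length(input_length: int, kernel_size: int, stride: int) -> int:
    # torch.nn.Conv1d output length with padding=0, dilation=1:
    # floor((input_length - kernel_size) / stride) + 1
    return (input_length - kernel_size) // stride + 1

def stack_out_length(input_length: int, layers) -> int:
    # Apply the formula once per conv layer, feeding each result forward.
    for kernel_size, stride in layers:
        input_length = conv_out_length(input_length, kernel_size, stride)
    return input_length

# Hypothetical layer specs for illustration only.
print(stack_out_length(16000, [(10, 5), (3, 2), (3, 2)]))  # 799
```

Chaining the formula is exactly what `_get_feat_extract_output_lengths`-style helpers do for each convolutional layer in the feature encoder.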
def forward(
self,
input_values: Optional[torch.Tensor],
attention_mask: Optional[torch.Tensor] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
labels: Optional[torch.Tensor] = None... |
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`):
Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file
into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta... | forward | python | huggingface/transformers | src/transformers/models/wavlm/modeling_wavlm.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/wavlm/modeling_wavlm.py | Apache-2.0 |
def _get_generation_config(
is_multilingual: bool,
num_languages: int = 100,
openai_version: Optional[str] = None,
) -> GenerationConfig:
"""
Loads the appropriate generation config from HF repo
"""
if openai_version is not None:
repo = f"openai/whisper-{openai_version}"
elif not... |
Loads the appropriate generation config from HF repo
| _get_generation_config | python | huggingface/transformers | src/transformers/models/whisper/convert_openai_to_hf.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/convert_openai_to_hf.py | Apache-2.0 |
def remove_symbols_and_diacritics(s: str, keep=""):
"""
Replace any other markers, symbols, and punctuations with a space, and drop any diacritics (category 'Mn' and some
manual mappings)
"""
def replace_character(char):
if char in keep:
return char
elif char in ADDITION... |
Replace any other markers, symbols, and punctuations with a space, and drop any diacritics (category 'Mn' and some
manual mappings)
| remove_symbols_and_diacritics | python | huggingface/transformers | src/transformers/models/whisper/english_normalizer.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/english_normalizer.py | Apache-2.0 |
def _np_extract_fbank_features(self, waveform_batch: np.array, device: str) -> np.ndarray:
"""
Compute the log-mel spectrogram of the provided audio; gives results similar to Whisper's original torch
implementation, within a 1e-5 tolerance.
"""
if device != "cpu":
raise Va... |
Compute the log-mel spectrogram of the provided audio; gives results similar to Whisper's original torch
implementation, within a 1e-5 tolerance.
| _np_extract_fbank_features | python | huggingface/transformers | src/transformers/models/whisper/feature_extraction_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/feature_extraction_whisper.py | Apache-2.0 |
def _torch_extract_fbank_features(self, waveform: np.array, device: str = "cpu") -> np.ndarray:
"""
Compute the log-mel spectrogram of the audio using PyTorch's GPU-accelerated STFT implementation with batching,
yielding results similar to the CPU computation within a 1e-5 tolerance.
"""
w... |
Compute the log-mel spectrogram of the audio using PyTorch's GPU-accelerated STFT implementation with batching,
yielding results similar to the CPU computation within a 1e-5 tolerance.
| _torch_extract_fbank_features | python | huggingface/transformers | src/transformers/models/whisper/feature_extraction_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/feature_extraction_whisper.py | Apache-2.0 |
def zero_mean_unit_var_norm(
input_values: List[np.ndarray], attention_mask: List[np.ndarray], padding_value: float = 0.0
) -> List[np.ndarray]:
"""
Every array in the list is normalized to have zero mean and unit variance
"""
if attention_mask is not None:
attent... |
Every array in the list is normalized to have zero mean and unit variance
| zero_mean_unit_var_norm | python | huggingface/transformers | src/transformers/models/whisper/feature_extraction_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/feature_extraction_whisper.py | Apache-2.0 |
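The per-array normalization described above can be sketched as follows. This is a simplified version that omits the attention-mask branch (which normalizes only the unpadded region), and the 1e-7 epsilon is an assumed guard against division by zero:

```python
import numpy as np

def zero_mean_unit_var(x: np.ndarray) -> np.ndarray:
    # Normalize a 1-D waveform to zero mean and unit variance; the small
    # epsilon keeps the division finite on silent (all-zero) inputs.
    return (x - x.mean()) / np.sqrt(x.var() + 1e-7)

wave = np.array([0.1, -0.2, 0.4, 0.3], dtype=np.float64)
norm = zero_mean_unit_var(wave)
print(norm.mean(), norm.var())  # ~0 and ~1
```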
def __call__(
self,
raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]],
truncation: bool = True,
pad_to_multiple_of: Optional[int] = None,
return_tensors: Optional[Union[str, TensorType]] = None,
return_attention_mask: Optional[bool] = None,
... |
Main method to featurize and prepare for the model one or several sequence(s). Implementation uses PyTorch for
the STFT computation if available, otherwise a slower NumPy-based one.
Args:
raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`):
... | __call__ | python | huggingface/transformers | src/transformers/models/whisper/feature_extraction_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/feature_extraction_whisper.py | Apache-2.0 |
def _median_filter(inputs: torch.Tensor, filter_width: int) -> torch.Tensor:
"""
Applies a median filter of width `filter_width` along the last dimension of the input.
The `inputs` tensor is assumed to be 3- or 4-dimensional.
"""
if filter_width <= 0 or filter_width % 2 != 1:
raise ValueErr... |
Applies a median filter of width `filter_width` along the last dimension of the input.
The `inputs` tensor is assumed to be 3- or 4-dimensional.
| _median_filter | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
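A pure-Python 1-D sketch of the median-filter idea; the reflect-style edge padding and the 1-D restriction are assumptions here — the model helper operates along the last dimension of 3- or 4-D tensors:

```python
import statistics

def median_filter_1d(values, filter_width: int):
    # Sliding-window median over a 1-D sequence. Reflect-pad the edges so the
    # output has the same length as the input.
    if filter_width <= 0 or filter_width % 2 != 1:
        raise ValueError("`filter_width` should be an odd positive number")
    pad = filter_width // 2
    padded = values[pad:0:-1] + list(values) + values[-2:-2 - pad:-1]
    return [statistics.median(padded[i:i + filter_width]) for i in range(len(values))]

print(median_filter_1d([1, 5, 2, 3, 9], 3))  # [5, 2, 3, 3, 3]
```

Smoothing like this is what removes isolated spikes from the cross-attention weights before the timestamps are extracted.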
def _dynamic_time_warping(matrix: np.ndarray):
"""
Measures similarity between two temporal sequences: the input audio and the output tokens. Used to generate
token-level timestamps.
"""
output_length, input_length = matrix.shape
cost = np.ones((output_length + 1, input_length + 1), dtype=np.flo... |
Measures similarity between two temporal sequences: the input audio and the output tokens. Used to generate
token-level timestamps.
| _dynamic_time_warping | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
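The DTW recurrence behind `_dynamic_time_warping` can be sketched as below. This computes only the accumulated alignment cost; the actual helper also backtracks through the trace matrix to recover the alignment path used for token-level timestamps:

```python
import math

def dtw_cost(matrix):
    # `matrix` holds pairwise distances (rows: output steps, cols: input steps).
    # Classic DTW recurrence: each cell extends the cheapest of the three
    # admissible predecessors (up, left, diagonal).
    rows, cols = len(matrix), len(matrix[0])
    cost = [[math.inf] * (cols + 1) for _ in range(rows + 1)]
    cost[0][0] = 0.0
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            cost[i][j] = matrix[i - 1][j - 1] + min(
                cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1]
            )
    return cost[rows][cols]

# Identical sequences align along the diagonal with zero total cost.
a, b = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
dist = [[abs(x - y) for y in b] for x in a]
print(dtw_cost(dist))  # 0.0
```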
def _extract_token_timestamps(
self, generate_outputs, alignment_heads, time_precision=0.02, num_frames=None, num_input_ids=None
):
"""
Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to
map each output token to a position i... |
Calculates token-level timestamps using the encoder-decoder cross-attentions and dynamic time-warping (DTW) to
map each output token to a position in the input audio. If `num_frames` is specified, the encoder-decoder
cross-attentions will be cropped before applying DTW.
Returns:
... | _extract_token_timestamps | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
def generate(
self,
input_features: Optional[torch.Tensor] = None,
generation_config: Optional[GenerationConfig] = None,
logits_processor: Optional[LogitsProcessorList] = None,
stopping_criteria: Optional[StoppingCriteriaList] = None,
prefix_allowed_tokens_fn: Optional[Ca... |
Transcribes or translates log-mel input features to a sequence of auto-regressively generated token ids.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You ca... | generate | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
def replace_or_add(lst: List[int], num: int, itr: Iterator[int]):
"""short function to replace num with a itr in lst"""
found = any(i in lst for i in itr)
if found:
lst = [num if i in itr else i for i in lst]
else:
lst.append(num)
... | Replace elements of lst that appear in itr with num; otherwise append num | replace_or_add | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
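The behavior can be exercised with a small self-contained copy. Note that the `any(...)` check iterates `itr` once and the comprehension iterates it again, so callers must pass a concrete sequence or set, not a one-shot iterator:

```python
def replace_or_add(lst, num, itr):
    # If any element of `itr` already occurs in `lst`, replace those elements
    # with `num`; otherwise append `num` to the end.
    found = any(i in lst for i in itr)
    if found:
        lst = [num if i in itr else i for i in lst]
    else:
        lst.append(num)
    return lst

print(replace_or_add([1, 2, 3], 9, {2}))  # [1, 9, 3]  (replaced)
print(replace_or_add([1, 3], 9, {2}))     # [1, 3, 9]  (appended)
```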
def detect_language(
self,
input_features: Optional[torch.FloatTensor] = None,
encoder_outputs: Optional[Union[torch.FloatTensor, BaseModelOutput]] = None,
generation_config: Optional[GenerationConfig] = None,
num_segment_frames: int = 3000,
) -> torch.Tensor:
"""
... |
Detects language from log-mel input features or encoder_outputs
Parameters:
input_features (`torch.Tensor` of shape `(batch_size, feature_size, sequence_length)`, *optional*):
Float values of log-mel features extracted from the raw speech waveform. The raw speech waveform c... | detect_language | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
def _retrieve_compression_ratio(tokens, vocab_size):
"""Compute byte length of zlib compressed token bytes vs. byte length of raw token bytes"""
length = int(math.log2(vocab_size) / 8) + 1
token_bytes = b"".join([t.to_bytes(length, "little") for t in tokens.tolist()])
compression_ratio =... | Compute byte length of zlib compressed token bytes vs. byte length of raw token bytes | _retrieve_compression_ratio | python | huggingface/transformers | src/transformers/models/whisper/generation_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/generation_whisper.py | Apache-2.0 |
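A sketch of the compression-ratio check; the raw/compressed direction of the ratio and the vocabulary size used below are assumptions for illustration. Highly repetitive token output — a common speech-recognition failure mode — compresses well and therefore scores a high ratio:

```python
import math
import zlib

def compression_ratio(tokens, vocab_size: int) -> float:
    # Pack each token id into the minimal whole number of little-endian bytes,
    # then compare raw byte length against the zlib-compressed byte length.
    length = int(math.log2(vocab_size) / 8) + 1
    token_bytes = b"".join(t.to_bytes(length, "little") for t in tokens)
    return len(token_bytes) / len(zlib.compress(token_bytes))

looping = [42] * 200       # degenerate, repetitive transcript
varied = list(range(200))  # varied transcript
print(compression_ratio(looping, 51865) > compression_ratio(varied, 51865))  # True
```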
def init_cache(self, batch_size, max_length, encoder_outputs):
r"""
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-r... |
Args:
batch_size (`int`):
batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache.
max_length (`int`):
maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized
... | init_cache | python | huggingface/transformers | src/transformers/models/whisper/modeling_flax_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_flax_whisper.py | Apache-2.0 |
def encode(
self,
input_features: jnp.ndarray,
attention_mask: Optional[jnp.ndarray] = None,
output_attentions: Optional[bool] = None,
output_hidden_states: Optional[bool] = None,
return_dict: Optional[bool] = None,
train: bool = False,
params: Optional[di... |
Returns:
Example:
```python
>>> from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = FlaxWhisperForConditiona... | encode | python | huggingface/transformers | src/transformers/models/whisper/modeling_flax_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_flax_whisper.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
encoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
ou... |
Returns:
Example:
```python
>>> from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> import jax.numpy as jnp
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
... | decode | python | huggingface/transformers | src/transformers/models/whisper/modeling_flax_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_flax_whisper.py | Apache-2.0 |
def decode(
self,
decoder_input_ids,
encoder_outputs,
encoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_attention_mask: Optional[jnp.ndarray] = None,
decoder_position_ids: Optional[jnp.ndarray] = None,
past_key_values: Optional[dict] = None,
ou... |
Returns:
Example:
```python
>>> from transformers import WhisperProcessor, FlaxWhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en")
>>> model = FlaxWhisperForConditiona... | decode | python | huggingface/transformers | src/transformers/models/whisper/modeling_flax_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_flax_whisper.py | Apache-2.0 |
def _make_causal_mask(input_ids_shape: tf.TensorShape, past_key_values_length: int = 0):
"""
Make causal mask used for uni-directional (auto-regressive) self-attention.
"""
bsz = input_ids_shape[0]
tgt_len = input_ids_shape[1]
mask = tf.ones((tgt_len, tgt_len)) * LARGE_NEGATIVE
mask_cond = tf.range(shape_list(... |
Make causal mask used for uni-directional (auto-regressive) self-attention.
| _make_causal_mask | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
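A NumPy sketch of the same construction (the masking constant and function name are illustrative): an all-masked square block is built, its lower triangle is zeroed so each position sees itself and earlier positions, and cached past positions are prepended as fully visible columns:

```python
import numpy as np

LARGE_NEGATIVE = -1e9  # stand-in for the model's masking constant

def make_causal_mask(tgt_len: int, past_key_values_length: int = 0) -> np.ndarray:
    # Start from an all-masked (tgt_len, tgt_len) block, then zero out the
    # lower triangle so each position attends to itself and earlier positions.
    mask = np.full((tgt_len, tgt_len), LARGE_NEGATIVE)
    mask[np.tril_indices(tgt_len)] = 0.0
    if past_key_values_length > 0:
        # Cached (past) positions are always visible: prepend unmasked columns.
        past = np.zeros((tgt_len, past_key_values_length))
        mask = np.concatenate([past, mask], axis=1)
    return mask

m = make_causal_mask(3, past_key_values_length=2)
print(m.shape)  # (3, 5)
```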
def call(
self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, layer_head_mask: tf.Tensor, training: bool = False
):
"""
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of ... |
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
layer_head_mas... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
hidden_states,
attention_mask: tf.Tensor | None = None,
encoder_hidden_states: tf.Tensor | None = None,
encoder_attention_mask: tf.Tensor | None = None,
layer_head_mask: tf.Tensor | None = None,
cross_attn_layer_head_mask: tf.Tensor | None = None,
... |
Args:
hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`tf.Tensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
encoder_hidden... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def dummy_inputs(self) -> Dict[str, tf.Tensor]:
"""
Dummy inputs to build the network.
Returns:
`Dict[str, tf.Tensor]`: The dummy inputs.
"""
return {
self.main_input_name: tf.random.uniform(
[1, self.config.num_mel_bins, self.config.max_s... |
Dummy inputs to build the network.
Returns:
`Dict[str, tf.Tensor]`: The dummy inputs.
| dummy_inputs | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
input_features=None,
head_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
training=False,
):
r"""
Args:
input_features (`tf.Tensor` of shape `(batch_size, feature_size, sequence_length... |
Args:
input_features (`tf.Tensor` of shape `(batch_size, feature_size, sequence_length)`):
Float values of fbank features extracted from the raw speech waveform. Raw speech waveform can be
obtained by loading a `.flac` or `.wav` audio file into an array of type `List... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
input_ids=None,
attention_mask=None,
position_ids=None,
encoder_hidden_states=None,
head_mask=None,
cross_attn_head_mask=None,
past_key_values=None,
inputs_embeds=None,
use_cache=None,
output_attentions=None,
... |
Args:
input_ids (`tf.Tensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`WhisperTokenizer`]. See [`PreTrained... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
input_features=None,
decoder_input_ids=None,
decoder_attention_mask=None,
decoder_position_ids=None,
head_mask=None,
decoder_head_mask=None,
cross_attn_head_mask=None,
encoder_outputs=None,
past_key_values=None,
deco... |
Returns:
Example:
```python
>>> import tensorflow as tf
>>> from transformers import TFWhisperModel, AutoFeatureExtractor
>>> from datasets import load_dataset
>>> model = TFWhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extracto... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
input_features: TFModelInputType | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
decoder_position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Te... |
Returns:
Example:
```python
>>> import tensorflow as tf
>>> from transformers import TFWhisperModel, AutoFeatureExtractor
>>> from datasets import load_dataset
>>> model = TFWhisperModel.from_pretrained("openai/whisper-base")
>>> feature_extracto... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def call(
self,
input_features: TFModelInputType | None = None,
decoder_input_ids: np.ndarray | tf.Tensor | None = None,
decoder_attention_mask: np.ndarray | tf.Tensor | None = None,
decoder_position_ids: np.ndarray | tf.Tensor | None = None,
head_mask: np.ndarray | tf.Te... |
labels (`tf.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
Labels for computing the language modeling loss. Indices should either be in `[0, ..., config.vocab_size]`
or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored (masked), the loss is
... | call | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def generate(
self,
inputs: Optional[tf.Tensor] = None,
generation_config: Optional[GenerationConfig] = None,
logits_processor: Optional[TFLogitsProcessorList] = None,
seed: Optional[List[int]] = None,
return_timestamps: Optional[bool] = None,
task: Optional[str] ... |
Generates sequences of token ids for models with a language modeling head.
<Tip warning={true}>
Most generation-controlling parameters are set in `generation_config` which, if not passed, will be set to the
model's default generation configuration. You can override any `generation_con... | generate | python | huggingface/transformers | src/transformers/models/whisper/modeling_tf_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_tf_whisper.py | Apache-2.0 |
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
"""
Shift input ids one token to the right.
"""
shifted_input_ids = input_ids.new_zeros(input_ids.shape)
shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
shifted_input_ids[:, 0] = decoder_start_t... |
Shift input ids one token to the right.
| shift_tokens_right | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
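A pure-Python sketch of the shifting logic: the decoder-start token is prepended and the last position dropped, so at training time the decoder sees token t-1 when predicting token t, and `-100` label positions (ignored by the loss) are re-filled with pad tokens:

```python
def shift_tokens_right(input_ids, pad_token_id: int, decoder_start_token_id: int):
    # Shift each row one position to the right, inserting the decoder-start
    # token at the front and replacing any -100 sentinel with the pad token.
    shifted = [[decoder_start_token_id] + row[:-1] for row in input_ids]
    return [[pad_token_id if t == -100 else t for t in row] for row in shifted]

labels = [[50257, 290, -100, -100]]  # illustrative token ids
print(shift_tokens_right(labels, pad_token_id=50256, decoder_start_token_id=50258))
# [[50258, 50257, 290, 50256]]
```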
def _compute_mask_indices(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
attention_mask: Optional[torch.LongTensor] = None,
min_masks: int = 0,
) -> np.ndarray:
"""
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f... |
Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for
ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on
CPU as part of the preprocessing during training.
Args:
shape: T... | _compute_mask_indices | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def compute_num_masked_span(input_length):
"""Given input length, compute how many spans should be masked"""
num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
num_masked_span = max(num_masked_span, min_masks)
# make sure num masked span <= sequence_length
i... | Given input length, compute how many spans should be masked | compute_num_masked_span | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
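A simplified sketch of the span-count computation: mask roughly `mask_prob` of the frames, grouped into spans of `mask_length`, with a floor of `min_masks` and a ceiling that keeps every span inside the sequence (the clamping in the real helper is slightly more involved):

```python
def compute_num_masked_span(input_length, mask_prob, mask_length, min_masks=0, epsilon=1e-8):
    # Expected number of spans; epsilon absorbs floating-point rounding so
    # e.g. 0.5 * 100 / 10 reliably truncates to 5, not 4.
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # Make sure the masked spans fit inside the sequence.
    if num_masked_span * mask_length > input_length:
        num_masked_span = input_length // mask_length
    return num_masked_span

print(compute_num_masked_span(100, mask_prob=0.5, mask_length=10))  # 5
```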
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor,
layer_head_mask: torch.Tensor,
output_attentions: bool = False,
) -> torch.Tensor:
"""
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: Optional[torch.Tensor] = None,
encoder_hidden_states: Optional[torch.Tensor] = None,
encoder_attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
cross_attn_l... |
Args:
hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
attention_mask (`torch.FloatTensor`): attention mask of size
`(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_features,
attention_mask=None,
head_mask=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
r"""
Args:
input_features (`torch.LongTensor` of shape `(batch_size, feature_size, seque... |
Args:
input_features (`torch.LongTensor` of shape `(batch_size, feature_size, sequence_length)`):
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be
obtained by loading a `.flac` or `.wav` audio file into an array of type ... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_ids=None,
attention_mask=None,
encoder_hidden_states=None,
head_mask=None,
cross_attn_head_mask=None,
past_key_values=None,
inputs_embeds=None,
position_ids=None,
use_cache=None,
output_attentions=None,
... |
Args:
input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you
provide it.
Indices can be obtained using [`WhisperTokenizer`]. See [`Pre... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def _mask_input_features(
self,
input_features: torch.FloatTensor,
attention_mask: Optional[torch.LongTensor] = None,
):
"""
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
"""
... |
Masks extracted features along time axis and/or along feature axis according to
[SpecAugment](https://arxiv.org/abs/1904.08779).
| _mask_input_features | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = N... |
input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`):
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.nd... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.LongTensor] = None,
decoder_input_ids: Optional[torch.LongTensor] = None,
decoder_attention_mask: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = N... |
input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`):
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.nd... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_ids: Optional[torch.LongTensor] = None,
attention_mask: Optional[torch.Tensor] = None,
encoder_outputs: Optional[Tuple[torch.FloatTensor]] = None,
head_mask: Optional[torch.Tensor] = None,
cross_attn_head_mask: Optional[torch.Tensor] = None,
... |
encoder_outputs (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention
if the model is configured as a decoder.
cross_attn_head_mask (`torch.Te... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def forward(
self,
input_features: Optional[torch.LongTensor] = None,
head_mask: Optional[torch.Tensor] = None,
encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
labels: Optional[torch.LongTensor] = None,
output_attentions: Optional[bool] = None,
... |
input_features (`torch.FloatTensor` of shape `(batch_size, feature_size, sequence_length)`):
Float values of mel features extracted from the raw speech waveform. Raw speech waveform can be obtained by
loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.nd... | forward | python | huggingface/transformers | src/transformers/models/whisper/modeling_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/modeling_whisper.py | Apache-2.0 |
def __call__(self, *args, **kwargs):
"""
Forwards the `audio` argument to WhisperFeatureExtractor's [`~WhisperFeatureExtractor.__call__`] and the `text`
argument to [`~WhisperTokenizer.__call__`]. Please refer to the docstring of the above two methods for more
information.
"""
... |
Forwards the `audio` argument to WhisperFeatureExtractor's [`~WhisperFeatureExtractor.__call__`] and the `text`
argument to [`~WhisperTokenizer.__call__`]. Please refer to the docstring of the above two methods for more
information.
| __call__ | python | huggingface/transformers | src/transformers/models/whisper/processing_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/processing_whisper.py | Apache-2.0 |
def get_pairs(word):
"""
Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
"""
pairs = set()
prev_char = word[0]
for char in word[1:]:
pairs.add((prev_char, char))
prev_char = char
return pairs |
Return set of symbol pairs in a word.
Word is represented as tuple of symbols (symbols being variable-length strings).
| get_pairs | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
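Since `get_pairs` is shown in full above, its behavior is easy to check. During BPE, the pair set is recomputed after every merge; the run below shows "lower" before and after merging `("l", "o")` into `"lo"`:

```python
def get_pairs(word):
    """Return the set of adjacent symbol pairs in a word (a tuple of symbols)."""
    pairs = set()
    prev_char = word[0]
    for char in word[1:]:
        pairs.add((prev_char, char))
        prev_char = char
    return pairs

# "lower" as raw characters, then after one BPE merge:
before = get_pairs(("l", "o", "w", "e", "r"))
after = get_pairs(("lo", "w", "e", "r"))
```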
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None) -> List[int]:
"""Build model inputs from a sequence by appending eos_token_id."""
if token_ids_1 is None:
return self.prefix_tokens + token_ids_0 + [self.eos_token_id]
# We don't expect to process pairs, but le... | Build model inputs from a sequence by appending eos_token_id. | build_inputs_with_special_tokens | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
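A standalone sketch of the pattern above. The token ids are illustrative assumptions only: in the actual tokenizer, `prefix_tokens` and `eos_token_id` come from the vocabulary, and the `50257`/`50258` values below merely stand in for them.

```python
def build_inputs(token_ids_0, token_ids_1=None,
                 prefix_tokens=(50258,), eos_token_id=50257):
    # Prefix tokens (e.g. <|startoftranscript|>) are prepended and the eos
    # token appended; a pair of sequences is simply concatenated first.
    if token_ids_1 is None:
        return list(prefix_tokens) + list(token_ids_0) + [eos_token_id]
    return list(prefix_tokens) + list(token_ids_0) + list(token_ids_1) + [eos_token_id]

ids = build_inputs([11, 12, 13])
```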
def get_special_tokens_mask(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
) -> List[int]:
"""
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens ... |
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
special tokens using the tokenizer `prepare_for_model` method.
Args:
token_ids_0 (`List[int]`):
List of IDs.
token_ids_1 (`List[int]`, *optional*)... | get_special_tokens_mask | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
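For the prefix + sequence + eos layout produced by `build_inputs_with_special_tokens`, the corresponding mask marks 1 at special-token positions and 0 at sequence positions. A minimal sketch of that layout (not the library method, which derives the positions from the tokenizer state):

```python
def special_tokens_mask(seq_len, prefix_len=1):
    # Assumed layout: [prefix specials] + sequence tokens + [eos].
    return [1] * prefix_len + [0] * seq_len + [1]

mask = special_tokens_mask(4, prefix_len=2)
```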
def _decode_with_timestamps(
self, token_ids, skip_special_tokens=False, time_precision=0.02, segment_size=1500
) -> str:
"""
Timestamp tokens are above the special tokens' id range and are ignored by `decode()`. This method decodes
given tokens with timestamps tokens annotated, e.g.... |
Timestamp tokens are above the special tokens' id range and are ignored by `decode()`. This method decodes
given tokens with timestamp tokens annotated, e.g. "<|1.08|>".
| _decode_with_timestamps | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
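A timestamp token can be mapped back to seconds by offsetting against the first timestamp token id and scaling by `time_precision`. The sketch below assumes `<|0.00|>` has id 50364 (the multilingual tokenizer's value, used here purely for illustration):

```python
def timestamp_seconds(token_id, timestamp_begin=50364, time_precision=0.02):
    # Timestamp ids sit above the special-token range; the offset from the
    # first timestamp token counts ticks of `time_precision` seconds.
    return (token_id - timestamp_begin) * time_precision

def annotate(token_id, **kwargs):
    # Render the annotation form used in the docstring, e.g. "<|1.08|>".
    return f"<|{timestamp_seconds(token_id, **kwargs):.2f}|>"

tag = annotate(50364 + 54)  # 54 ticks of 0.02 s
```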
def _compute_offsets(self, token_ids, time_precision=0.02, segment_size=1500):
"""
Compute offsets for a given tokenized input
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__ca... |
Compute offsets for a given tokenized input.
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Can be obtained using the `__call__` method.
time_precision (`float`, *optional*, defaults to 0.02):
... | _compute_offsets | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
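A toy version of the offset computation: walk the ids, and whenever a second timestamp token closes a span, emit the text ids between the pair along with their start/end times. As above, the 50364 cutoff for timestamp ids is an illustrative assumption, and the real method also handles segment wrapping via `segment_size`.

```python
def compute_offsets(token_ids, timestamp_begin=50364, time_precision=0.02):
    offsets, start, chunk = [], None, []
    for tid in token_ids:
        if tid < timestamp_begin:       # a regular text token
            chunk.append(tid)
        elif start is None:             # opening timestamp of a segment
            start = tid
        else:                           # closing timestamp: emit the segment
            offsets.append({
                "tokens": chunk,
                "timestamp": ((start - timestamp_begin) * time_precision,
                              (tid - timestamp_begin) * time_precision),
            })
            start, chunk = None, []
    return offsets

# <|0.00|> [7, 8] <|1.00|>
segments = compute_offsets([50364, 7, 8, 50364 + 50])
```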
def _preprocess_token_ids(self, token_ids, skip_special_tokens: bool = False):
"""
        Pre-process the token ids for decoding by removing the prompt token ids and timestamp token ids.
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List o... |
Pre-process the token ids for decoding by removing the prompt token ids and timestamp token ids.
Args:
token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`):
List of tokenized input ids. Typically obtained using the `__call__` method of the tokenizer.
... | _preprocess_token_ids | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
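The pre-processing amounts to dropping id ranges before normal decoding. A toy filter, again assuming timestamp ids start at 50364 (illustrative; the real method locates prompt tokens via their delimiter rather than taking an explicit set):

```python
def strip_for_decoding(token_ids, timestamp_begin=50364, prompt_ids=frozenset()):
    # Drop prompt ids and anything in the timestamp range, mirroring the
    # intent of `_preprocess_token_ids` as a plain filter.
    return [t for t in token_ids
            if t < timestamp_begin and t not in prompt_ids]

cleaned = strip_for_decoding([50364, 7, 8, 50418], prompt_ids={8})
```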
def decode(
self,
token_ids,
skip_special_tokens: bool = False,
clean_up_tokenization_spaces: Optional[bool] = None,
output_offsets: bool = False,
time_precision: float = 0.02,
decode_with_timestamps: bool = False,
normalize: bool = False,
basic_no... |
Converts a sequence of ids into a string, using the tokenizer and vocabulary with options to remove special
tokens and clean up tokenization spaces.
Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`.
Args:
token_ids (`Union[int, List[int... | decode | python | huggingface/transformers | src/transformers/models/whisper/tokenization_whisper.py | https://github.com/huggingface/transformers/blob/master/src/transformers/models/whisper/tokenization_whisper.py | Apache-2.0 |
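Conceptually, `decode` filters special ids and joins the remaining tokens into text. The toy pipeline below uses a hypothetical four-entry vocabulary, not the real tokenizer, which additionally handles byte-level merging, timestamps, offsets, and normalization:

```python
def toy_decode(token_ids, vocab, special_ids, skip_special_tokens=True):
    # Optionally drop special ids, map the rest through the vocabulary,
    # then join into a string.
    tokens = [vocab[t] for t in token_ids
              if not (skip_special_tokens and t in special_ids)]
    return " ".join(tokens)

vocab = {0: "<|startoftranscript|>", 1: "hello", 2: "world", 3: "<|endoftext|>"}
text = toy_decode([0, 1, 2, 3], vocab, special_ids={0, 3})
```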