Dataset schema (one record per function; fields appear in this order in every record below):

  code      — string, length 66 to 870k
  docstring — string, length 19 to 26.7k
  func_name — string, length 1 to 138
  language  — string, 1 distinct value
  repo      — string, length 7 to 68
  path      — string, length 5 to 324
  url       — string, length 46 to 389
  license   — string, 7 distinct values
code: def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_...
docstring: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained usi...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/umt5/modeling_umt5.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py
license: Apache-2.0

code: def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.Tensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.LongTensor] = None, head_mask: Optional[torch.Tensor] = None, ...
docstring: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained usi...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/umt5/modeling_umt5.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py
license: Apache-2.0

code: def forward( self, input_ids: Optional[torch.Tensor] = None, attention_mask: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, inputs_embeds: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[b...
docstring: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained usi...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/umt5/modeling_umt5.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py
license: Apache-2.0

code: def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, decoder_attention_mask: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.FloatTensor] = N...
docstring: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. UMT5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained usi...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/umt5/modeling_umt5.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/umt5/modeling_umt5.py
license: Apache-2.0
code: def convert_unispeech_checkpoint( checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True ): """ Copy/paste/tweak model's weights to transformers design. """ if config_path is not None: config = UniSpeechConfig.from_pretrained(config_path) else: ...
docstring: Copy/paste/tweak model's weights to transformers design.
func_name: convert_unispeech_checkpoint
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/convert_unispeech_original_pytorch_checkpoint_to_pytorch.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/convert_unispeech_original_pytorch_checkpoint_to_pytorch.py
license: Apache-2.0

code: def __init__(self, config): """ Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed up training throughput. """ super().__init__() self.input_dim = config.adapter_attn_dim self.hidden_dim = config.hidden_si...
docstring: Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed up training throughput.
func_name: __init__
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the convolutional layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken #...
docstring: Computes the output length of the convolutional layers
func_name: _get_feat_extract_output_lengths
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
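The `_get_feat_extract_output_lengths` record above refers to the standard 1D-convolution output-length formula. A minimal sketch of how the formula is chained over a feature-extractor stack; the kernel/stride pairs below are the common Wav2Vec2-style defaults, assumed for illustration rather than read from any UniSpeech config:

```python
def conv_out_length(input_length: int, kernel_size: int, stride: int) -> int:
    # 1D convolution output-length formula (no padding, no dilation):
    # floor((input_length - kernel_size) / stride) + 1
    return (input_length - kernel_size) // stride + 1

# Apply the formula once per convolutional layer in the stack.
length = 16000  # one second of 16 kHz audio
for kernel, stride in [(10, 5), (3, 2), (3, 2), (3, 2), (3, 2), (2, 2), (2, 2)]:
    length = conv_out_length(length, kernel, stride)
print(length)  # → 49 feature frames for one second of audio
```

The same chaining is what lets the model map a raw-waveform `attention_mask` onto the shorter feature sequence.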
code: def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: """ Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f...
docstring: Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: T...
func_name: _compute_mask_indices
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
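The `_compute_mask_indices` record above samples SpecAugment-style mask spans. A toy NumPy sketch of the idea — deliberately simplified (no per-example lengths from `attention_mask`, spans may overlap, and all names are mine, not the library's):

```python
import numpy as np

def simple_mask_indices(shape, mask_prob, mask_length, min_masks=0, seed=0):
    """Toy span masking: pick random span starts, mark mask_length frames each."""
    batch_size, seq_len = shape
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    # Aim for roughly mask_prob of the frames, in spans of mask_length frames.
    num_spans = max(int(mask_prob * seq_len / mask_length), min_masks)
    for b in range(batch_size):
        # Distinct start positions; the resulting spans can still overlap.
        starts = rng.choice(seq_len - mask_length + 1, size=num_spans, replace=False)
        for s in starts:
            mask[b, s : s + mask_length] = True
    return mask

mask = simple_mask_indices((2, 100), mask_prob=0.3, mask_length=10)
print(mask.shape, mask.sum(axis=-1))  # (2, 100); at most 30 masked frames per row
```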
code: def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length i...
docstring: Given input length, compute how many spans should be masked
func_name: compute_num_masked_span
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
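The truncated `compute_num_masked_span` body above closes over `mask_prob`, `mask_length`, `min_masks`, and `epsilon` (a small random jitter in the original's enclosing scope). A sketch with those values passed explicitly, plus the "spans must fit" cap the snippet's comment hints at:

```python
def compute_num_masked_span(input_length, mask_prob, mask_length, min_masks=0, epsilon=0.0):
    # Target roughly mask_prob of the timesteps, in spans of mask_length frames.
    num_masked_span = int(mask_prob * input_length / mask_length + epsilon)
    num_masked_span = max(num_masked_span, min_masks)
    # Make sure the spans actually fit inside the sequence.
    if num_masked_span * mask_length > input_length:
        num_masked_span = input_length // mask_length
    return num_masked_span

print(compute_num_masked_span(100, mask_prob=0.5, mask_length=10))               # → 5
print(compute_num_masked_span(8, mask_prob=0.9, mask_length=10, min_masks=1))    # → 0
```

The second call shows the cap overriding `min_masks` when even one span would not fit.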
code: def _mask_hidden_states( self, hidden_states: torch.FloatTensor, mask_time_indices: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, ): """ Masks extracted features along time axis and/or along feature axis according to [S...
docstring: Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779).
func_name: _mask_hidden_states
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optio...
docstring: mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in *config.proj_codevector_dim* space.
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
code: def compute_contrastive_logits( target_features: torch.FloatTensor, negative_features: torch.FloatTensor, predicted_features: torch.FloatTensor, temperature: int = 1, ): """ Compute logits for contrastive loss based on cosine similarity as the distance measure betw...
docstring: Compute logits for contrastive loss based on cosine similarity as the distance measure between `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
func_name: compute_contrastive_logits
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
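`compute_contrastive_logits` scores predicted features against the positive target and a set of negatives by cosine similarity, divided by a temperature. A NumPy sketch of that scoring; the shapes and variable names are illustrative, not the library's:

```python
import numpy as np

def contrastive_logits(target, negatives, predicted, temperature=1.0):
    # Stack [positive; negatives] along the candidate axis, take cosine
    # similarity with the predicted features, and scale by temperature.
    candidates = np.concatenate([target, negatives], axis=0)   # (1 + n_neg, seq, dim)
    dots = (candidates * predicted).sum(axis=-1)               # broadcast over candidates
    norms = np.linalg.norm(candidates, axis=-1) * np.linalg.norm(predicted, axis=-1)
    return dots / norms / temperature                          # (1 + n_neg, seq)

rng = np.random.default_rng(0)
predicted = rng.normal(size=(1, 4, 8))   # (1, seq_len, dim)
negatives = rng.normal(size=(5, 4, 8))   # 5 negative candidates per position
logits = contrastive_logits(predicted, negatives, predicted, temperature=0.1)
print(logits.shape)  # (6, 4)
# Row 0 is the positive: cosine similarity 1.0, so every entry ≈ 1 / 0.1 = 10.
```

A low temperature sharpens the distribution over candidates before the cross-entropy-style contrastive loss is applied.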
code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, UniSpeechForPreTraining...
docstring: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, UniSpeechForPreTraining >>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv") >>> model = UniSpeechForPreTraining.from_pretrained("micros...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def __init__(self, config, target_lang: Optional[str] = None): r""" target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechForCT...
docstring: target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechForCTC`] with adapters. Uses 'eng' by default.
func_name: __init__
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def tie_weights(self): """ This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when passing `target_lang=...` to `from_pretrained(...)`. This method is **not** supposed to be called by the user and is prone to be changed in the future....
docstring: This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when passing `target_lang=...` to `from_pretrained(...)`. This method is **not** supposed to be called by the user and is prone to be changed in the future.
func_name: tie_weights
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be r...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
code: def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech.parameters(): para...
docstring: Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
func_name: freeze_base_model
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None...
docstring: labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*): Labels for connectionist temporal classification. Note that `target_length` has to be smaller than or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0
code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech.parameters(): para...
docstring: Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
func_name: freeze_base_model
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None...
docstring: input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modeling_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modeling_unispeech.py
license: Apache-2.0

code: def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the convolutional layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken #...
docstring: Computes the output length of the convolutional layers
func_name: _get_feat_extract_output_lengths
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modular_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modular_unispeech.py
license: Apache-2.0

code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optio...
docstring: mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in *config.proj_codevector_dim* space.
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modular_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modular_unispeech.py
license: Apache-2.0

code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modular_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modular_unispeech.py
license: Apache-2.0
code: def compute_contrastive_logits( target_features: torch.FloatTensor, negative_features: torch.FloatTensor, predicted_features: torch.FloatTensor, temperature: int = 1, ): """ Compute logits for contrastive loss based on cosine similarity as the distance measure betw...
docstring: Compute logits for contrastive loss based on cosine similarity as the distance measure between `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
func_name: compute_contrastive_logits
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modular_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modular_unispeech.py
license: Apache-2.0
code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, UniSpeechForPreTraining...
docstring: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, UniSpeechForPreTraining >>> feature_extractor = AutoFeatureExtractor.from_pretrained("microsoft/unispeech-large-1500h-cv") >>> model = UniSpeechForPreTraining.from_pretrained("micros...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech/modular_unispeech.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech/modular_unispeech.py
license: Apache-2.0

code: def convert_s3prl_checkpoint(base_model_name, config_path, checkpoint_path, model_dump_path): """ Copy/paste/tweak model's weights to transformers design. """ checkpoint = torch.load(checkpoint_path, map_location="cpu", weights_only=True) downstream_dict = checkpoint["Downstream"] hf_config = ...
docstring: Copy/paste/tweak model's weights to transformers design.
func_name: convert_s3prl_checkpoint
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/convert_unispeech_original_s3prl_checkpoint_to_pytorch.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/convert_unispeech_original_s3prl_checkpoint_to_pytorch.py
license: Apache-2.0

code: def convert_unispeech_sat_checkpoint( checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True ): """ Copy/paste/tweak model's weights to transformers design. """ if config_path is not None: config = UniSpeechSatConfig.from_pretrained(config_path) el...
docstring: Copy/paste/tweak model's weights to transformers design.
func_name: convert_unispeech_sat_checkpoint
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/convert_unispeech_sat_original_pytorch_checkpoint_to_pytorch.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/convert_unispeech_sat_original_pytorch_checkpoint_to_pytorch.py
license: Apache-2.0

code: def __init__(self, config): """ Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed up training throughput. """ super().__init__() self.input_dim = config.adapter_attn_dim self.hidden_dim = config.hidden_si...
docstring: Implements adapter modules directly with 3D tensor weight as parameters and without using ModuleList to speed up training throughput.
func_name: __init__
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the convolutional layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken #...
docstring: Computes the output length of the convolutional layers
func_name: _get_feat_extract_output_lengths
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.LongTensor] = None, min_masks: int = 0, ) -> np.ndarray: """ Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method f...
docstring: Computes random mask spans for a given shape. Used to implement [SpecAugment: A Simple Data Augmentation Method for ASR](https://arxiv.org/abs/1904.08779). Note that this method is not optimized to run on TPU and should be run on CPU as part of the preprocessing during training. Args: shape: T...
func_name: _compute_mask_indices
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def compute_num_masked_span(input_length): """Given input length, compute how many spans should be masked""" num_masked_span = int(mask_prob * input_length / mask_length + epsilon) num_masked_span = max(num_masked_span, min_masks) # make sure num masked span <= sequence_length i...
docstring: Given input length, compute how many spans should be masked
func_name: compute_num_masked_span
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def _mask_hidden_states( self, hidden_states: torch.FloatTensor, mask_time_indices: Optional[torch.FloatTensor] = None, attention_mask: Optional[torch.LongTensor] = None, ): """ Masks extracted features along time axis and/or along feature axis according to [S...
docstring: Masks extracted features along time axis and/or along feature axis according to [SpecAugment](https://arxiv.org/abs/1904.08779).
func_name: _mask_hidden_states
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optio...
docstring: mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in *config.proj_codevector_dim* space.
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def compute_contrastive_logits( target_features: torch.FloatTensor, negative_features: torch.FloatTensor, predicted_features: torch.FloatTensor, temperature: int = 1, ): """ Compute logits for contrastive loss based on cosine similarity as the distance measure betw...
docstring: Compute logits for contrastive loss based on cosine similarity as the distance measure between `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
func_name: compute_contrastive_logits
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, UniSpeechSatForPreTrain...
docstring: Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, UniSpeechSatForPreTraining >>> from transformers.models.unispeech_sat.modeling_unispeech_sat import _compute_mask_indices >>> feature_extractor = AutoFeatureExtractor.from_pretrained...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def __init__(self, config, target_lang: Optional[str] = None): r""" target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatFo...
docstring: target_lang (`str`, *optional*): Language id of adapter weights. Adapter weights are stored in the format adapter.<lang>.safetensors or adapter.<lang>.bin. Only relevant when using an instance of [`UniSpeechSatForCTC`] with adapters. Uses 'eng' by default.
func_name: __init__
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def tie_weights(self): """ This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when passing `target_lang=...` to `from_pretrained(...)`. This method is **not** supposed to be called by the user and is prone to be changed in the future....
docstring: This method overwrites [`~PreTrainedModel.tie_weights`] so that adapter weights can be correctly loaded when passing `target_lang=...` to `from_pretrained(...)`. This method is **not** supposed to be called by the user and is prone to be changed in the future.
func_name: tie_weights
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be r...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech_sat.parameters(): ...
docstring: Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
func_name: freeze_base_model
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None...
docstring: labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*): Labels for connectionist temporal classification. Note that `target_length` has to be smaller than or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size -...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
code: def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
docstring: Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
func_name: freeze_feature_extractor
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech_sat.parameters(): ...
docstring: Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
func_name: freeze_base_model
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0

code: def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None...
docstring: input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta...
func_name: forward
language: python
repo: huggingface/transformers
path: src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
url: https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
license: Apache-2.0
def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be r...
Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
freeze_feature_extractor
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech_sat.parameters(): ...
Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
freeze_base_model
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None...
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta...
forward
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be r...
Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
freeze_feature_extractor
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.unispeech_sat.parameters(): ...
Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated.
freeze_base_model
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def _get_tdnn_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the TDNN layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken # from https://pyt...
Computes the output length of the TDNN layers
_get_tdnn_output_lengths
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
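The truncated `_conv_out_length` helper above applies the standard 1D-convolution output-length formula from the PyTorch `Conv1d` docs. A minimal sketch (the kernel/stride schedule below is illustrative, not the model's actual configuration):

```python
def conv_out_length(input_length: int, kernel_size: int, stride: int) -> int:
    # 1D conv output length with padding=0, dilation=1 (PyTorch Conv1d docs):
    # L_out = floor((L_in - kernel_size) / stride) + 1
    return (input_length - kernel_size) // stride + 1

# Chained over a wav2vec2-style feature-encoder stack (illustrative values),
# 1 second of 16 kHz audio downsamples to 49 frames:
length = 16000
for kernel, stride in [(10, 5), (3, 2), (3, 2), (3, 2), (3, 2), (2, 2), (2, 2)]:
    length = conv_out_length(length, kernel, stride)
print(length)  # 49
```

The same formula is reused per layer, so the total downsampling factor is just the composition of the per-layer strides.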
def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None...
input_values (`torch.FloatTensor` of shape `(batch_size, sequence_length)`): Float values of input raw speech waveform. Values can be obtained by loading a `.flac` or `.wav` audio file into an array of type `List[float]` or a `numpy.ndarray`, *e.g.* via the soundfile library (`pip insta...
forward
python
huggingface/transformers
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
Apache-2.0
def _get_feat_extract_output_lengths(self, input_lengths: Union[torch.LongTensor, int]): """ Computes the output length of the convolutional layers """ def _conv_out_length(input_length, kernel_size, stride): # 1D convolutional layer output length formula taken #...
Computes the output length of the convolutional layers
_get_feat_extract_output_lengths
python
huggingface/transformers
src/transformers/models/unispeech_sat/modular_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modular_unispeech_sat.py
Apache-2.0
def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, mask_time_indices: Optional[torch.FloatTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optio...
mask_time_indices (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*): Indices to mask extracted features for contrastive loss. When in training mode, model learns to predict masked extracted features in *config.proj_codevector_dim* space.
forward
python
huggingface/transformers
src/transformers/models/unispeech_sat/modular_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modular_unispeech_sat.py
Apache-2.0
def freeze_feature_extractor(self): """ Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training. """ warnings.warn( "The method `freeze_feature_extractor` is deprecated and will be ...
Calling this function will disable the gradient computation for the feature encoder so that its parameters will not be updated during training.
freeze_feature_extractor
python
huggingface/transformers
src/transformers/models/unispeech_sat/modular_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modular_unispeech_sat.py
Apache-2.0
def compute_contrastive_logits( target_features: torch.FloatTensor, negative_features: torch.FloatTensor, predicted_features: torch.FloatTensor, temperature: int = 1, ): """ Compute logits for contrastive loss using cosine similarity as the distance measure betw...
Compute logits for contrastive loss using cosine similarity as the distance measure between `[positive_feature, negative_features]` and `[predicted_features]`. Additionally, temperature can be applied.
compute_contrastive_logits
python
huggingface/transformers
src/transformers/models/unispeech_sat/modular_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modular_unispeech_sat.py
Apache-2.0
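The cosine-similarity contrastive-logits idea can be sketched in numpy (the shapes, the `temperature` default, and the `contrastive_logits` name here are assumptions for illustration, not the repo's exact signature):

```python
import numpy as np

def contrastive_logits(target, negatives, predicted, temperature=0.1):
    # Stack the positive target above the negatives: (1 + num_negatives, dim)
    candidates = np.concatenate([target[None], negatives], axis=0)
    # Cosine similarity between each candidate and the predicted feature
    dots = np.sum(candidates * predicted, axis=-1)
    norms = np.linalg.norm(candidates, axis=-1) * np.linalg.norm(predicted)
    logits = dots / np.maximum(norms, 1e-8)
    # Temperature scaling sharpens or flattens the resulting distribution
    return logits / temperature

logits = contrastive_logits(
    target=np.array([1.0, 0.0]),
    negatives=np.array([[0.0, 1.0]]),
    predicted=np.array([1.0, 0.0]),
)
# logits[0] is the positive logit (cosine 1.0 / temperature), logits[1] the negative
```

Index 0 of the output is always the positive pair, which is what a cross-entropy loss with target class 0 expects.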
def forward( self, input_values: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, UniSpeechSatForPreTrain...
Example: ```python >>> import torch >>> from transformers import AutoFeatureExtractor, UniSpeechSatForPreTraining >>> from transformers.models.unispeech_sat.modeling_unispeech_sat import _compute_mask_indices >>> feature_extractor = AutoFeatureExtractor.from_pretrained...
forward
python
huggingface/transformers
src/transformers/models/unispeech_sat/modular_unispeech_sat.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/unispeech_sat/modular_unispeech_sat.py
Apache-2.0
def mel_spectrogram(self, waveform: np.ndarray) -> np.ndarray: """ Calculates log MEL spectrograms from a batch of waveforms. Note that the input waveform(s) will be padded by `(self.n_fft - self.hop_length) // 2` on both sides using the `reflect` padding mode. Args: wavef...
Calculates log MEL spectrograms from a batch of waveforms. Note that the input waveform(s) will be padded by `(self.n_fft - self.hop_length) // 2` on both sides using the `reflect` padding mode. Args: waveform (`np.ndarray` of shape `(length,)`): The input wavefor...
mel_spectrogram
python
huggingface/transformers
src/transformers/models/univnet/feature_extraction_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/feature_extraction_univnet.py
Apache-2.0
def generate_noise( self, noise_length: int, generator: Optional[np.random.Generator] = None, ) -> np.ndarray: """ Generates a random noise sequence of standard Gaussian noise for use in the `noise_sequence` argument of [`UnivNetModel.forward`]. Args: ...
Generates a random noise sequence of standard Gaussian noise for use in the `noise_sequence` argument of [`UnivNetModel.forward`]. Args: noise_length (`int`): The length (dim 0) of the generated noise. model_in_channels (`int`, *optional*, defaults...
generate_noise
python
huggingface/transformers
src/transformers/models/univnet/feature_extraction_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/feature_extraction_univnet.py
Apache-2.0
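Generating reproducible standard Gaussian noise with an optional `np.random.Generator` can be sketched as follows (the `model_in_channels` default is an assumption for illustration):

```python
import numpy as np

def generate_noise(noise_length, model_in_channels=64, generator=None):
    # Standard Gaussian noise of shape (noise_length, model_in_channels);
    # passing a seeded np.random.Generator makes the draw reproducible
    if generator is None:
        generator = np.random.default_rng()
    return generator.standard_normal(
        (noise_length, model_in_channels), dtype=np.float32
    )

noise = generate_noise(5, model_in_channels=3, generator=np.random.default_rng(0))
print(noise.shape)  # (5, 3)
```

Seeding the generator is what makes vocoder outputs deterministic across runs; with `generator=None` every call draws fresh noise.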
def batch_decode(self, waveforms, waveform_lengths=None) -> List[np.ndarray]: r""" Removes padding from generated audio after running [`UnivNetModel.forward`]. This returns a ragged list of 1D audio waveform arrays and not a single tensor/array because in general the waveforms will have differen...
Removes padding from generated audio after running [`UnivNetModel.forward`]. This returns a ragged list of 1D audio waveform arrays and not a single tensor/array because in general the waveforms will have different lengths after removing padding. Args: waveforms (`torch.Flo...
batch_decode
python
huggingface/transformers
src/transformers/models/univnet/feature_extraction_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/feature_extraction_univnet.py
Apache-2.0
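The unpadding logic in `batch_decode` amounts to slicing each waveform down to its true length, which yields a ragged list. A pure-Python sketch of the idea (the name mirrors the method above; tensor handling and validation are omitted):

```python
def batch_decode(waveforms, waveform_lengths=None):
    # With no lengths given, assume nothing was padded
    if waveform_lengths is None:
        return list(waveforms)
    # Otherwise slice away the padding; results may have different lengths,
    # which is why a ragged list is returned instead of a single array
    return [w[:length] for w, length in zip(waveforms, waveform_lengths)]

print(batch_decode([[1, 2, 3, 0, 0], [4, 5, 0, 0, 0]], [3, 2]))  # [[1, 2, 3], [4, 5]]
```

The same slicing works unchanged on numpy arrays or torch tensors, since both support `w[:length]`.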
def __call__( self, raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]], sampling_rate: Optional[int] = None, padding: Union[bool, str, PaddingStrategy] = True, max_length: Optional[int] = None, truncation: bool = True, pad_to_multiple_...
Main method to featurize and prepare for the model one or several sequence(s). Args: raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`): The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float ...
__call__
python
huggingface/transformers
src/transformers/models/univnet/feature_extraction_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/feature_extraction_univnet.py
Apache-2.0
def forward(self, spectrogram: torch.FloatTensor): """ Maps a conditioning log-mel spectrogram to a tensor of convolutional kernels and biases, for use in location variable convolutional layers. Note that the input spectrogram should have shape (batch_size, input_channels, seq_length). ...
Maps a conditioning log-mel spectrogram to a tensor of convolutional kernels and biases, for use in location variable convolutional layers. Note that the input spectrogram should have shape (batch_size, input_channels, seq_length). Args: spectrogram (`torch.FloatTensor` of ...
forward
python
huggingface/transformers
src/transformers/models/univnet/modeling_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/modeling_univnet.py
Apache-2.0
def forward( self, input_features: torch.FloatTensor, noise_sequence: Optional[torch.FloatTensor] = None, padding_mask: Optional[torch.FloatTensor] = None, generator: Optional[torch.Generator] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple[torch.FloatTen...
input_features (`torch.FloatTensor`): Tensor containing the log-mel spectrograms. Can be batched and of shape `(batch_size, sequence_length, config.num_mel_channels)`, or un-batched and of shape `(sequence_length, config.num_mel_channels)`. noise_sequence (`torch.FloatTensor`, *...
forward
python
huggingface/transformers
src/transformers/models/univnet/modeling_univnet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/univnet/modeling_univnet.py
Apache-2.0
def forward( self, pixel_values: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, labels: Optional[torch.Tensor] = None, return_dict: Optional[bool] = None, ) -> Union[tuple, SemanticSegmenterOutput]...
labels (`torch.LongTensor` of shape `(batch_size, height, width)`, *optional*): Ground truth semantic segmentation maps for computing the loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels > 1`, a classification loss is computed (Cross-Entropy). ...
forward
python
huggingface/transformers
src/transformers/models/upernet/modeling_upernet.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/upernet/modeling_upernet.py
Apache-2.0
def resize( self, image: np.ndarray, size: Dict[str, int], resample: PILImageResampling = PILImageResampling.BILINEAR, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> ...
Resize an image. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): Size of the output image. If `size` is of the form `{"height": h, "width": w}`, the output image will have the size `(h, w)`. If `size` is of t...
resize
python
huggingface/transformers
src/transformers/models/videomae/image_processing_videomae.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/videomae/image_processing_videomae.py
Apache-2.0
def preprocess( self, videos: ImageInput, do_resize: Optional[bool] = None, size: Optional[Dict[str, int]] = None, resample: PILImageResampling = None, do_center_crop: Optional[bool] = None, crop_size: Optional[Dict[str, int]] = None, do_rescale: Optional[...
Preprocess an image or batch of images. Args: images (`ImageInput`): Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`. ...
preprocess
python
huggingface/transformers
src/transformers/models/videomae/image_processing_videomae.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/videomae/image_processing_videomae.py
Apache-2.0
def forward( self, pixel_values: torch.FloatTensor, bool_masked_pos: Optional[torch.BoolTensor] = None, head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = N...
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, sequence_length)`, *optional*): Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). Each video in the batch must have the same number of masked patches. If `None`, then all patches are consider...
forward
python
huggingface/transformers
src/transformers/models/videomae/modeling_videomae.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/videomae/modeling_videomae.py
Apache-2.0
def forward( self, pixel_values: torch.FloatTensor, bool_masked_pos: torch.BoolTensor, head_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Uni...
bool_masked_pos (`torch.BoolTensor` of shape `(batch_size, sequence_length)`): Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). Each video in the batch must have the same number of masked patches. Sequence length is `(num_frames // tubelet_size) * ...
forward
python
huggingface/transformers
src/transformers/models/videomae/modeling_videomae.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/videomae/modeling_videomae.py
Apache-2.0
def forward( self, pixel_values: Optional[torch.Tensor] = None, head_mask: Optional[torch.Tensor] = None, labels: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = No...
labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for computing the image classification/regression loss. Indices should be in `[0, ..., config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If `config.n...
forward
python
huggingface/transformers
src/transformers/models/videomae/modeling_videomae.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/videomae/modeling_videomae.py
Apache-2.0
def resize( self, image: np.ndarray, size: Dict[str, int], resample: PILImageResampling = PILImageResampling.BICUBIC, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> n...
Resize an image. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): Size of the output im...
resize
python
huggingface/transformers
src/transformers/models/video_llava/image_processing_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/image_processing_video_llava.py
Apache-2.0
def preprocess( self, images: Optional[List[ImageInput]] = None, videos: Optional[List[VideoInput]] = None, do_resize: Optional[bool] = None, size: Optional[Dict[str, int]] = None, resample: PILImageResampling = None, do_center_crop: Optional[bool] = None, ...
Preprocess an image or batch of images. Args: images (`ImageInput`, *optional*): List of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescal...
preprocess
python
huggingface/transformers
src/transformers/models/video_llava/image_processing_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/image_processing_video_llava.py
Apache-2.0
def get_image_features( self, pixel_values_images: torch.FloatTensor, vision_feature_layer: Optional[Union[int, List[int]]] = None, vision_feature_select_strategy: Optional[str] = None, ): """ Obtains image last hidden states from the vision tower and applies multimodal...
Obtains image last hidden states from the vision tower and applies multimodal projection. Args: pixel_values_images (`torch.FloatTensor` of shape `(batch_size, channels, height, width)`): The tensors corresponding to the input images. vision_feature_layer (`Union[i...
get_image_features
python
huggingface/transformers
src/transformers/models/video_llava/modeling_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/modeling_video_llava.py
Apache-2.0
def get_video_features( self, pixel_values_videos: torch.FloatTensor, vision_feature_layer: Optional[Union[int, List[int]]] = None, ): """ Obtains video last hidden states from the vision tower and applies multimodal projection. Args: pixel_values_videos (`...
Obtains video last hidden states from the vision tower and applies multimodal projection. Args: pixel_values_videos (`torch.FloatTensor` of shape `(batch_size, num_frames, channels, height, width)`): The tensors corresponding to the input videos. vision_feature_lay...
get_video_features
python
huggingface/transformers
src/transformers/models/video_llava/modeling_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/modeling_video_llava.py
Apache-2.0
def forward( self, input_ids: torch.LongTensor = None, pixel_values_images: torch.FloatTensor = None, pixel_values_videos: torch.FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Op...
pixel_values_images (`torch.FloatTensor` of shape `(batch_size, num_channels, image_size, image_size)`): The tensors corresponding to the input images. Pixel values can be obtained using [`AutoImageProcessor`]. See [`VideoLlavaImageProcessor.__call__`] for details ([`LlavaProcessor`] us...
forward
python
huggingface/transformers
src/transformers/models/video_llava/modeling_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/modeling_video_llava.py
Apache-2.0
def _prepare_4d_causal_attention_mask_with_cache_position( attention_mask: torch.Tensor, sequence_length: int, target_length: int, dtype: torch.dtype, cache_position: torch.Tensor, batch_size: int, **kwargs, ): """ Creates a causal 4D mask of s...
Creates a causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape `(batch_size, key_value_length)`, or if the input `attention_mask` is already 4D, do nothing. Args: attention_mask (`torch.Tensor`): A 2D attention mask of sh...
_prepare_4d_causal_attention_mask_with_cache_position
python
huggingface/transformers
src/transformers/models/video_llava/modeling_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/modeling_video_llava.py
Apache-2.0
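The mask construction described above can be sketched in numpy, assuming no KV cache (query length equals key length, so `cache_position` is not needed) and a fixed large negative fill value instead of the dtype minimum:

```python
import numpy as np

def prepare_4d_causal_mask(attention_mask, sequence_length, target_length,
                           masked_value=-1e9):
    # attention_mask: (batch, target_length), 1 = attend, 0 = padding
    batch_size = attention_mask.shape[0]
    # Causal part: query position i may only attend to key positions j <= i,
    # so everything strictly above the diagonal is masked
    causal = np.triu(np.ones((sequence_length, target_length)), k=1)
    mask = causal[None, None] * masked_value  # (1, 1, q, kv), additive mask
    mask = np.broadcast_to(
        mask, (batch_size, 1, sequence_length, target_length)
    ).copy()
    # Merge the 2D padding mask: padded key positions are masked for every query
    padding = attention_mask[:, None, None, :] == 0
    mask[np.broadcast_to(padding, mask.shape)] = masked_value
    return mask

mask = prepare_4d_causal_mask(np.array([[1, 1, 0]]), 3, 3)
```

Entries of 0 mean "attend"; entries of `masked_value` are added to the attention scores before softmax, driving those positions to near-zero probability.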
def __call__( self, text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, images: ImageInput = None, videos: ImageInput = None, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = Non...
Main method to prepare for the model one or several sequences(s) and image(s). This method forwards the `text` and `kwargs` arguments to LlamaTokenizerFast's [`~LlamaTokenizerFast.__call__`] if `text` is not `None` to encode the text. To prepare the image(s), this method forwards the `images` a...
__call__
python
huggingface/transformers
src/transformers/models/video_llava/processing_video_llava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/video_llava/processing_video_llava.py
Apache-2.0
def convert_vilt_checkpoint(checkpoint_url, pytorch_dump_folder_path): """ Copy/paste/tweak model's weights to our ViLT structure. """ # define configuration and initialize HuggingFace model config = ViltConfig(image_size=384, patch_size=32, tie_word_embeddings=False) mlm_model = False vqa_...
Copy/paste/tweak model's weights to our ViLT structure.
convert_vilt_checkpoint
python
huggingface/transformers
src/transformers/models/vilt/convert_vilt_original_to_pytorch.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/convert_vilt_original_to_pytorch.py
Apache-2.0
def make_pixel_mask( image: np.ndarray, output_size: Tuple[int, int], input_data_format: Optional[Union[str, ChannelDimension]] = None ) -> np.ndarray: """ Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding. Args: image (`np.ndarray`): Image to ...
Make a pixel mask for the image, where 1 indicates a valid pixel and 0 indicates padding. Args: image (`np.ndarray`): Image to make the pixel mask for. output_size (`Tuple[int, int]`): Output size of the mask.
make_pixel_mask
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
def get_max_height_width( images: List[np.ndarray], input_data_format: Optional[Union[str, ChannelDimension]] = None ) -> List[int]: """ Get the maximum height and width across all images in a batch. """ if input_data_format is None: input_data_format = infer_channel_dimension_format(images[...
Get the maximum height and width across all images in a batch.
get_max_height_width
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs): """ Overrides the `from_dict` method from the base class to make sure `pad_and_return_pixel_mask` is updated if the image processor is created using `from_dict` and kwargs, e.g. `ViltImageProcessor.from_pretrained(checkpoint, p...
Overrides the `from_dict` method from the base class to make sure `pad_and_return_pixel_mask` is updated if the image processor is created using `from_dict` and kwargs, e.g. `ViltImageProcessor.from_pretrained(checkpoint, pad_and_return_pixel_mask=False)`.
from_dict
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
def resize( self, image: np.ndarray, size: Dict[str, int], size_divisor: int = 32, resample: PILImageResampling = PILImageResampling.BICUBIC, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = No...
Resize an image. Resizes the shorter side of the image to `size["shortest_edge"]` while preserving the aspect ratio. If the longer side is larger than the max size `(int(`size["shortest_edge"]` * 1333 / 800))`, the longer side is then resized to the max size while preserving the aspect...
resize
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
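The shortest-edge/longest-edge resize rule in the docstring above can be sketched as pure size arithmetic (floor rounding and the divisor snapping below are assumptions about tie-breaking, not the exact repo implementation):

```python
def get_resize_output_size(height, width, shortest_edge, size_divisor=32):
    # Longest side is capped at shortest_edge * 1333 / 800, per the docstring
    max_long = int(shortest_edge * 1333 / 800)
    # Scale so the shorter side reaches shortest_edge, preserving aspect ratio
    scale = shortest_edge / min(height, width)
    # If that overshoots the cap, rescale from the longer side instead
    if max(height, width) * scale > max_long:
        scale = max_long / max(height, width)
    new_h, new_w = int(height * scale), int(width * scale)
    # Snap down to a multiple of size_divisor so downstream patching divides evenly
    return new_h // size_divisor * size_divisor, new_w // size_divisor * size_divisor

print(get_resize_output_size(400, 200, 100))  # (160, 64)
```

Note that both branches preserve the aspect ratio; only the divisor snapping at the end can distort it slightly.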
def _pad_image( self, image: np.ndarray, output_size: Tuple[int, int], constant_values: Union[float, Iterable[float]] = 0, data_format: Optional[ChannelDimension] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, ) -> np.ndarray: """ ...
Pad an image with zeros to the given size.
_pad_image
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
def pad( self, images: List[np.ndarray], constant_values: Union[float, Iterable[float]] = 0, return_pixel_mask: bool = True, return_tensors: Optional[Union[str, TensorType]] = None, data_format: Optional[ChannelDimension] = None, input_data_format: Optional[Union[...
Pads a batch of images on the bottom and right with zeros to the size of the largest height and width in the batch, and optionally returns their corresponding pixel masks. Args: images (`List[np.ndarray]`): Batch of images to pad. constant_values (`float` or `Iter...
pad
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
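The pad-plus-pixel-mask pipeline described by `get_max_height_width`, `_pad_image`, and `pad` above can be sketched in numpy (channels-last layout, a zero pad constant, and the `pad_batch` name are assumptions for illustration):

```python
import numpy as np

def pad_batch(images, constant_value=0.0):
    # images: list of (H, W, C) arrays; pad bottom/right up to the batch max
    max_h = max(img.shape[0] for img in images)
    max_w = max(img.shape[1] for img in images)
    padded, masks = [], []
    for img in images:
        h, w = img.shape[:2]
        pad_spec = ((0, max_h - h), (0, max_w - w), (0, 0))
        padded.append(np.pad(img, pad_spec, constant_values=constant_value))
        mask = np.zeros((max_h, max_w), dtype=np.int64)
        mask[:h, :w] = 1  # 1 = valid pixel, 0 = padding
        masks.append(mask)
    return np.stack(padded), np.stack(masks)

batch, pixel_masks = pad_batch([np.ones((2, 2, 3)), np.ones((3, 4, 3))])
print(batch.shape)  # (2, 3, 4, 3)
```

The pixel mask is what lets the model's attention ignore the zero-padded regions downstream.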
def preprocess( self, images: ImageInput, do_resize: Optional[bool] = None, size: Optional[Dict[str, int]] = None, size_divisor: Optional[int] = None, resample: PILImageResampling = None, do_rescale: Optional[bool] = None, rescale_factor: Optional[float] =...
Preprocess an image or batch of images. Args: images (`ImageInput`): Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set `do_rescale=False`. ...
preprocess
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt.py
Apache-2.0
def _preprocess( self, images: list["torch.Tensor"], do_resize: bool, size: SizeDict, interpolation: Optional["F.InterpolationMode"], size_divisor: Optional[int], do_pad: bool, do_rescale: bool, rescale_factor: float, do_normalize: bool, ...
Preprocess an image or batch of images. This method overrides the base class method to include padding and pixel mask generation.
_preprocess
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt_fast.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt_fast.py
Apache-2.0
def resize( self, images: "torch.Tensor", size: SizeDict, interpolation: Optional["F.InterpolationMode"] = None, size_divisor: Optional[int] = None, ) -> "torch.Tensor": """ Resize an image or batch of images to specified size. Args: image...
Resize an image or batch of images to specified size. Args: images (`torch.Tensor`): Image or batch of images to resize. size (`Dict[str, int]`): Size dictionary with shortest_edge key. interpolation (`F.InterpolationMode`, *optional*): Interpolation method to use. ...
resize
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt_fast.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt_fast.py
Apache-2.0
def _pad_batch( self, images: list["torch.Tensor"], return_tensors: Optional[Union[str, TensorType]], ) -> tuple: """ Pad a batch of images to the same size based on the maximum dimensions. Args: images (`list[torch.Tensor]`): List of images to pad. ...
Pad a batch of images to the same size based on the maximum dimensions. Args: images (`list[torch.Tensor]`): List of images to pad. return_tensors (`str` or `TensorType`, *optional*): The type of tensors to return. Returns: `tuple`: Tuple containing padded ...
_pad_batch
python
huggingface/transformers
src/transformers/models/vilt/image_processing_vilt_fast.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/image_processing_vilt_fast.py
Apache-2.0
def __init__(self, config, add_pooling_layer=True): r""" add_pooling_layer (bool, *optional*, defaults to `True`): Whether to add a pooling layer """ super().__init__(config) self.config = config self.embeddings = ViltEmbeddings(config) self.encoder =...
add_pooling_layer (bool, *optional*, defaults to `True`): Whether to add a pooling layer
__init__
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, pixel_values: Optional[torch.FloatTensor] = None, pixel_mask: Optional[torch.LongTensor] = None, ...
image_embeds (`torch.FloatTensor` of shape `(batch_size, num_patches, hidden_size)`, *optional*): Optionally, instead of passing `pixel_values`, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert `pixel_values` into pa...
forward
python
huggingface/transformers
src/transformers/models/vilt/modeling_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/modeling_vilt.py
Apache-2.0
def __call__( self, images, text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, add_special_tokens: bool = True, padding: Union[bool, str, PaddingStrategy] = False, truncation: Union[bool, str, TruncationStrategy] = None, ma...
This method uses [`ViltImageProcessor.__call__`] to prepare image(s) for the model, and [`BertTokenizerFast.__call__`] to prepare text for the model. Please refer to the docstrings of these two methods for more information.
__call__
python
huggingface/transformers
src/transformers/models/vilt/processing_vilt.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vilt/processing_vilt.py
Apache-2.0
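The processor record above combines an image processor and a tokenizer behind one `__call__`. A toy illustration of that delegation pattern (this is not the real `ViltProcessor`; the class and the stand-in components are hypothetical):

```python
class ToyProcessor:
    """Combines a tokenizer and an image processor: each produces part of
    the encoding, and the two dicts are merged into one model input."""

    def __init__(self, image_processor, tokenizer):
        self.image_processor = image_processor
        self.tokenizer = tokenizer

    def __call__(self, images, text):
        # Text side contributes e.g. input_ids / attention_mask.
        encoding = dict(self.tokenizer(text))
        # Image side contributes e.g. pixel_values / pixel_mask.
        encoding.update(self.image_processor(images))
        return encoding


# Stand-in components so the sketch runs without transformers installed.
toy_tokenizer = lambda text: {"input_ids": [ord(c) for c in text]}
toy_image_processor = lambda images: {"pixel_values": images}

proc = ToyProcessor(toy_image_processor, toy_tokenizer)
out = proc([[0.1, 0.2]], "hi")
```

In the real class, keyword arguments such as `padding`, `truncation`, and `max_length` are forwarded to the tokenizer, while image-specific options go to the image processor.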
def get_image_features( self, pixel_values: torch.FloatTensor, vision_feature_layers: Optional[Union[int, List[int]]] = None ): """ Obtains image last hidden states from the vision tower and applies the multimodal projection. Args: pixel_values (`torch.FloatTensor` of shape `...
Obtains image last hidden states from the vision tower and applies the multimodal projection. Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, channels, height, width)`): The tensors corresponding to the input images. vision_feature_layers (`Union[int, Li...
get_image_features
python
huggingface/transformers
src/transformers/models/vipllava/modeling_vipllava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modeling_vipllava.py
Apache-2.0
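The `vision_feature_layers` parameter in the record above selects either a single hidden-state layer or several layers whose features are combined. A hedged sketch of that selection logic, using plain lists instead of tensors (the helper name `select_vision_features` is an illustration, not the library API):

```python
def select_vision_features(hidden_states, vision_feature_layers):
    """hidden_states: list indexed by layer; each layer is a list of
    per-patch feature vectors (lists of floats).

    An int selects one layer; a list of ints concatenates the chosen
    layers along the feature dimension, patch by patch."""
    if isinstance(vision_feature_layers, int):
        return hidden_states[vision_feature_layers]
    selected = [hidden_states[i] for i in vision_feature_layers]
    num_patches = len(selected[0])
    # Concatenate the chosen layers' features for each patch.
    return [sum((layer[p] for layer in selected), []) for p in range(num_patches)]
```

In the real model this happens on `torch.Tensor`s (concatenation along the hidden dimension) before the multimodal projection is applied.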
def forward( self, input_ids: torch.LongTensor = None, pixel_values: torch.FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds:...
vision_feature_layers (`Union[int, List[int]]`, *optional*): The vision feature layer, or a list of layer indexes from which to select the vision features.
forward
python
huggingface/transformers
src/transformers/models/vipllava/modeling_vipllava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modeling_vipllava.py
Apache-2.0
def forward( self, input_ids: torch.LongTensor = None, pixel_values: torch.FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds:...
labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored ...
forward
python
huggingface/transformers
src/transformers/models/vipllava/modeling_vipllava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modeling_vipllava.py
Apache-2.0
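The `labels` record above uses the `-100` convention: positions whose label is `-100` are excluded from the language-modeling loss. A small sketch of that masking, with per-position log-probabilities as plain dicts (the function `masked_nll` is illustrative, not the library's loss):

```python
import math


def masked_nll(log_probs, labels, ignore_index=-100):
    """log_probs: per-position dict {token_id: log probability};
    labels: list of token ids or ignore_index.

    Returns the average negative log-likelihood over the positions whose
    label is not ignore_index; ignored positions contribute nothing."""
    losses = [-lp[y] for lp, y in zip(log_probs, labels) if y != ignore_index]
    return sum(losses) / len(losses) if losses else 0.0
```

In the real model this is what `torch.nn.CrossEntropyLoss` does with its `ignore_index` argument, which defaults to `-100`.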
def get_image_features( self, pixel_values: torch.FloatTensor, vision_feature_layers: Optional[Union[int, List[int]]] = None ): """ Obtains image last hidden states from the vision tower and applies the multimodal projection. Args: pixel_values (`torch.FloatTensor` of shape `...
Obtains image last hidden states from the vision tower and applies the multimodal projection. Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, channels, height, width)`): The tensors corresponding to the input images. vision_feature_layers (`Union[int, Li...
get_image_features
python
huggingface/transformers
src/transformers/models/vipllava/modular_vipllava.py
https://github.com/huggingface/transformers/blob/master/src/transformers/models/vipllava/modular_vipllava.py
Apache-2.0